This is more of a general question about whether this even makes sense. Say I have a classification algorithm (it is based on research and its parameters are fixed, so it is not probabilistic).
Now say I use this algorithm to classify some data and I determine which predictions are correct (denoted 1) and which are incorrect (denoted 0). So I have input data that goes into my algorithm and leads to one of two outcomes: a good prediction or a bad prediction.
If I train a Bayesian logistic regression model on this data to estimate the probability of a good prediction, does that make sense, especially in higher dimensions, where the data will not be easily linearly separable? And if it does make sense, would it also make sense for a standard logistic regression model?
Perhaps this is a bit of an odd question, but I want to be able to tell how certain I can be about a prediction of the algorithm, and since the algorithm is already fixed I cannot make it Bayesian itself.
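To make the setup concrete, here is a minimal sketch of what I mean, using a standard (non-Bayesian) logistic regression from scikit-learn; `fixed_classifier` and the synthetic data are just placeholders for my actual deterministic algorithm and inputs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical stand-in for the fixed, rule-based classification algorithm.
def fixed_classifier(X):
    return (X[:, 0] + X[:, 1] > 0).astype(int)

# Synthetic inputs and true labels (the true labels are noisy, so the
# fixed classifier is sometimes wrong).
X = rng.normal(size=(500, 2))
y_true = (X[:, 0] + 0.5 * X[:, 1]
          + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Meta-label: 1 where the fixed classifier was correct, 0 where it was not.
correct = (fixed_classifier(X) == y_true).astype(int)

# Second-stage model: logistic regression predicting P(correct | input).
meta = LogisticRegression().fit(X, correct)
p_correct = meta.predict_proba(X)[:, 1]  # per-input confidence estimate
```

The question is whether `p_correct` is a sensible confidence measure for the fixed algorithm, given that the correct/incorrect regions may not be linearly separable in the input space.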