Does this make sense? Verifying a classification algorithm with a Bayesian logistic regression model

This is more of a general overview question about whether this even makes sense. Say I have a classification algorithm. (It is based on research and its parameters are fixed, so it is not probabilistic.)

Now say I use this algorithm to classify some data, and I determine which predictions are correct (denoted as 1) and which are false (denoted as 0). Now I have input data that go into my algorithm and lead to two possible outcomes: a good prediction and a bad prediction.

If I train a Bayesian logistic regression model on this data to determine the probability of a good prediction, does that make sense? Especially in higher dimensions, as the data would not be easily linearly separable. If it does make sense, would it also make sense for a standard logistic regression model?
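To make the idea concrete, here is a minimal sketch of the meta-model approach, using plain (non-Bayesian) logistic regression fit by gradient descent, and toy data standing in for your classifier's inputs and its correct/incorrect labels (everything here is illustrative, not your actual setup):

```python
import numpy as np

# Hypothetical setup: X holds the inputs that were fed to the fixed
# classifier, y marks whether its prediction was correct (1) or not (0).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# Toy correctness labels: assume, purely for illustration, that the
# algorithm fails more often when the first feature is large.
y = (rng.random(200) > 1 / (1 + np.exp(-X[:, 0]))).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Plain logistic regression fit by gradient descent on the
# cross-entropy loss; the last column of Xb is the intercept term.
Xb = np.hstack([X, np.ones((len(X), 1))])
w = np.zeros(Xb.shape[1])
for _ in range(2000):
    p = sigmoid(Xb @ w)
    w -= 0.1 * Xb.T @ (p - y) / len(y)

# Estimated chance that the fixed algorithm is correct on a new input.
x_new = np.array([0.5, -1.0, 0.2, 1.0])  # last entry = intercept
print("P(good prediction):", sigmoid(x_new @ w))
```

A Bayesian version would put a prior on w and report a posterior over the predicted probability instead of a point estimate, but the structure of the meta-model is the same.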

Perhaps this is a bit of an odd question, but I want to be able to tell how certain I can be about a prediction of the algorithm. As the algorithm is already determined, I cannot make it a Bayesian algorithm.

If I understand correctly, you have a non-probabilistic classifier, and you want to somehow extract the uncertainty of its decisions?

Yes, indeed. That is my aim: to quantify the certainty of a specific classification.

I see. For your purpose, my suggestion is to introduce small random noise to the input and see how much the classification label fluctuates. If you are working with something like a neural network, you can use dropout to get the uncertainty of the classification label. The dropout idea can be used in non-neural-network models as well: for example, if your classifier has a set of weights, you can manually set some of them to zero and see how the classification label changes.
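A rough sketch of both probes, using a made-up linear classifier as a stand-in for the fixed algorithm (the weights, noise scale, and drop rate are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the fixed, non-probabilistic classifier: any function
# mapping a feature vector to a hard label would work here.
WEIGHTS = np.array([1.5, -2.0, 0.7])

def classify(x, weights=WEIGHTS):
    return int(x @ weights > 0)

x = np.array([0.2, -0.4, 1.1])
base_label = classify(x)

# 1) Input-noise probing: perturb the input many times and record
#    how often the label agrees with the unperturbed one.
noise_labels = [classify(x + rng.normal(scale=0.1, size=x.shape))
                for _ in range(500)]
print("agreement under input noise:",
      np.mean(np.array(noise_labels) == base_label))

# 2) Dropout-style probing: zero out a random subset of the
#    classifier's weights and see how stable the label is.
drop_labels = []
for _ in range(500):
    mask = rng.random(WEIGHTS.shape) > 0.2   # drop ~20% of weights
    drop_labels.append(classify(x, WEIGHTS * mask))
print("agreement under weight dropout:",
      np.mean(np.array(drop_labels) == base_label))
```

The agreement rates act as a crude confidence score: a label that survives most perturbations is one the fixed classifier is stable about, while a label that flips often sits near a decision boundary.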