Hello,
As the topic title says, I am interested in modelling the uncertainty of the logistic regression output p(C=1|X).
So given some input feature vector X = (X_1, …, X_n), I would like to know p(C=1|X), but also its uncertainty, in the way a coin throw can be modelled with a Beta distribution. I.e. for two “similar” inputs X_a and X_b, there could be the same expectation E[p(C=1|X_a)] = E[p(C=1|X_b)] = 0.3, but var[p(C=1|X_a)] != var[p(C=1|X_b)].
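As a toy illustration of what I mean (the Beta parameters below are made up purely for the example):

```python
from scipy.stats import beta

# Two Beta distributions over p with the same mean (0.3)
# but different concentrations, hence different variances.
p_a = beta(a=3, b=7)    # E[p] = 3 / (3 + 7)   = 0.3
p_b = beta(a=30, b=70)  # E[p] = 30 / (30 + 70) = 0.3

print(p_a.mean(), p_a.var())  # 0.3, ~0.0191
print(p_b.mean(), p_b.var())  # 0.3, ~0.0021
```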
If we model this with “standard” Bayesian logistic regression (with a Bernoulli observed RV), the posterior predictive distribution provides us with realisations of a Bernoulli RV. In that case we can only estimate p as the mean of those realisations, while the variance is fixed at p*q, meaning that we are more certain about classification outputs closer to 1 or 0, but we do not have a hierarchical model of p itself.
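To make “standard” Bayesian logistic regression concrete, this is roughly the kind of model I have in mind (a minimal sketch in PyMC; the data, priors, and coefficient values are made up just for illustration):

```python
import numpy as np
import pymc as pm

# Toy data: X is (n_obs, n_features), y holds 0/1 labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_beta = np.array([1.0, -2.0, 0.5])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))

with pm.Model():
    beta = pm.Normal("beta", mu=0, sigma=1, shape=X.shape[1])
    p = pm.Deterministic("p", pm.math.sigmoid(pm.math.dot(X, beta)))
    obs = pm.Bernoulli("obs", p=p, observed=y)

    idata = pm.sample()
    ppc = pm.sample_posterior_predictive(idata)

# Posterior predictive draws of "obs" are 0/1 realisations of the Bernoulli RV;
# averaging them over draws estimates p for each input.
obs_draws = ppc.posterior_predictive["obs"]
print(obs_draws.mean(dim=("chain", "draw")))
```

The posterior predictive draws of obs are all 0 or 1, so averaging them estimates p, but their spread per input is just p*(1-p), which is the situation I described above.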
Am I missing something here?