Posterior predictive sampling with data variance

That’s an interesting way to think of it… maybe it’s my finance background, but if I were offered a bet where the expected payoff (i.e. the expected value) is $5, even though the most probable value (the peak of the probability density) is -$5, I would take the bet. If instead the expected value were negative but the peak of the density curve were positive, I would not. So I still think that, even in the example you discuss, the expected value is more meaningful than where that peak of probability sits.
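Just to make that concrete, here is a toy sketch (the payoff numbers are made up, not tied to your model): a skewed distribution where most outcomes cluster around a -$5 loss but rare large gains push the expected value up to about +$5, so the mean and the density peak point in opposite directions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical payoff distribution: frequent small losses near -$5,
# plus a minority of large gains that pull the mean above zero.
losses = rng.normal(loc=-5, scale=1, size=8000)
gains = rng.normal(loc=45, scale=5, size=2000)
payoffs = np.concatenate([losses, gains])

mean_payoff = payoffs.mean()  # expected value of the bet, ~ +5

# Crude mode estimate: midpoint of the most populated histogram bin, ~ -5
counts, edges = np.histogram(payoffs, bins=100)
peak = np.argmax(counts)
mode_payoff = 0.5 * (edges[peak] + edges[peak + 1])

print(f"expected value ~ {mean_payoff:.2f}, density peak ~ {mode_payoff:.2f}")
```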

In any case, I understand that for your specific problem you need a more complex representation of the predictions. I’m not clear, however, on what that more complex representation is. Would a prediction cone built from quantiles of the sample_ppc output not be enough? Is visualizing the samples themselves not precise enough? Or do you need to calculate some measure of goodness of fit to tweak your model?
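For reference, this is roughly the prediction cone I have in mind. It assumes `ppc` is the dict returned by `pm.sample_ppc(trace, model=model)`, that your observed variable is named `"y_obs"`, and that `ppc["y_obs"]` has shape `(n_samples, n_points)`; adjust the names to your model.

```python
import numpy as np
import matplotlib.pyplot as plt

# Posterior predictive samples for the observed variable
y_samples = ppc["y_obs"]          # shape: (n_samples, n_points)
x = np.arange(y_samples.shape[1])  # or your actual predictor values

# Pointwise 95% predictive interval and median across the samples
lower, median, upper = np.percentile(y_samples, [2.5, 50, 97.5], axis=0)

plt.fill_between(x, lower, upper, alpha=0.3, label="95% predictive interval")
plt.plot(x, median, label="predictive median")
plt.legend()
plt.show()
```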

Regarding @twiecki’s comment about find_MAP, perhaps you’re right and I misinterpreted, so I’ll abstain from commenting further: I don’t want to put words in his mouth.
