That’s the idea. The conventional practice of producing a posterior predictive distribution for the observed data (the data originally used for inference) is to evaluate whether your model plus your posterior distribution over the model parameters can (approximately) reproduce the data set. If not, then you should question both your model and the posterior. But this is all a method for doing diagnostics on your model.

You can also use your model plus posterior to generate predictions about new scenarios, often in the form of new predictor values (e.g., see this notebook on out-of-sample predictions). If you want to generate a prediction for a scenario in which there is a single, hypothetical person, that’s fine. Or, if you wanted, you could generate predictions about a whole range of hypothetical people in a single go. The possibilities are endless!
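To make that concrete, here is a minimal sketch (not from the original thread) of both uses in one model. It assumes a recent PyMC version, a simple linear regression, and made-up data; the names `x`, `intercept`, `slope`, and the single hypothetical predictor value `2.0` are all illustrative. The `pm.Data`/`pm.set_data` pattern is what lets you swap in new predictor values before calling `pm.sample_posterior_predictive` again.

```python
import numpy as np
import pymc as pm  # use `import pymc3 as pm` on older installs (API details may differ)

# Hypothetical observed data for a simple linear regression
rng = np.random.default_rng(42)
x_obs = rng.normal(size=100)
y_obs = 1.5 * x_obs + rng.normal(scale=0.5, size=100)

with pm.Model() as model:
    x = pm.Data("x", x_obs)
    intercept = pm.Normal("intercept", mu=0, sigma=10)
    slope = pm.Normal("slope", mu=0, sigma=10)
    sigma = pm.HalfNormal("sigma", sigma=1)
    mu = intercept + slope * x
    # shape=x.shape lets the predictive distribution resize with new data
    y = pm.Normal("y", mu=mu, sigma=sigma, observed=y_obs, shape=x.shape)

    idata = pm.sample()

    # 1) Diagnostic use: can the model + posterior reproduce the observed data?
    ppc_observed = pm.sample_posterior_predictive(idata)

    # 2) Predictive use: a single hypothetical person (or any set of new x values)
    pm.set_data({"x": np.array([2.0])})
    ppc_new = pm.sample_posterior_predictive(idata)
```

To compare the diagnostic replications against the original data, you would typically plot `ppc_observed` with something like `az.plot_ppc`; `ppc_new` instead gives you the predictive distribution for the hypothetical predictor value you supplied.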