Percentiles and mean distribution of sample_posterior_predictive

A question I asked some time ago and later solved myself; maybe it helps someone else as well.

So the question was …

I want to calculate the percentiles of the posterior_predictive.
I have acquired sample_posterior_predictive using the
y_pred_g = pm.sample_posterior_predictive(trace_g_pt234, 1000, model_g)

Now I want to get the mean of the draws and then the (1, 5, 50, 95, 99) percentiles. Is there a suggestion, or an already-made implementation inside PyMC3 or ArviZ (using an xarray DataArray)?

The answer is

posterior_predictive analysis

# get the posterior predictive samples, then summarize across draws

# 5th percentile
quant_5 = np.mean(np.quantile(y_pred_g["y"], 0.05, axis=1))

Best Regards


If using InferenceData, you can also use

idata.extend(az.from_pymc3(posterior_predictive=pm.sample_posterior_predictive(...)))
idata.posterior_predictive.mean(("chain", "draw"))
idata.posterior_predictive.quantile((.01, .05, .5, .95, .99), dim=("chain", "draw"))

This has at least three advantages: 1) it also works when there are multiple observed variables; 2) it adds the posterior predictive samples to the InferenceData object, so you can store them together with the posterior samples and sample stats using netCDF or zarr; 3) with the posterior predictive samples in the InferenceData object, you can also take advantage of model-criticism tools like arviz.plot_ppc.

Some extra side notes. The 1/99 percentiles are quite extreme; be sure to check the quantile ESS and MCSE at those quantiles before reporting these values. NUTS performs much better at the bulk of the distribution than at its tails, and chains that seem to have mixed well judging from the bulk may still not have explored the tails completely. Also consider not passing 1000 and letting sample_posterior_predictive draw as many samples as there are in the posterior; especially when you want to compute information about quantiles, it is important not to discard samples.
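To see why tail quantiles deserve this extra scrutiny, here is a small pure-NumPy experiment (illustrative only, using synthetic normal draws rather than an actual posterior predictive): the Monte Carlo error of a 99th-percentile estimate is noticeably larger than that of a median estimate from the same number of draws.

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws = 1000   # same sample size as in the question
n_reps = 500     # repeat the estimation to measure its variability

q50_estimates = np.empty(n_reps)
q99_estimates = np.empty(n_reps)
for i in range(n_reps):
    draws = rng.normal(size=n_draws)           # stand-in for posterior predictive draws
    q50_estimates[i] = np.quantile(draws, 0.50)
    q99_estimates[i] = np.quantile(draws, 0.99)

# the 99th percentile estimate fluctuates much more than the median estimate
print(q50_estimates.std(), q99_estimates.std())
```

Here the draws are independent; with autocorrelated MCMC chains the gap is typically even larger, which is what the quantile ESS/MCSE diagnostics quantify.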


Thanks for the reply; this leads to three points :slight_smile:
Point 1: this looks better, no doubt, and is originally the reason I asked the question on the forum.

Point 2: very rightly pointed out, 99% is extreme, and I am trying to do extreme value analysis.
I have a crude understanding that Bayesian methods, especially the posterior predictive, have very few samples at the ends/tails of the distribution.
Do you know of any method/way to increase the accuracy at the tails?
In general, the target is to model water levels in rivers for extreme value events (a 1-in-100-years flood).

Point 3:
Is there an ArviZ construction with which I can plot a predictive CDF, and do a mapping for each event (meaning: at a water height of 5 m, what is the CDF value)?



I would recommend making sure that az.summary(trace, hdi_prob=.98) shows good quantile ESS and MCSE (the default is .94). You can also call az.ess/az.mcse directly and use method="quantile" or method="tail".

You may also want to take a look at both quantile and local ESS and MCSE plots.
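For reference, a minimal sketch of those calls (the trace here is a synthetic (chain, draw) array used only as a placeholder for real posterior samples):

```python
import numpy as np
import arviz as az

# placeholder trace: 4 chains x 1000 draws of a single parameter
trace = np.random.default_rng(0).normal(size=(4, 1000))

# ESS and MCSE at a specific tail quantile
ess_q99 = az.ess(trace, method="quantile", prob=0.99)
mcse_q99 = az.mcse(trace, method="quantile", prob=0.99)

# diagnostic plots across quantiles / local regions
az.plot_ess(trace, kind="quantile")
az.plot_ess(trace, kind="local")
az.plot_mcse(trace)
```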

The main reference on these plots is GitHub - avehtari/rhat_ess: Rank-normalization, folding, and localization: An improved R-hat for assessing convergence of MCMC (this has links to the paper, supplementary materials, source code…). So definitely take a look at the recommended diagnostics related to quantiles.

In addition, I have found that in many cases quantile/local ESS plots with a U-like shape indicate convergence issues, even if all quantiles are above 400. In such cases, I would recommend also looking at the evolution ESS plot and running the sampler longer: if mixing is not too good, ESS can be overestimated at first, which results in evolution ESS plots that initially seem to increase linearly but then have a sudden drop. But right now I don’t remember how much of this has an actual rationale behind it and is covered in the paper, and how much is purely my personal experience. I hope it helps.

Got it, I will surely look into the literature.

Can you also indicate what I can do regarding plotting the CDF of the predictive?
This can help rationalize things more practically.
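One possible sketch of the event mapping asked about above, using plain NumPy (an empirical CDF over pooled posterior predictive draws; the array below is synthetic and the 5 m threshold is just the example from the question):

```python
import numpy as np

# synthetic stand-in for pooled posterior predictive draws of water level (m)
rng = np.random.default_rng(7)
levels = rng.gumbel(loc=3.0, scale=0.8, size=10_000)

def ecdf_value(samples, x):
    """Empirical CDF: fraction of draws less than or equal to x."""
    return np.mean(samples <= x)

# mapping for one event: P(water level <= 5 m)
p_5m = ecdf_value(levels, 5.0)

# full empirical CDF curve for plotting
xs = np.sort(levels)
ys = np.arange(1, xs.size + 1) / xs.size
```

ArviZ can also draw cumulative posterior predictive plots directly with az.plot_ppc(idata, kind="cumulative"), provided the posterior predictive samples are in the InferenceData object.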