How to verify that uncertainty (estimated from pymc3) is accurate?

@junpenglao thanks for sharing; it discusses a lot of the issues I’ve had using Cook’s test.

The mathematical setup makes sense to me, but I’m still trying to suss out how the quantile calculation differs from Cook’s test. Is the rank statistic calculation (the unnumbered equation right after Algorithm 1) any different from Cook’s U_i = np.sum(samples < true_params_values) / num_samples that I noted above? From what I can tell, it just counts how many posterior samples fall below the true value, which is simply the quantile of the true value under the posterior CDF. Maybe I’m misunderstanding their f; it wasn’t well defined in the sentence preceding the equation.
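To make the comparison concrete, here’s a minimal sketch of how I’m reading the two statistics side by side, taking f as the identity and a scalar parameter (posterior_samples and true_param are placeholder names I made up, not anything from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: L posterior draws for a scalar parameter, plus the
# "true" value that generated the simulated data (names are illustrative).
L = 100
true_param = 0.3
posterior_samples = rng.normal(loc=true_param, scale=0.1, size=L)

# Cook et al.'s quantile: fraction of posterior draws below the true value.
cook_quantile = np.sum(posterior_samples < true_param) / L

# My reading of the rank statistic (with f = identity): the same count,
# kept as an integer in {0, ..., L} rather than normalized by L.
rank = np.sum(posterior_samples < true_param)

print(cook_quantile, rank / L)  # identical up to the 1/L factor
```

If that reading is right, the two seem to differ only by the normalization, which is what I’m trying to confirm.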

And are the resulting histograms just a different way of viewing the quantile-quantile plots in Figure 1b here?
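In other words, I’d expect the rank histogram and the QQ plot to be two renderings of the same collection of ranks, roughly like this (the ranks below are fake placeholders just to show the two plots, not output from a real model check):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
L = 100

# Placeholder: one rank per simulated dataset, standing in for real ranks.
ranks = rng.integers(0, L + 1, size=1000)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

# View 1: the rank histogram (should look uniform if calibrated).
ax1.hist(ranks, bins=20)
ax1.set_title("Rank histogram")

# View 2: QQ-style plot of the empirical quantiles of the normalized
# ranks against uniform quantiles, as I understand Figure 1b.
quantiles = np.sort(ranks) / L
ax2.plot(np.linspace(0, 1, len(quantiles)), quantiles)
ax2.plot([0, 1], [0, 1], linestyle="--")  # reference line for uniformity
ax2.set_title("QQ plot of normalized ranks")

plt.tight_layout()
plt.show()
```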