How to get a smoother mean posterior predictive plot over long tails?

Hello,

I’m trying to fit a distribution to my data while incorporating prior knowledge. Based on my limited knowledge of Bayesian modeling, this would be equivalent to obtaining the posterior predictive distribution. This is my model:

        import numpy as np
        import pymc as pm

        data = np.array([1, 1, 1, 62, 56, 40, 7600, 2, 7, 1, 43, 98, 76])
        with pm.Model() as example:
            x = pm.Data('x', data, mutable=True)

            alpha = pm.TruncatedNormal('alpha', mu=1.02, sigma=1, lower=1.05, upper=1.52)
            likelihood = pm.Pareto('likelihood', alpha=alpha, m=x.min(), observed=x)

        with example:
            trace = pm.sample(idata_kwargs=dict(log_likelihood=True))
            pm.sample_posterior_predictive(trace, extend_inferencedata=True)

I plotted the mean posterior predictive using az.plot_ppc(), but the plot looks jagged regardless of the num_pp_samples used (2nd image below). Looking at the Axes object returned by plot_ppc(), only 512 points are plotted, while the long tail of a Pareto distribution covers a huge range of values.

While looking for alternatives, I thought: isn’t everything in my model feeding into my likelihood variable? Wouldn’t the likelihood variable post-sampling be equivalent to the posterior predictive? I plotted pm.logp(likelihood, domain).exp() over a grid of values, but it looks significantly different from az.plot_ppc() (plot below). Perhaps there’s a conceptual difference between the two methods that I’m not picking up on.


Here’s how I plotted the two distributions for reference:

        import arviz as az
        import matplotlib.pyplot as plt

        # Plot density via likelihood variable's logp function
        domain = np.linspace(0, data.max()+10, 1_000_000)
        probs = pm.logp(likelihood, domain).exp().eval()
        plt.plot(domain, probs, label='pm.logp()')
        # Plot density via Arviz's plot_ppc
        ppc = az.plot_ppc(trace, observed=False, num_pp_samples=4_000, show=False); plt.close()
        x0, dens0 = ppc.lines[-1].get_xdata(), ppc.lines[-1].get_ydata() # lines[-1] corresponds to mean posterior predictive
        plt.plot(x0, dens0, label='az.plot_ppc()')
        # Plot observations for reference
        plt.scatter(data, data*0, color='black', label='Observations'); plt.xlabel('x'); plt.ylabel('Density'); plt.legend()

To summarize, there are two points I’m not clear on:

  • Why is the plot given by az.plot_ppc() different from the one by pm.logp()?
  • Is there a way to make the Arviz plot smoother?

Appreciate any help with this!

The ppc plot itself looks fine, but it is the mean line that looks weird:
[image: ppc]

The ppc mean line plot has been known to act weird; see for instance:

I have had similar issues from time to time, so I am assuming this is not fixed, but there seem to be some suggestions in the links above (my solution is not to plot it!).
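
For example, if I remember correctly az.plot_ppc has a mean flag, so something along these lines should simply drop the jagged mean overlay while keeping the per-draw densities:

ax = az.plot_ppc(trace, mean=False)  # no mean line, only the individual posterior predictive densities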

Just to double check, I have also run this model on simulated data, and it does seem to run fine in terms of the ppc and posterior, but the mean line is still bad:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Thu Mar  7 10:30:58 2024

@author: avicenna
"""

import numpy as np
import pymc as pm
import arviz as az
seed = 0
rng = np.random.default_rng(seed)
alpha_real = 1.35
N=100


with pm.Model() as sim:
  data_dist = pm.Pareto("data_dist", alpha=alpha_real, m=1)

data = pm.draw(data_dist, N)


with pm.Model() as example:
  x = pm.Data('x', data, mutable=True)

  alpha = pm.LogNormal("a")
  likelihood = pm.Pareto('likelihood', alpha=alpha, m=x.min(), observed=x)

with example:
  trace = pm.sample(1000, tune=1000, chains=6)

az.plot_posterior(trace)

with example:
  pm.sample_posterior_predictive(trace, extend_inferencedata=True)

ax = az.plot_ppc(trace)
ax.get_figure().savefig("ppc3.png")

ax = az.plot_posterior(trace)
ax.get_figure().savefig("post.png")

[image: ppc3.png]

[image: post.png]


Thanks! I was able to adapt the solution in that old thread. I get a smoother mean posterior predictive when I clip the low-density regions like this:

trace.posterior_predictive['likelihood'][:] = np.clip(trace.posterior_predictive['likelihood'][:], 0, cutoff)
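
For reference, cutoff here is just a value chosen by eye; one option (my own guess rather than anything from the linked thread) is to set it beforehand to a high quantile of the posterior predictive draws, e.g.:

cutoff = float(trace.posterior_predictive['likelihood'].quantile(0.99))  # hypothetical choice: clip above the 99th percentile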

Which leaves this point:

Even after smoothing the Arviz plot, it’s still different from the pm.logp() plot. I feel like I’m missing some nuance, but shouldn’t they be the same? The likelihood variable in my model follows a distribution whose parameters follow prior distributions and is conditioned on observations. Isn’t that by definition a posterior?

The value of

pm.logp(likelihood, domain).exp().eval()

depends on “alpha”. When you leave .eval() empty for an RV, I assume it uses a “random” number in its place (random in quotation marks because I don’t know how it is assigned; probably the assumed initial value in the model?). If you instead use

probs = pm.logp(likelihood, domain).exp().eval({"alpha":1.35})

It will likely match better.


It’s just based on whatever the random state happened to be when the model got built. It should change between runs if you reset the kernel, but it will always stay the same within runs (because the random state isn’t updated between calls to .eval())


Ah! So I still need to pass my parameter’s posterior mean to .eval(). You’re right, now it’s very close to the smoothed Arviz mean posterior predictive. Plotting it this way is much easier than wrestling with Arviz’s plot_ppc().
The way I see it, plot_ppc() is useful for visualizing the distributional nature of the posterior predictive (i.e., the countless blue densities), but if you want to plot the mean posterior predictive, evaluating logp() over the desired range is much cleaner.
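
For reference, this is roughly the snippet I ended up with (it assumes the variable names from my model above, and plugs the posterior mean of alpha into .eval()):

alpha_mean = float(trace.posterior['alpha'].mean())  # posterior mean of the shape parameter
domain = np.linspace(0, data.max() + 10, 1_000_000)
dens = pm.logp(likelihood, domain).exp().eval({'alpha': alpha_mean})
plt.plot(domain, dens, label='mean posterior predictive via pm.logp()')
plt.scatter(data, data * 0, color='black', label='Observations'); plt.legend()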

Thank you both! The issue is now resolved.

If you plan to evaluate the logp at several points, you can compile the function with model.compile_logp; .eval() will recompile the logp graph into a function on every call, so it will get pretty sluggish for complex models. The function it returns expects a single argument: a dictionary with variable names as keys and array values.

Be aware that some of the variables that the logp function expects won’t be what you expect, because of how variables get transformed. You can check what is actually expected by looking at the keys of model.initial_point()
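
A rough sketch of that pattern, using the example model from above (the exact key names depend on the automatic transforms, which is why checking initial_point() first matters):

logp_fn = example.compile_logp()  # compile the model's joint logp once
point = example.initial_point()   # dict of (possibly transformed) variable names -> initial values
print(point)                      # shows exactly which keys logp_fn expects
print(logp_fn(point))             # model logp evaluated at that point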
