Problem with Poisson & Gamma

First surprise: I expected the mean to be around 1, not 2…!?

Regarding your first model, I would think this is just a result of your data. When you give it so little data (e.g., a single observation), the posterior will be heavily guided by the prior, since the model isn't able to learn much from the data; this explains why the mu parameter comes out much bigger than 1. When I run your model with many more observations of 1's, we see this is in fact the case and mu centers around 1.
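I don't have your exact code in front of me, so here is a minimal sketch of what I mean, assuming your first model was a HalfFlat prior on mu with a Poisson likelihood (the 100 repeated observations of 1 are made up for illustration):

import numpy as np
import pymc as pm

with pm.Model():
    mu = pm.HalfFlat('mu')  # same (improper) HalfFlat prior as in your first model
    pm.Poisson('obs', mu=mu, observed=np.ones(100, dtype=int))  # many observations of 1
    trace = pm.sample()

With 100 observations instead of 1, the prior matters much less and the posterior of mu concentrates around 1.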

Which is totally different from the previous model!

In your first model you chose a HalfFlat prior with a Poisson likelihood, while in your second model you just wrote down a Gamma prior without any likelihood. Your first model is computing the posterior after observing a single value, but all your second model is doing is sampling from the Gamma prior distribution (you never passed in observed=).
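In other words, your second model is (schematically) equivalent to this, which has no likelihood term at all:

with pm.Model():
    gamma = pm.Gamma('gamma', alpha=1, beta=1)  # prior only, no observed= anywhere
    prior_draws = pm.sample()  # nothing to condition on, so this just explores the Gamma(1, 1) prior

With no observed variable, the "posterior" the sampler returns is just the prior.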

If you want to do what I think you’re trying to do (compare the shape of the prior with the posterior?), you’ll need to write down a model that uses a Gamma prior and a Poisson likelihood. For example,

with pm.Model() as model:
    gamma = pm.Gamma('gamma', alpha=1, beta=1)  # Gamma prior on the Poisson rate
    p_obs = pm.Poisson('p_obs', mu=gamma, observed=[1])  # Poisson likelihood, one observed value

    trace = pm.sample(tune=2000, draws=10000, target_accept=0.9)

Then you can do something like

import arviz as az

with model:
    prior = pm.sample_prior_predictive(var_names=["gamma"])
az.plot_density(prior, group="prior", var_names=["gamma"])  # gamma isn't an observed variable, so plot the "prior" group directly

to look at the prior Gamma distribution and

with model:
    ppc = pm.sample_posterior_predictive(trace, var_names=["p_obs"])
az.plot_ppc(ppc, var_names=["p_obs"])  # posterior predictive of the counts
az.plot_posterior(trace, var_names=["gamma"])  # posterior of the rate parameter itself

to look at the posterior side. Note that plot_ppc shows the posterior predictive distribution of the counts, while plot_posterior shows the posterior of gamma itself. It is the latter that should again be a Gamma distribution, because the Gamma prior is conjugate to the Poisson likelihood, and you can verify this against the closed-form posterior below.

Also, in case you didn't know this: since you have a nice model with a conjugate prior, you can compute the parameters of the posterior distribution directly instead of using MCMC via PyMC. For a Gamma(alpha, beta) prior (shape/rate, as in pm.Gamma) and Poisson observations x_1, ..., x_n, the posterior is Gamma(alpha + sum(x_i), beta + n).
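As a quick sketch with scipy (using the same alpha=1, beta=1 prior and the single observation [1] from the model above):

import numpy as np
from scipy import stats

alpha, beta = 1.0, 1.0        # Gamma prior (shape, rate)
x = np.array([1])             # the observed counts

alpha_post = alpha + x.sum()  # posterior shape: 1 + 1 = 2
beta_post = beta + len(x)     # posterior rate: 1 + 1 = 2

posterior = stats.gamma(a=alpha_post, scale=1 / beta_post)  # scipy parameterizes by scale = 1/rate
print(posterior.mean())       # 1.0 -- the exact posterior mean of mu

So after observing a single 1 the exact posterior is Gamma(2, 2), with mean exactly 1, which is a handy sanity check against the MCMC trace.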

Hope this is useful!
