# Problem with Poisson & Gamma

I’d like to use a Poisson distribution and its conjugate prior, the Gamma…

But I’m having trouble understanding a basic example.
This is the sampling of `mu` for a Poisson distribution with one observed value = 1.

```
with pm.Model() as model:
    mu = pm.HalfFlat('mu')
    p_obs = pm.Poisson('p_obs', mu=mu, observed=[1])  # one observed count = 1

    trace = pm.sample()  # tune=2000, draws=10000, target_accept=0.9

pm.traceplot(trace)
```

This is the traceplot I get, and:

`trace['mu'].mean() = 1.993718...`

First surprise: I expected the mean to be around 1, not 2…!?

Then I try another way to sample `mu` with the same data (observed value = 1, seen once): directly with a Gamma, which is the conjugate prior of the Poisson.

```
with pm.Model() as model:
    mu = pm.Gamma('mu', alpha=1, beta=1)

    trace = pm.sample()  # tune=2000, draws=10000, target_accept=0.9

pm.traceplot(trace)
```

Which shows this posterior:

This does not correspond to the previous curve! I don’t understand why…

And when it comes to the mean of the samples:

`trace['mu'].mean() = 1.0148405502...`

Which is totally different from the previous model!
It’s closer to what I think it should be (given the observed data), but I don’t understand why these two models give such different results…

I don’t know which model is wrong, and why.

Any help will be appreciated.

> First surprise: I expected the mean to be around 1, not 2…!?

Regarding your first model, I would think this is just a result of your data. When you give it so little data (e.g. one observation), your posterior is heavily guided by your prior, since the model isn’t able to learn much; this explains why the `mu` parameter comes out much bigger than 1. When I run your model with many more observations of 1’s, we see this is in fact the case and `mu` centers around 1.
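
In fact, for this particular model the posterior has a closed form: a `HalfFlat` prior is constant on `mu > 0`, so the posterior is proportional to the likelihood itself, `mu * exp(-mu)` for a single observed count of 1, which is a `Gamma(alpha=2, beta=1)` density with mean exactly 2. A quick SciPy check (a minimal sketch):

```
from scipy import stats

# Posterior under a flat prior on mu > 0, after one observed count y = 1:
# p(mu | y=1) ∝ mu * exp(-mu), i.e. a Gamma(alpha=2, beta=1) density.
posterior = stats.gamma(a=2, scale=1)  # SciPy parameterizes by scale = 1/beta
print(posterior.mean())  # 2.0 -- matches the ~1.99 you got from pm.sample()
```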

> Which is totally different from the previous model!

In your first model you chose a `HalfFlat` prior with a `Poisson` likelihood, while in your second model you just wrote a `Gamma` prior without a likelihood. Your first model is computing the posterior after observing a single value, but all your second model is doing is sampling randomly from the `Gamma` prior distribution (you never passed in `observed=`). A `Gamma(alpha=1, beta=1)` distribution has mean `alpha / beta = 1`, which is exactly the ≈ 1.01 you saw.
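
You can confirm this without PyMC at all; plain NumPy draws from the same `Gamma(1, 1)` prior reproduce your second trace (a minimal sketch):

```
import numpy as np

# Draws from the Gamma(1, 1) prior alone -- no likelihood involved
rng = np.random.default_rng(0)
samples = rng.gamma(shape=1.0, scale=1.0, size=100_000)
print(samples.mean())  # ~1.0, matching your trace['mu'].mean() ≈ 1.0148
```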

If you want to do what I think you’re trying to do (compare the shape of the prior with the posterior?), you’ll need to write down a model that uses a `Gamma` prior and a `Poisson` likelihood. For example,

```
with pm.Model() as model:
    gamma = pm.Gamma('gamma', alpha=1, beta=1)           # Prior
    p_obs = pm.Poisson('p_obs', mu=gamma, observed=[1])  # Likelihood, same single observation

    trace = pm.sample(tune=2000, draws=10000, target_accept=0.9)
```

Then you can do something like

```
import arviz as az

with model:
    prior = pm.sample_prior_predictive(var_names=["gamma"])

az.plot_dist(prior["gamma"])  # distribution of the prior draws
```

to look at the prior `Gamma` distribution and

```
with model:
    ppc = pm.sample_posterior_predictive(trace, var_names=["p_obs"])

az.plot_dist(ppc["p_obs"])  # distribution of replicated observations
```

to look at the posterior predictive distribution. As for the posterior of `gamma` itself, in your case you should find that it is also a `Gamma` distribution because of conjugacy, although I’m not sure how easy it would be to verify this fact.
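
One way to check it (a sketch, assuming the PyMC3-style `trace['gamma']` indexing from the model above) is to overlay the analytic conjugate posterior, which here is `Gamma(2, 2)`, on the MCMC samples:

```
import matplotlib.pyplot as plt
import numpy as np
from scipy import stats

samples = trace['gamma']  # MCMC draws from the model above
grid = np.linspace(0.001, samples.max(), 200)

plt.hist(samples, bins=50, density=True, alpha=0.5, label='MCMC posterior')
plt.plot(grid, stats.gamma(a=2, scale=1 / 2).pdf(grid), label='Gamma(2, 2), analytic')
plt.legend()
plt.show()
```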

Also, in case you didn’t know this: since you have a nice model with a conjugate prior, you could just compute the parameters of the posterior distribution directly, instead of using MCMC via PyMC.
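
Concretely, for a Poisson likelihood with a `Gamma(alpha, beta)` prior and observations `y_1, ..., y_n`, the posterior is `Gamma(alpha + sum(y), beta + n)`; with your data that gives `Gamma(2, 2)`, whose mean is 1, in line with the MCMC result. A minimal sketch with SciPy:

```
from scipy import stats

alpha, beta = 1, 1           # prior from the model above
y = [1]                      # the single observed count

alpha_post = alpha + sum(y)  # = 2
beta_post = beta + len(y)    # = 2

posterior = stats.gamma(a=alpha_post, scale=1 / beta_post)
print(posterior.mean())      # 1.0 -- agrees with trace['gamma'].mean()
```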

Hope this is useful!
