Geometric and negative binomial

It is known that the geometric distribution is a special case of the negative binomial. I’m trying to get the same result with two models:

import pymc as pm

with pm.Model(coords=coords) as model:
    alpha = pm.HalfFlat("alpha")
    beta = pm.HalfFlat("beta")
    theta = pm.Beta("theta", alpha, beta, dims=("id",))
    likelihood = pm.Geometric("y", theta, observed=y)

and

with pm.Model(coords=coords) as model:
    alpha = pm.HalfFlat("alpha")
    beta = pm.HalfFlat("beta")
    theta = pm.Beta("theta", alpha, beta, dims=("id",))
    likelihood = pm.NegativeBinomial("y", n=1, p=theta, observed=y)

and I’m failing: the two models give different parameter estimates. What’s the issue?

Thanks & Regards.

There are different parametrizations where geometric may start at zero or one. Could that explain it?
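For reference, the two parametrizations differ only in where the support starts. A quick check with scipy (an illustrative aside; the p = 0.3 value is arbitrary) shows that geom, which starts at 1 like pm.Geometric, and nbinom with n=1, which starts at 0 like pm.NegativeBinomial, assign the same mass one step apart:

from scipy import stats

p = 0.3
# Geometric starting at 1: P(X = k) = (1 - p)**(k - 1) * p
print(stats.geom.pmf(3, p))       # 0.147
# Negative binomial with n=1, starting at 0: P(Y = k) = p * (1 - p)**k
print(stats.nbinom.pmf(2, 1, p))  # 0.147 -- same mass, shifted by one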

What results showed you the two were not in agreement? Did the model actually converge?


Same config, same data, so I guess it is the parametrization. So, within PyMC, we cannot get a geometric distribution from a NegativeBinomial?

You can; you may just need to shift the data by 1. Can you show the results you are basing your conclusions on?


Just testing with two arrays: y (integers) and id (0, 1, 2, …, 999). Nothing special. I still cannot get the same values for the parameters alpha, beta, and theta. They differ a lot.

The likelihoods are equivalent if you shift the data by -1, as shown in this snippet:

import pymc as pm

with pm.Model() as m:
    p = pm.Beta("p", 10, 100, size=3)
    # pm.Geometric counts trials, so its support starts at 1
    pm.Geometric("x1", p=p, observed=[1, 2, 5])
    # pm.NegativeBinomial counts failures, so with n=1 its support starts at 0;
    # the same data shifted by -1 gives an identical likelihood
    pm.NegativeBinomial("x2", n=1, p=p, observed=[0, 1, 4])

m.point_logps()
# {'p': 0.53, 'x1': -7.67, 'x2': -7.67}

If you are not getting the same results even after shifting, it means the model is not converging. Given the flat priors and one beta per observation, I wouldn’t be too surprised if this were the case.
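As a rough sanity check, here is a minimal sketch of the shifted model with weakly informative hyperpriors in place of HalfFlat. The simulated y, the Gamma(2, 0.1) hyperpriors, and the true rate of 0.3 are all illustrative assumptions, not from the original post:

import arviz as az
import numpy as np
import pymc as pm

# Simulated stand-in for the original data (assumption: y holds
# geometric counts starting at 1, one observation per "id")
rng = np.random.default_rng(0)
y = rng.geometric(0.3, size=1000)
coords = {"id": np.arange(1000)}

with pm.Model(coords=coords) as model:
    # Weakly informative hyperpriors instead of HalfFlat (illustrative choice)
    alpha = pm.Gamma("alpha", 2, 0.1)
    beta = pm.Gamma("beta", 2, 0.1)
    theta = pm.Beta("theta", alpha, beta, dims="id")
    # Shift the observations by -1 so NegativeBinomial(n=1) matches Geometric
    pm.NegativeBinomial("y", n=1, p=theta, observed=y - 1)
    idata = pm.sample()

# Check r_hat and effective sample size before comparing estimates
print(az.summary(idata, var_names=["alpha", "beta"]))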
