Categorical vs Discrete Uniform

Hi all,
I’m working through Lee and Wagenmakers’ book “Bayesian Cognitive Modeling” and was porting their example where the Binomial is conditioned on both n, the number of tests, and p, the rate of success, with both parameters unknown. An uninformative prior was set on n using a Categorical distribution in WinBUGS. However, I ran into “Bad initial energy” errors when trying to do the same in PyMC3. For example:

import numpy as np
import pymc3 as pm

with pm.Model() as model:
    kdata = [16, 18, 22, 25, 27]
    # uniform prior over 500 possible values of n, expressed as a Categorical
    nprior = np.ones(500) / 500.
    n = pm.Categorical('n', p=nprior)
    theta = pm.Beta('theta', alpha=1., beta=1.)
    k = pm.Binomial('k', p=theta, n=n, observed=kdata)

    trace = pm.sample(2000, tune=2000)

The bad energy error is a non-starter, but I don’t understand why it occurs. Luckily @Junpeng Lao has already worked through that book, and he got past the issue (line 15 of his notebooks) by replacing the pm.Categorical distribution with a discrete uniform one, which works well enough.
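
For reference, his fix looks roughly like this (my reconstruction; the exact bounds in his notebook may differ):

with pm.Model() as model:
    kdata = [16, 18, 22, 25, 27]
    # discrete uniform prior on n; its default starting value is the
    # midpoint of the range, well above every observed k
    n = pm.DiscreteUniform('n', lower=1, upper=500)
    theta = pm.Beta('theta', alpha=1., beta=1.)
    k = pm.Binomial('k', p=theta, n=n, observed=kdata)

    trace = pm.sample(2000, tune=2000)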

My question is: why does pm.Categorical give a bad energy error, whereas pm.DiscreteUniform does not? The two priors seem equivalent.

cheers
Peter

The reason is that

  • pm.Categorical returns an integer between 0 and len(p) - 1, and a draw of n = 0 makes the Binomial logp undefined (you cannot observe k > 0 successes from n = 0 trials).
  • the model is poorly parameterized in general, since n can be smaller than the observed counts in kdata, which also gives an undefined logp.

You can try k = pm.Binomial('k', p=theta, n=n + max(kdata), observed=kdata), which shifts the support of n so it can never fall below the largest observed count.
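
For example, a minimal sketch of the model with that shift applied (everything else as in your original):

with pm.Model() as model:
    kdata = [16, 18, 22, 25, 27]
    nprior = np.ones(500) / 500.
    # n now indexes an offset: the effective number of trials is
    # n + max(kdata), so it can never be smaller than an observed k
    n = pm.Categorical('n', p=nprior)
    theta = pm.Beta('theta', alpha=1., beta=1.)
    k = pm.Binomial('k', p=theta, n=n + max(kdata), observed=kdata)

    trace = pm.sample(2000, tune=2000)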

Ah, so the Categorical cannot be adjusted to work as written, because it puts non-zero probability on n = 0, which is not a possible value for n. And any draw with n smaller than an observed k has zero probability under the Binomial, i.e. a logp of -inf, which is what triggers the warning?
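
A quick check seems to confirm the -inf logp (using scipy’s Binomial as a stand-in, purely for illustration):

from scipy import stats

# k > n has probability zero under the Binomial, so the log-probability is -inf
print(stats.binom.logpmf(16, n=0, p=0.5))   # -inf
print(stats.binom.logpmf(16, n=20, p=0.5))  # finite, since n >= k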