Hi all
I’m working through Lee and Wagenmakers’ book “Bayesian Cognitive Modeling” and I was porting their example where the binomial is conditioned on both n, the number of tests, and p, the rate of success, with both parameters unknown. In the WinBUGS version an uninformative prior is set on n using a Categorical distribution. However, I ran into “Bad initial energy” errors when trying to do the same in PyMC3. For example:
import numpy as np
import pymc3 as pm

kdata = [16, 18, 22, 25, 27]

with pm.Model() as model:
    # flat prior over n = 0..499 via a uniform Categorical
    nprior = np.ones(500) / 500.
    n = pm.Categorical('n', p=nprior)
    theta = pm.Beta('theta', alpha=1., beta=1.)
    k = pm.Binomial('k', p=theta, n=n, observed=kdata)
    trace = pm.sample(2000, tune=2000)
The bad-energy error is a non-starter, but I don’t understand why it occurs. Luckily @Junpeng Lao has already worked through that book and got past this issue on line 15 of his notebooks, by replacing the pm.Categorical distribution with a discrete uniform one, which works well enough.
My question is: why does the pm.Categorical prior give a bad-energy error, whereas pm.DiscreteUniform does not? The two priors seem to be equivalent.
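For what it’s worth, the two priors really do assign the same log-probability to every point in the support; here is a quick NumPy check (N = 500 chosen to match np.ones(500)/500. in the model above):

```python
import numpy as np

N = 500  # size of the support, matching the model above

# log-probability a uniform Categorical assigns to each category 0..N-1
cat_logp = np.log(np.ones(N) / N)

# log-probability a DiscreteUniform over the same support assigns
du_logp = np.full(N, -np.log(N))

print(np.allclose(cat_logp, du_logp))  # prints True
```

So pointwise the priors agree, which is why the different sampling behaviour is puzzling.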
cheers
Peter