My simple model is reporting a logp of zero. A log-probability of exactly zero (i.e. probability 1) for the observed data seems impossible.
The model has two observed variables — one a normal and the other a binomial. There are five other random variables.
import pymc3 as pm

# Observed data and fixed constants.
naive_perceived = 0.4   # pursuit probability when skill is zero (see the Deterministics below)
opportunities = 61      # total number of opportunities
true_pursued = 45       # observed total number of opportunities pursued
true_won = 7            # observed number of pursued opportunities won

with pm.Model() as model:
    winnable_p = pm.Uniform('winnable_p', 0, 1)
    skill = pm.Beta('skill', 2.0, 5.0)
    winnable = pm.Binomial('winnable', opportunities, winnable_p)
    winnable_pursuit_p = pm.Deterministic(
        'winnable_pursuit_p',
        naive_perceived * (1 - skill) + skill)
    unwinnable_pursuit_p = pm.Deterministic(
        'unwinnable_pursuit_p',
        naive_perceived * (1 - skill))
    winnable_pursued = pm.Binomial('winnable_pursued', winnable, winnable_pursuit_p)
    unwinnable_pursued = pm.Binomial('unwinnable_pursued', opportunities - winnable, unwinnable_pursuit_p)
    pursued = pm.Normal('pursued', winnable_pursued + unwinnable_pursued, sd=1, observed=true_pursued)
    won = pm.Binomial('won', winnable_pursued, skill, observed=true_won)
    trace = pm.sample(25000, step=pm.NUTS())
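As a sanity check on where that logp comes from, the per-variable contributions at the model's test point can be printed. This is a minimal sketch, assuming PyMC3 3.x, where Model exposes check_test_point(), test_point and a compiled logp():

# Sketch: per-variable logp contributions at the test point (PyMC3 3.x).
print(model.check_test_point())        # one row per free/observed variable
print(model.logp(model.test_point))    # total model logp at the same point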
[Model diagram: double-line borders mark the two deterministic variables.]
The model logp is zero throughout sampling.
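For reference, this is roughly how that flat-zero value can be reproduced from the trace; it is only a sketch, and it assumes PyMC3 3.x's compiled model.logp() and MultiTrace.points():

# Sketch: total model logp evaluated at every point stored in the trace.
logps = [model.logp(pt) for pt in trace.points()]
print(min(logps), max(logps))   # both print as 0.0 in the run described above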
What am I doing wrong?
(Yes, this is the same question as this one, a question that attracted no responses, although @junpenglao kindly corrected its category. This version shows a far simpler model that exhibits the same problem, plus a diagram. Arguably my earlier question was too taxing, asking the wise pymc3 community to first understand what I was trying to do before advising me on what I was doing wrong. Sadly, once I realized my mistake, the window for editing the original question had expired.)