Why does my model show logp of zero?

My simple model reports a logp of zero. Taken at face value, a log probability of zero would mean the observed data has probability one, which seems impossible.

The model has two observed variables, one Normal and one Binomial, plus five other random variables.

import pymc3 as pm

# fixed quantities and observed data
naive_perceived = 0.4
opportunities = 61
true_pursued = 45
true_won = 7

with pm.Model() as model:
    winnable_p = pm.Uniform('winnable_p', 0, 1)
    skill = pm.Beta('skill', 2.0, 5.0)
    # latent (unobserved, discrete) count of winnable opportunities
    winnable = pm.Binomial('winnable', opportunities, winnable_p)
    winnable_pursuit_p = pm.Deterministic(
        'winnable_pursuit_p',
        naive_perceived * (1 - skill) + skill)
    unwinnable_pursuit_p = pm.Deterministic(
        'unwinnable_pursuit_p',
        naive_perceived * (1 - skill))
    # latent (unobserved, discrete) counts of pursued opportunities
    winnable_pursued = pm.Binomial('winnable_pursued', winnable, winnable_pursuit_p)
    unwinnable_pursued = pm.Binomial('unwinnable_pursued', opportunities - winnable, unwinnable_pursuit_p)
    # the two observed variables
    pursued = pm.Normal('pursued', winnable_pursued + unwinnable_pursued, sd=1, observed=true_pursued)
    won = pm.Binomial('won', winnable_pursued, skill, observed=true_won)
    trace = pm.sample(25000, step=pm.NUTS())

[Model diagram; double-line borders tag the two deterministic variables.]

The model logp is zero throughout:

[plot of trace['model_logp']: a flat line at zero]

What am I doing wrong?

(Yes, this is the same question as this one, which attracted no responses, although @junpenglao kindly corrected its category. This version shows a far simpler model that exhibits the same problem, plus a diagram. Arguably my earlier question asked too much, expecting the wise pymc3 community to first understand what I was trying to do before advising me on what I was doing wrong. Sadly, by the time I realized that mistake, the window for editing the original question had expired.)

It seems the sampler is stuck at the initial position and does not move. In general, models with discrete free parameters are quite difficult to do inference on (in this case, the Binomial nodes with nothing observed).
There is no obvious solution I can see right away; maybe approximate them with a Beta distribution? A rough sketch of that idea follows.
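
For illustration, here is a rough, untested sketch of such a relaxation, reusing the constants from the question. The winnable_frac node and its moment-matched parameters are illustrative, not part of the original model; the pursued counts are collapsed to their expected values, and the won likelihood becomes a moment-matched Normal because pm.Binomial would cast a continuous n to an integer:

import pymc3 as pm

with pm.Model() as approx_model:
    winnable_p = pm.Uniform('winnable_p', 0, 1)
    skill = pm.Beta('skill', 2.0, 5.0)

    # Moment-matched Beta stand-in for winnable / opportunities, where the
    # original model has winnable ~ Binomial(opportunities, winnable_p):
    # Beta(p*(n-1), (1-p)*(n-1)) has mean p and variance p*(1-p)/n.
    winnable_frac = pm.Beta('winnable_frac',
                            alpha=winnable_p * (opportunities - 1),
                            beta=(1 - winnable_p) * (opportunities - 1))
    winnable = pm.Deterministic('winnable', opportunities * winnable_frac)

    winnable_pursuit_p = pm.Deterministic(
        'winnable_pursuit_p', naive_perceived * (1 - skill) + skill)
    unwinnable_pursuit_p = pm.Deterministic(
        'unwinnable_pursuit_p', naive_perceived * (1 - skill))

    # Collapse the two latent pursued counts to their expected values, so
    # every remaining free parameter is continuous and NUTS can sample it.
    winnable_pursued = pm.Deterministic(
        'winnable_pursued', winnable * winnable_pursuit_p)
    unwinnable_pursued = pm.Deterministic(
        'unwinnable_pursued', (opportunities - winnable) * unwinnable_pursuit_p)

    pursued = pm.Normal('pursued', winnable_pursued + unwinnable_pursued,
                        sd=1, observed=true_pursued)
    # pm.Binomial would truncate a continuous n to an integer, so the 'won'
    # likelihood is approximated by a moment-matched Normal as well.
    won = pm.Normal('won', mu=winnable_pursued * skill,
                    sd=pm.math.sqrt(winnable_pursued * skill * (1 - skill)),
                    observed=true_won)

    trace = pm.sample(2000)

Note this throws away the sampling variance of the latent counts, so treat it as a starting point for the relaxation, not a faithful reformulation of the model.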


Actually, running the model, the trace looks OK. The real problem, that trace['model_logp'] is all zeros, is a bug (https://github.com/pymc-devs/pymc3/issues/3188) that has not been fixed yet.

For now you can do:

import numpy as np
import matplotlib.pyplot as plt

logp_fn = model.logp  # compiled function mapping a point dict to the model logp

n_draws = len(trace)
model_logp = np.zeros((n_draws, trace.nchains))
for i, chain in enumerate(trace.chains):
    for s in range(n_draws):
        model_logp[s, i] = logp_fn(trace.point(s, chain))

plt.plot(model_logp, alpha=.2);

That explains it. From the referenced issue, it looks like model_logp is only implemented in two samplers (NUTS and HMC) so far. Maybe it is not even a bug, just a feature that has not yet been fully implemented.
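
As an aside, you can check which sampler statistics a trace actually recorded; this small snippet assumes the trace object from above:

print(trace.stat_names)
# 'model_logp' appears here only when NUTS or HMC produced the samples;
# other step methods do not record it.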
