Is getting nan or inf from model.logp(model.test_point) a sign of an incorrectly configured model?

A few thoughts:

1. In your model, the observed part looks like this:

with model:
    ...
    mz = pm.Uniform("mz", lower=0, upper=1, observed=mz_freq)
    M = pm.Binomial("M", n=Yp, p=mz, observed=loss_shifted40)

However, the node mz is not doing anything: since mz is observed and its bounds are fixed, there is no prior to update, so in effect this is equivalent to mz = mz_freq.
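To see why, note that a Uniform(0, 1) with fixed bounds assigns the same log-density to every value in [0, 1], so the observed node contributes only a constant to the model logp and cannot inform any free parameter. A minimal sketch with scipy (not part of the original model):

```python
import numpy as np
from scipy.stats import uniform

mz_freq = np.array([0.05, 0.047, 0.045, 0.063, 0.06,
                    0.062, 0.053, 0.057, 0.061, 0.058])

# Uniform(0, 1) has density 1 on [0, 1], so its log-density is 0 everywhere
# in the support; the observed node adds only a constant (here zero) to the
# total logp, regardless of the data.
logp = uniform(loc=0, scale=1).logpdf(mz_freq)
print(logp.sum())  # → 0.0
```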

2. Since model.logp(model.test_point) returns -inf, we want to identify which RV node is causing the problem. @aseyboldt suggested doing model.bijection.rmap(model.dlogp_array(model.dict_to_array(model.test_point))) to get the logp of each RV, but I am actually not sure that is correct. I am doing the following instead:

# for RV in model.basic_RVs:
#    print(RV.name, RV.logp(model.test_point))
model.check_test_point()

which gives:

lower_beta_interval__ -1.3862943611198906
upper_beta_interval__ -1.3862943611198906
beta_interval__ -13.862943611198906
beta_gamma_interval__ -1.3862943611198906
alpha_gamma_interval__ -1.3862943611198906
gamma_log__ 0.14556084518264534
I_interval__ -13.862943611198906
S_interval__ -13.862943611198908
po_interval__ -1.3862943611198906
Y -18.29351833496805
Yp -3.574558647108001
Yo -26.885429018279982
M -inf
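The per-node breakdown is useful because the model's total logp is just the sum of these terms, so a single -inf entry drags the whole sum to -inf. A quick illustration with made-up values mimicking the output above:

```python
import numpy as np

# Hypothetical per-RV logp terms, shaped like the check_test_point output:
# all finite except one node (M) that evaluates to -inf.
per_rv_logp = np.array([-1.39, -1.39, -13.86, -18.29, -3.57, -np.inf])

# The total model logp is the sum of the per-RV terms; one -inf term
# makes the whole sum -inf, which is exactly what model.logp reports.
print(per_rv_logp.sum())  # → -inf
```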

So the problem is the node M = pm.Binomial("M", n=Yp, p=mz_freq, observed=loss_shifted40), and looking closer at the test value of the input parameter:

In[1]: Yp.tag.test_value
Out[1]: array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])

The n of the Binomial is smaller than the observed values, which causes the -inf in the log-likelihood. An easy temporary fix is to provide a valid test value for Yp:

Yp = pm.Poisson("Yp", mu=mu_Yp, shape=len(fem_cases), testval=loss_shifted40+1000)

which seems to resolve the problem.
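The failure mode can be reproduced directly with scipy's Binomial log-pmf (a sketch, not the original code): with n = 0 every observed count is outside the support, while any n at least as large as the data gives a finite value.

```python
import numpy as np
from scipy.stats import binom

loss_shifted40 = np.array([155, 267, 105, 223, 183, 189, 262, 178, 272, 187])

# Yp's default test value is all zeros, so n < observed everywhere and
# every log-pmf term is -inf:
bad = binom.logpmf(loss_shifted40, n=np.zeros(10, dtype=int), p=0.05)

# With n >= observed (as with testval=loss_shifted40 + 1000) every term
# is finite, so the model logp is finite at the test point:
good = binom.logpmf(loss_shifted40, n=loss_shifted40 + 1000, p=0.05)

print(np.isneginf(bad).all(), np.isfinite(good).all())  # → True True
```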

Below is the full code I ran; I hope you find it helpful.

import pymc3 as pm
import numpy as np

loss_shifted40 = np.array([155, 267, 105, 223, 183, 189, 262, 178, 272, 187])
fem_cases = np.array([2, 2, 2, 3, 1, 0, 1, 0, 0, 0])
mz_freq = np.array([0.05, 0.047, 0.045, 0.063, 0.06, 0.062, 0.053, 0.057, 0.061, 0.058])

pp_mean = 0.0595759774518
N = 3392425
with pm.Model() as model:

    pp = pp_mean
    lower_beta = pm.Uniform("lower_beta", lower=0, upper=1)
    upper_beta = pm.Uniform("upper_beta", lower=lower_beta, upper=lower_beta+3)
    beta = pm.Uniform("beta", lower=lower_beta, upper=upper_beta, shape=len(fem_cases))

    beta_gamma = pm.Uniform("beta_gamma", lower=0, upper=120)
    alpha_gamma = pm.Uniform("alpha_gamma", lower=(2/3)*beta_gamma/7, upper=(4/3)*beta_gamma/7)
    gamma = pm.Gamma("gamma", alpha=alpha_gamma, beta=beta_gamma)
    
    I = pm.Uniform("I", lower=0, upper=fem_cases.sum(), shape=len(fem_cases))
    R = pm.Deterministic("R", (gamma*I))
    S = pm.Uniform("S", lower=0, upper=N-(I+R), shape=len(fem_cases))

    inc = pm.Deterministic("inc", beta*I*(S)/(S+I+R)-R)
    
    po = pm.Uniform("po", lower=0, upper=1)
 
    Y = pm.Poisson('Y', mu=inc, shape=len(fem_cases))

    mu_Yp = pm.Deterministic("mu_Yp", var=Y*pp)
    
    Yp = pm.Poisson("Yp", mu=mu_Yp, shape=len(fem_cases), testval=loss_shifted40+1000)

    Yo = pm.Binomial('Yo', n=Y, p=po, observed=fem_cases)    

    M = pm.Binomial("M", n=Yp, p=mz_freq, observed=loss_shifted40)
    
    trace = pm.sample(10000)