I am following the Survival Analysis tutorial (with some of my own made-up data):
coords = {"intervals": intervals}
with pm.Model(coords=coords) as model:
lambda0 = pm.Gamma("lambda0", 0.1, 0.1, dims="intervals")
beta = pm.Normal("beta", 0, sigma=0.001)
lambda_ = pm.Deterministic("lambda_", T.outer(T.exp(beta * np.log(traindf.days_since_last_order+0.01)), lambda0))
mu = pm.Deterministic("mu", exposure * lambda_)
obs = pm.Poisson("obs", mu, observed=death)
However, I get this error:

```
Logp initial evaluation results:
{'lambda0': -18.2, 'beta': -59886.4, 'obs': -inf}
```
I’m not sure why a Gamma prior (my `lambda0`) would take a negative value, if a Gamma distribution is strictly non-negative?
I’m probably missing something obvious…
Thanks for the help in advance.
As the message says, this is reporting the log probability (`logp`) of the initial value for `beta` under your specified prior, not the sampled value itself.
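For illustration: the numbers in that report can be reproduced with `model.point_logps()`, and a Gamma's draws are non-negative even though its logp (a log-density) is typically a negative number:

```python
import pymc as pm

# print(model.point_logps())  # reproduces the per-variable report above

gamma = pm.Gamma.dist(0.1, 0.1)
print(pm.draw(gamma) >= 0)         # True: samples are non-negative
print(pm.logp(gamma, 5.0).eval())  # a negative number (log-density, not a sample)
```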
My guess is that `np.log(traindf.days_since_last_order + 0.01)` is allowed to be negative (indeed you add 0.01, so I guess there are zeros, and log(0.01) < 0). So `mu` is negative, and the Poisson is undefined.
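A quick illustration of that guess, with made-up `days_since_last_order` values:

```python
import numpy as np

days = np.array([0.0, 1.0, 30.0])  # hypothetical values
print(np.log(days + 0.01))         # [-4.61  0.01  3.4] -- the zero becomes a negative log
```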
You should use `pm.math.exp(mu)` instead of just `mu` in the Poisson. Reading a bit more carefully, I see you exponentiate inside the outer product, but it’s probably safer to work entirely in the latent space until the last moment.
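A minimal sketch of that suggestion, assuming the same `intervals`, `traindf`, `exposure`, and `death` as in the original model (the priors on the log scale are illustrative only, not a recommendation):

```python
import numpy as np
import pymc as pm

log_days = np.log(traindf.days_since_last_order.values + 0.01)

coords = {"intervals": intervals}
with pm.Model(coords=coords) as model:
    # Stay on the log scale for as long as possible
    log_lambda0 = pm.Normal("log_lambda0", 0, 1, dims="intervals")
    beta = pm.Normal("beta", 0, sigma=1)
    # (n_obs, 1) broadcast against (n_intervals,) reproduces the outer product
    log_lambda = beta * log_days[:, None] + log_lambda0
    # Exponentiate only once, at the very end
    mu = pm.Deterministic("mu", exposure * pm.math.exp(log_lambda))
    obs = pm.Poisson("obs", mu, observed=death)
```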
Not sure how I missed that… As I said, I’m missing something obvious, thanks! Maybe it is because I expected PyMC to return the initial values that caused the problem, not the log-probabilities, but I should have read the message more carefully.
I don’t think `mu` is negative, because I take the exponential of `beta * np.log(traindf.days_since_last_order + 0.01)` and then the outer product with `lambda0`, which is also positive. I hear you, though, about keeping the transformations until the end and staying in the latent space as far as possible.
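One hedged possibility, purely as an illustration: even with `lambda_` strictly positive, `mu = exposure * lambda_` can still be exactly zero wherever `exposure` is zero.

```python
import numpy as np

# Made-up values: lambda_ is positive, but mu hits exactly zero
# wherever exposure is zero
exposure = np.array([0.0, 2.5, 10.0])
lambda_ = np.exp(np.array([0.3, -1.2, 0.7]))
print(exposure * lambda_)  # [ 0.     0.75  20.14]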
You can try `model.debug()` to help track down why `obs` is getting impossible values. It could also be a problem in `death`.
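For example (assuming the `model` and `death` from the first post are in scope):

```python
import numpy as np

# Report variables whose parameters or logp are invalid at the initial point
model.debug()
model.debug(verbose=True)  # also prints the offending values

# Sanity-check the observed data: a Poisson likelihood needs
# non-negative integer counts
death_arr = np.asarray(death)
print((death_arr >= 0).all(), np.issubdtype(death_arr.dtype, np.integer))
```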
So it seems that the initial parameters evaluate to a `mu` that is pretty much zero, and hence the error. I will tune the priors with the prior predictive distribution, but to test the theory I simply added a small constant to `mu` and it fit perfectly.
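That matches how the Poisson logp behaves at a rate of exactly zero: any positive observed count gets probability zero, hence the `-inf`. A minimal check, with the small-constant workaround sketched as a comment (the `1e-9` value is just an illustration):

```python
import pymc as pm

# A Poisson with rate exactly 0 assigns logp = -inf to any positive count
print(pm.logp(pm.Poisson.dist(mu=0.0), 1).eval())  # -inf

# Hypothetical version of the quick fix described above:
# mu = pm.Deterministic("mu", exposure * lambda_ + 1e-9)
```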
And yes, `model.debug(verbose=True)` came in handy.
Thanks for the help!