To get the log likelihood, you can use a `DensityDist` instead of a `Potential` (making sure all the inputs of `observed` are actual observed data, see Using a random variable as observed - #5 by OriolAbril). I think you'll need a lambda on `likelihood` so the function passed to `DensityDist` doesn't take `params` as an input.
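For concreteness, a minimal sketch of that pattern in PyMC3 (here `likelihood` is assumed to return a Theano log-probability tensor, and the prior on `params` is just a placeholder):

```python
import pymc3 as pm

with pm.Model() as model:
    params = pm.Normal("params", mu=0.0, sigma=1.0)  # placeholder prior

    # Close over `params` so the logp function passed to DensityDist
    # only takes the observed data as arguments.
    def logp(stateActions1, stateActions2, rewards):
        return likelihood(params, stateActions1, stateActions2, rewards)

    # With a dict as `observed`, PyMC3 calls `logp` with the dict
    # entries as keyword arguments.
    like = pm.DensityDist(
        "like",
        logp,
        observed={
            "stateActions1": stateActions1,
            "stateActions2": stateActions2,
            "rewards": rewards,
        },
    )
```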
If I understand your suggestion correctly, I've replaced the `like = pm.Potential…` line with:
```python
def likelihood_wrap(params):
    return lambda stateActions1, stateActions2, rewards: likelihood(params, stateActions1, stateActions2, rewards)

likelihood_wrap_ = likelihood_wrap(params)
like = pm.DensityDist(
    "like",
    likelihood_wrap_(stateActions1, stateActions2, rewards),
    observed={"stateActions1": stateActions1, "stateActions2": stateActions2, "rewards": rewards},
)
```
But that returns the error:

```
TypeError: 'TensorVariable' object is not callable
```
I've also played around with using the Theano `@as_op` wrapper to try to avoid that error. Can you be a little more specific about how I should call `DensityDist` in my case? Thank you!
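The error most likely comes from calling the lambda before handing it to `DensityDist`: `likelihood_wrap_(stateActions1, stateActions2, rewards)` evaluates to a `TensorVariable`, which `DensityDist` then tries to call as a logp function. A sketch of the corrected call, under the same assumptions as above:

```python
like = pm.DensityDist(
    "like",
    likelihood_wrap_,  # pass the callable itself, not the evaluated tensor
    observed={"stateActions1": stateActions1, "stateActions2": stateActions2, "rewards": rewards},
)
```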
The second model doesn't seem equivalent to the initial one. Given that the first one works correctly, it means that `likelihood` returns the log likelihood, which can't be used as a probability as is, so using it as `p` in the `Multinomial` ends up with NaNs or Infs. You could exponentiate it to make sure the probabilities are in the right 0-1 range (assuming the normalization factor is present), but even then the model is different from the original one.
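As a rough sketch of that exponentiation (assuming `loglike` is the Theano tensor returned by `likelihood`; `n_trials` and `counts` are hypothetical stand-ins for the Multinomial's `n` and `observed`):

```python
import theano.tensor as tt

loglike = likelihood(params, stateActions1, stateActions2, rewards)
p = tt.exp(loglike)
p = p / p.sum()  # renormalize so the probabilities sum to 1
like = pm.Multinomial("like", n=n_trials, p=p, observed=counts)
```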
I forgot to mention that I do exponentiate the output of the likelihood function when using `pm.Multinomial`; however, it sounds like using `DensityDist` is the way to go.