Hi @junpenglao
Thank you so much for your quick reply.
If I understand your statement correctly:

```python
pm.Binomial.dist(p=beta_p_M1, n=N).logp(y_obs)
```

- `pm.Binomial.dist(p=beta_p_M1, n=N)` represents the prior,
- `.logp(y_obs)` represents the observed data,
- and wrapping this term in `pm.Potential()` constructs the likelihood function.
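To convince myself of that reading, I checked the number it produces by hand (using scipy.stats instead of PyMC, with made-up values for `N`, `y_obs`, and one fixed draw of `beta_p_M1`): the term handed to `pm.Potential()` is just the binomial log-likelihood of the observed count.

```python
import numpy as np
from math import lgamma, log
from scipy import stats

# Made-up stand-ins for the model quantities
N = 20       # number of trials
y_obs = 7    # observed successes
p = 0.35     # one fixed value of beta_p_M1

# The quantity Binomial.dist(p, n).logp(y_obs) evaluates to:
log_lik = stats.binom.logpmf(y_obs, n=N, p=p)

# Written out: log C(N, y) + y*log(p) + (N - y)*log(1 - p)
log_choose = lgamma(N + 1) - lgamma(y_obs + 1) - lgamma(N - y_obs + 1)
manual = log_choose + y_obs * log(p) + (N - y_obs) * log(1 - p)

print(np.isclose(log_lik, manual))  # → True
```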
Although I have just confirmed your suggestion by running it in my IPython notebook, I am still confused.
So I have been studying this book by CamDavidsonPilon, where he shows a case study on the game show “The Price is Right”. In his code, he describes:
- the historical price as the ‘prior’ belief: Normal(35000, 7500)
- the observed/guessed prices (snow blower Normal(3000, 500) + Toronto trip Normal(12000, 3000)) as the ‘observed’ values
Based on your explanation, logp() should contain the ‘observed’ value, but CamDavidsonPilon puts the ‘prior’ into the logp() function.
Below is a screenshot of his model:
Any idea about this?
Thank you very much!