I am relatively new to PyMC3 and the Bayesian space. I have spent many, many hours trying to crack what pm.Potential() actually does.

I have tried the code below. From my understanding so far, the two posterior plots should be very similar, but apparently they are not:
```python
import numpy as np
import pymc3 as pm

N = 1000
prob = 0.1
y_obs = np.random.binomial(n=N, p=prob)

with pm.Model() as model_exp2:
    # model 1, the usual way: prior + observed Binomial likelihood
    beta_p_M1 = pm.Beta('beta_p_M1', alpha=1, beta=1)
    M1 = pm.Binomial('M1', p=beta_p_M1, n=N, observed=y_obs)

    # model 2, with pm.Potential
    beta_p_M2 = pm.Beta('beta_p_M2', alpha=1, beta=1)
    M2 = pm.Potential('M2', pm.Binomial.dist(n=1000, p=0.1).logp(beta_p_M2))

    trace = pm.sample(3000, chains=2, cores=2)  # `cores` replaces the deprecated `njobs`

burned_trace = trace[500:]
pm.plot_posterior(burned_trace)
```
Explanation of what I am doing:

In model 1:
this is the simplest parameter estimation, with an uninformative Beta(1, 1) prior.
In model 2:
From my understanding, the most common syntax for pm.Potential is pm.Potential('any name', likelihood_function).
My likelihood function is defined as pm.Binomial.dist(n=1000, p=0.1).logp(beta_p_M2).
Keep in mind that pm.Binomial.dist(n=1000, p=0.1) is exactly the distribution that y_obs was drawn from.
pm.Binomial.dist(n=1000, p=0.1).logp(beta_p_M2) means: take that binomial distribution, evaluate its probability at value = beta_p_M2 (which I believe is the prior), and finally take the log of that probability.
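To make that concrete, here is a standalone sketch of the number that expression produces. The helper `binom_logp` is my own re-implementation of the Binomial log-pmf (using scipy's gammaln so non-integer values are accepted, which is my understanding of how PyMC3 computes it); it compares evaluating the log-pmf at a probability-scale value like beta_p_M2 versus at an observed count like y_obs:

```python
import numpy as np
from scipy.special import gammaln

def binom_logp(value, n, p):
    # log C(n, value) + value*log(p) + (n - value)*log(1 - p),
    # written with gammaln so that non-integer `value` is accepted
    return (gammaln(n + 1) - gammaln(value + 1) - gammaln(n - value + 1)
            + value * np.log(p) + (n - value) * np.log1p(-p))

# What M2's Potential evaluates: the log-pmf at a probability-scale value,
# e.g. a draw from the Beta prior near 0.1 -> a very large negative number
print(binom_logp(0.1, n=1000, p=0.1))

# What M1's likelihood evaluates: the log-pmf at the observed count,
# e.g. y_obs near N * prob = 100 -> a moderate log-probability
print(binom_logp(100, n=1000, p=0.1))
```

So the two constructions are feeding very different values into the same log-pmf.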
In my head, M1 and M2 should give nearly identical results, but they don't.
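For reference, model 1 has a closed form I can compare both posteriors against: a Beta(1, 1) prior is conjugate to the Binomial likelihood, so the exact posterior is Beta(1 + y_obs, 1 + N - y_obs). A quick sketch (with y_obs hard-coded to a plausible draw):

```python
from scipy import stats

N = 1000
y_obs = 104  # a hypothetical draw from Binomial(1000, 0.1)

# Conjugate update: Beta(1, 1) prior + Binomial likelihood
# -> Beta(1 + y_obs, 1 + N - y_obs) posterior
posterior = stats.beta(1 + y_obs, 1 + N - y_obs)
print(posterior.mean())  # (1 + y_obs) / (2 + N), about 0.105
```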
Could someone enlighten me?