Getting different results each time I re-run the model

Hi,
I am totally new to Bayesian statistics and PyMC. I am trying to reproduce code from the book "Bayesian Analysis with Python" by O. Martin, the coin-flipping example. I use a VS Code notebook as my environment. The code:

import numpy as np
import pymc as pm
import preliz as pz
import arviz as az

np.random.seed(123)
trials = 4
theta_real = 0.35  # unknown value in a real experiment
data = pz.Binomial(n=1, p=theta_real).rvs(trials)

with pm.Model() as our_first_model:
    θ = pm.Beta('θ', alpha=1., beta=1.)
    y = pm.Bernoulli('y', p=θ, observed=data)
    idata = pm.sample(1000, chains=4)
az.plot_trace(idata, combined=True)

The mean of theta should be around theta_real = 0.35 (in the book it is 0.34).
I ran the script several times, and each time I got a different mean, as shown in the images below.

In practice, how do I know which one is correct? And is it normal to get different results like this?
Thank you very much.

The answer is that all of them are correct. This model actually has a nice closed form (it's called a conjugate model), which means it's possible to work out an analytical solution. That solution does converge to a single answer as the number of observations grows, but since you are only looking at 4 flips, there are five possible posteriors you can end up with, one for each number of tails you might observe (0, 1, 2, 3, or 4). Here are the five possible results:
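As a sketch of what those five analytic posteriors look like (assuming the uniform Beta(1, 1) prior from the model above, so the conjugate posterior after k tails in n flips is Beta(1 + k, 1 + n - k)):

```python
from scipy import stats

# With a Beta(1, 1) prior and k tails observed in n = 4 flips,
# Beta-Bernoulli conjugacy gives the posterior Beta(1 + k, 1 + n - k).
n = 4
for k in range(n + 1):
    a, b = 1 + k, 1 + n - k
    post = stats.beta(a, b)
    print(f"{k} tails: posterior Beta({a}, {b}), mean = {post.mean():.3f}")
```

The posterior means (1/6, 2/6, 3/6, 4/6, 5/6) are what the MCMC estimates should cluster around, depending on which of the five datasets your seed happened to generate.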

If you’re new to Bayes and stats, it’s a nice exercise to work out what these five distributions are, and confirm the analytic results against what MCMC gives you.

Bonus: Convince yourself that it’s impossible for the order in which you observe the flips to matter to your belief about the coin’s probability of landing tails up (e.g. seeing HTHH and HHTH are identical evidence). This is an extremely important property!
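A quick way to see this (a minimal check, assuming tails are coded as 1, as in the model above): with a Beta(1, 1) prior, the posterior parameters depend only on the counts of tails and heads, not on the order in which they arrived.

```python
# Two orderings of the same evidence: 1 tail and 3 heads (1 = tails, 0 = heads).
HTHH = [0, 1, 0, 0]
HHTH = [0, 0, 1, 0]

def posterior_params(flips, alpha=1, beta=1):
    """Beta posterior parameters after observing a list of Bernoulli flips."""
    tails = sum(flips)
    return (alpha + tails, beta + len(flips) - tails)

# Both orderings yield the same Beta(2, 4) posterior.
assert posterior_params(HTHH) == posterior_params(HHTH)
```

The likelihood is a product of Bernoulli terms, and products are commutative, so only the counts matter. This is the exchangeability property mentioned above.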


Wow, I get the point now. Thank you very much for the detailed explanation!