Simple model question

Hi!
I agree that in the above results the posteriors of both P and N seem to zero in on the true parameter values, but in my experience that is not always the case for this model.

I took the time to rerun @maab’s example with n_experiments = 10000.
In this example, the posteriors have zeroed in on the wrong values (see code and plot below). Each sampling chain also zeros in on a different value with confidence (small posterior std).
So I am wondering whether one can actually estimate both N and P with confidence when the observed values are the numbers of heads.
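(As a side note, not part of the rerun below: one way to get a feel for why this is difficult is to compare the average per-observation log-likelihood for a few (n, p) pairs that share the same mean n*p. This is just a rough scipy-based sketch of my own, not taken from @maab’s example.)

import numpy as np
from scipy.stats import binom

np.random.seed(123)
obs = np.random.binomial(12, 0.5, 10000)             # same setup: N=12, P=0.5

# average log-likelihood per observation for a few pairs with the same mean n*p = 6
for n, p in [(12, 0.5), (15, 0.4), (20, 0.3), (30, 0.2)]:
    print(n, p, binom.logpmf(obs, n, p).mean())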

import numpy as np
import pymc3 as pm

N = 12
P = 0.5
n_experiments = 10000
obs = np.random.binomial(N, P, n_experiments)        # observed number of heads from N flips in each experiment

with pm.Model() as m:
    n = pm.DiscreteUniform('n', 1, 50)               # unknown number of coin flips
    p = pm.Uniform('p', 0., 1., transform=None)      # prior on the probability of heads
    y = pm.Binomial('k', n, p, observed=obs)         # observed number of heads
    trace = pm.sample(10000, tune=1000, step=pm.Metropolis())
    pm.traceplot(trace)
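
For completeness, here is how I would check the per-chain means and standard deviations mentioned above (a quick sketch against PyMC3's MultiTrace API, assuming the trace from the model above):

# per-chain posterior summaries for n and p
for c in trace.chains:
    n_c = trace.get_values('n', chains=[c])
    p_c = trace.get_values('p', chains=[c])
    print("chain %d: n = %.2f +/- %.2f, p = %.3f +/- %.3f"
          % (c, n_c.mean(), n_c.std(), p_c.mean(), p_c.std()))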