I am currently working on a project where the chains sometimes converge and sometimes do not. I would like to reproduce these results, but setting a random seed in the pymc3 `sample` method does not change anything.
My method call looks like this:

```python
trace = pm.sample(model=this_model, tune=500, draws=1000,
                  init="advi+adapt_diag", target_accept=0.99, random_seed=100)
```

Since the full model definition is a little long, I tried to create a minimal example that reproduces this behavior:
```python
import numpy as np
import pymc3 as pm

with pm.Model() as model:
    a = pm.Normal('a', 0, 1)
    b = pm.Normal('b', 0, 1)
    c = pm.Deterministic('c', a * b)

np.random.seed(42)
trace_a = pm.sample(model=model, tune=500, draws=1000,
                    init="advi+adapt_diag", target_accept=0.99, random_seed=42)
np.random.seed(42)
trace_b = pm.sample(model=model, tune=500, draws=1000,
                    init="advi+adapt_diag", target_accept=0.99, random_seed=42)
print((trace_a['c'] == trace_b['c']).all())
```
The last line indeed prints True, but if you look at the progress bars you can see that the losses in the init step, as well as the number of steps needed, are not the same:
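My current suspicion (an assumption on my part, not verified against the pymc3 internals) is that the ADVI init draws from a separate RNG stream (Theano's) that neither `np.random.seed` nor the `random_seed` argument reaches. A minimal NumPy-only analogy of that failure mode:

```python
import numpy as np

# Analogy (assumption, not a statement about pymc3 internals): seeding the
# global NumPy RNG does not affect a generator that keeps its own state,
# just as np.random.seed / random_seed may not reach Theano's RNG in ADVI.
np.random.seed(42)
global_draw_1 = np.random.rand()        # controlled by the global seed

own_stream = np.random.default_rng()    # separate, independently seeded stream
stream_draw_1 = own_stream.random()
stream_draw_2 = own_stream.random()

np.random.seed(42)
global_draw_2 = np.random.rand()

print(global_draw_1 == global_draw_2)   # the reseeded global stream repeats
print(stream_draw_1 == stream_draw_2)   # the separate stream does not
```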
I am using pymc3 3.11.2, Theano-PyMC 1.1.2, and Python 3.8.8.
Any ideas why this is happening, or whether it matters for the reproducibility of the actual sampling part?
Thanks a lot