I am trying to use PyMC for a Bayesian Inversion Problem. I would need to start with a prior, get some samples using MCMC, then change my forward function and resume sampling, i.e., I do not want to start again with the original prior. Is there a way to do it?
For the moment, it might be enough to use the standard Metropolis-Hastings sampler. My intuition is that, since we are working with Markov chains, it should be enough to recover the last state of the previous trace and resume sampling from there. However, I noticed that calling `pm.sample` with the argument `trace=previous_trace` no longer works, while in PyMC3 it did.
Will I achieve what I want if I initialize a sampler once at the very beginning, say `sampler = pm.Metropolis()` (or any other step method), and then call `pm.sample(step=sampler)`? I am not sure whether this is actually a solution: it seems to produce some meaningful results, but I still do not fully understand how it works.
I also tried to follow the suggestions in this post and the linked ones, but I could not make them work. Does anybody know whether a solution exists? A hard-coded one would be enough for the moment.
Would something like converting the previous posterior to a prior for your new model, as in this example, work? It seems like you want to iterate on the model without throwing away good information from the previous runs, which this scheme would accomplish. I’m not an expert on MCMC theory, but I think changing the model then resuming sampling from a past chain would bias the result (my intuition: the first half of the chain would be from the original model, while the other half would be from the new model, and thus the whole chain would represent neither).
If all you care about is the starting points, you can provide a dictionary of initial values to `pm.sample` via the `initvals` argument. As long as all the variable names stayed the same between models, it would be easy enough to write a function that takes in an `idata`, pops off the last samples for each variable, and returns a dictionary.
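Such a helper could look something like this (a sketch; `last_draws` is a hypothetical name, and it assumes the variables live in `idata.posterior`):

```python
import numpy as np
import arviz as az

def last_draws(idata, chain=0):
    """Hypothetical helper: grab the final draw of every posterior
    variable, in the dict form that pm.sample's `initvals` expects."""
    post = idata.posterior
    return {
        name: post[name].isel(chain=chain, draw=-1).to_numpy()
        for name in post.data_vars
    }

# Toy InferenceData: one variable, 2 chains x 4 draws
idata = az.from_dict(posterior={"mu": np.arange(8.0).reshape(2, 4)})
start = last_draws(idata)  # {"mu": array(3.)}
```

You could then resume with something like `pm.sample(..., tune=0, initvals=last_draws(trace_1))` inside the modified model.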
Hi, thank you for your answer. The example you pointed me to is not appropriate for my use case, because I would like to avoid drawing samples from the posterior and using them as a prior for the new model. Moreover, I have good reasons to change the model, and for my application it will not bias the results.
I would like to do something like:

```python
trace_1 = pm.sample(1000)
# some modifications to model_1
trace_2 = pm.sample(1000, tune=0)
```

and then gather `trace_1` and `trace_2` into a single `trace`, in such a way that, if I did not change the model at all, the code above would be equivalent to

```python
trace = pm.sample(2000)
```
Thanks for your help!
If you want the results joined, could you just use arviz to merge the inference data outputs?
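For instance, a sketch with `az.concat` (toy inference data; assumes the two runs share variable names and chain counts):

```python
import numpy as np
import arviz as az

# Two toy runs with matching variables: 2 chains x 100 draws each
t1 = az.from_dict(posterior={"mu": np.zeros((2, 100))})
t2 = az.from_dict(posterior={"mu": np.ones((2, 100))})

# Stack the second run after the first along the draw dimension
merged = az.concat(t1, t2, dim="draw")
# merged.posterior["mu"] now has 200 draws per chain
```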
I don’t want the results to be joined; I would like to stop and resume sampling without starting again from my initial uninformative prior.