Would something like converting the previous posterior into a prior for your new model, as in this example, work? It sounds like you want to iterate on the model without throwing away good information from previous runs, which this scheme would accomplish. I'm not an expert on MCMC theory, but I suspect that changing the model and then resuming sampling from a past chain would bias the result (my intuition: the first half of the chain would come from the original model and the second half from the new model, so the whole chain would represent neither).
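Roughly, the posterior-to-prior idea looks like the sketch below (a minimal version assuming scalar continuous parameters; `from_posterior` is a hypothetical helper name, not part of the PyMC API):

```python
import numpy as np
import pymc as pm
from scipy import stats

def from_posterior(name, samples):
    """Turn 1-D posterior samples into an Interpolated prior via a KDE."""
    smin, smax = samples.min(), samples.max()
    width = smax - smin
    # Evaluate the KDE on a grid slightly wider than the sampled range
    x = np.linspace(smin - 0.1 * width, smax + 0.1 * width, 100)
    pdf = stats.gaussian_kde(samples)(x)
    return pm.Interpolated(name, x, pdf)

# Usage inside the new model (hypothetical variable name "mu"):
# with pm.Model():
#     mu = from_posterior("mu", idata.posterior["mu"].values.ravel())
#     ...
```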
If all you care about is the starting points, you can pass a dictionary of initial values to pm.sample via the initvals argument. As long as the variable names stay the same between models, it's easy enough to write a function that takes an idata, pops off the last draw of each variable, and returns such a dictionary.
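Something like this might work as a starting point (a sketch assuming a standard InferenceData with a posterior group; `last_draws_as_initvals` is a hypothetical helper name):

```python
import pymc as pm

def last_draws_as_initvals(idata):
    """Return one initvals dict per chain, each holding the final
    posterior draw of every variable in the idata."""
    post = idata.posterior
    return [
        {var: post[var].isel(chain=c, draw=-1).values for var in post.data_vars}
        for c in range(post.sizes["chain"])
    ]

# Usage with the new model (hypothetical; variable names must match,
# and the number of chains should match the old trace):
# with new_model:
#     idata_new = pm.sample(initvals=last_draws_as_initvals(idata_old))
```

pm.sample also accepts a single dict, in which case every chain starts from the same point; the per-chain list above just preserves a bit more of the old chains' diversity.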