Reusing priors can be tricky, because PyMC needs priors in closed form, while the posteriors we get are collections of draws (effectively histograms).
There are some examples of reusing posteriors as priors by wrapping the draws in a kernel density estimator. This example uses sequential learning, but the same principle applies to reusing priors across different datasets: Updating priors — PyMC example gallery
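A minimal sketch of that idea, assuming you already have a 1-D array of posterior draws for a parameter (the names, grid size, and fake draws below are illustrative): smooth the draws with a Gaussian KDE and tabulate it on a grid, which is exactly the kind of `(x_points, pdf_points)` pair that `pm.Interpolated` accepts as a custom prior.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical posterior draws for a parameter "mu" from a previous fit.
rng = np.random.default_rng(0)
posterior_draws = rng.normal(loc=1.5, scale=0.3, size=4000)

# Smooth the draws with a kernel density estimator.
kde = gaussian_kde(posterior_draws)

# Tabulate the density on a grid covering the draws with some margin,
# so the interpolated prior does not cut off the tails too abruptly.
lo, hi = posterior_draws.min(), posterior_draws.max()
margin = 0.1 * (hi - lo)
x = np.linspace(lo - margin, hi + margin, 200)
pdf = kde(x)

# In a new PyMC model these arrays would feed an interpolated prior:
#   with pm.Model():
#       mu = pm.Interpolated("mu", x_points=x, pdf_points=pdf)
print(x.shape, pdf.shape)
```

Note `pm.Interpolated` only handles univariate priors, so any correlation between parameters in the original posterior is lost.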
You could also just take the mean estimate from the large dataset and use it directly, or, if you need uncertainty, put a normal around it (or use the closest normal fit to the posterior draws). That is the gist of what this fancier utility does: histogram_approximation — pymc_experimental 0.0.13 documentation
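In code, that moment-matching shortcut is just two summary statistics. A sketch, assuming 1-D posterior draws from the large-dataset fit (the fake draws and names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical posterior draws for a coefficient from the large dataset.
posterior_draws = rng.normal(loc=-0.8, scale=0.12, size=4000)

# Closest normal fit by moment matching: the mean and std of the draws.
mu_hat = posterior_draws.mean()
sigma_hat = posterior_draws.std()

# Reused as a prior in a new model on the smaller dataset:
#   with pm.Model():
#       beta = pm.Normal("beta", mu=mu_hat, sigma=sigma_hat)
print(mu_hat, sigma_hat)
```

If the posterior is clearly skewed or heavy-tailed, a normal moment match will understate the tails, which is one reason the KDE/interpolated route or the histogram approximation can be preferable.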
It’s also worth keeping the boring option in mind: fit everything together. Complex effects shouldn’t be an issue for the smaller subsets if you model them in a way that allows for hierarchical regularization. Sometimes the speed gain from using posterior approximations as priors doesn’t outweigh the precision loss. It’s definitely worth exploring (the downside being that this is all very exploratory, even research-wise).
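To make the hierarchical-regularization point concrete, here is a tiny numpy sketch (simulated data, closed-form partial pooling of group means with an assumed between-group sd, rather than a full PyMC model): small groups borrow strength from the rest, so fitting everything together doesn’t let them run wild.

```python
import numpy as np

rng = np.random.default_rng(2)

# One large group and several small groups sharing a common effect.
true_means = np.array([1.0, 1.1, 0.9, 1.05])
sizes = np.array([2000, 15, 15, 15])
groups = [rng.normal(m, 1.0, n) for m, n in zip(true_means, sizes)]

raw_means = np.array([g.mean() for g in groups])
grand_mean = np.concatenate(groups).mean()

# Partial pooling with known within-group sd (1.0) and an assumed
# between-group sd (tau): each group mean is pulled toward the grand
# mean, with small groups shrunk much more than the large one.
tau2, sigma2 = 0.05**2, 1.0**2
weights = tau2 / (tau2 + sigma2 / sizes)  # 0 = full pooling, 1 = none
pooled_means = grand_mean + weights * (raw_means - grand_mean)

print(weights.round(2))
```

In a real PyMC model the same shrinkage falls out automatically from a hierarchical prior on the group effects, with tau estimated from the data instead of assumed.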