`pm.sample_prior_predictive` relies on each distribution's `random` method only. This means that it ignores potentials, like the one you defined, because they only affect the model's logp. To draw samples that are distributed correctly according to your model's potentials, you have to use a different algorithm, such as MCMC, rejection sampling, or SMC.
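You can see this with a minimal sketch (variable names are made up for illustration): the prior predictive draws come out as a plain standard normal even though the Potential heavily penalizes values away from zero.

```python
import numpy as np
import pymc3 as pm

with pm.Model():
    x = pm.Normal("x", mu=0.0, sigma=1.0)
    # This Potential strongly pulls x toward 0 in the logp...
    pm.Potential("soft_constraint", -100.0 * x ** 2)
    # ...but the prior predictive draws ignore it entirely.
    prior = pm.sample_prior_predictive(1000)

print(np.std(prior["x"]))  # ~1.0, i.e. the unconstrained prior
```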
Luckily, pymc3 can help you with that. What you have to do is:
- Write your model but don’t assign observations to any variable.
- Call `pm.sample` to generate samples using MCMC (NUTS would be used by default in your short example) or SMC (which could also work as a form of rejection sampling). A sketch follows this list.
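Putting the two steps together, a minimal sketch reusing the toy Potential from above (names are illustrative only):

```python
import pymc3 as pm

with pm.Model() as model:
    x = pm.Normal("x", mu=0.0, sigma=1.0)
    pm.Potential("soft_constraint", -100.0 * x ** 2)
    # No observed variables, so this samples from the prior *including*
    # the Potential; NUTS is picked automatically for the continuous variable.
    trace = pm.sample(draws=2000, tune=1000, chains=2)

print(trace["x"].std())  # much smaller than 1: the Potential now shapes the draws
```

If you prefer SMC, `pm.sample_smc` should work as a drop-in replacement for `pm.sample` here, depending on your pymc3 version.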
As with any use of MCMC, you will have to evaluate the convergence of the chains. Check the effective sample size, the R-hat, the chain dispersion, the divergences, the energy plot (the last two only apply to HMC methods like NUTS), and any other check that suits your needs.
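A quick sketch of those checks with ArviZ, assuming the `trace` and `model` from the sketch above:

```python
import arviz as az

idata = az.from_pymc3(trace, model=model)
print(az.summary(idata))                           # ess_bulk, ess_tail, r_hat per variable
print(int(idata.sample_stats["diverging"].sum()))  # number of divergent transitions
az.plot_energy(idata)                              # energy plot, only meaningful for HMC/NUTS
```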
Ideally, you would rewrite your potentials into a custom distribution, so that the generated samples are modified as well. I know that this can't be done for many relevant models, like conditional random fields, so don't worry much about it.
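When it is possible, the idea looks roughly like the following sketch: a positivity constraint that could have been a Potential is folded into a `pm.DensityDist` that carries both a `logp` and a matching `random` method, so forward sampling respects it too (the half-normal here is only an illustration, not your model):

```python
import numpy as np
import pymc3 as pm
import theano.tensor as tt
from scipy import stats

def halfnormal_logp(value):
    # standard normal restricted to value > 0, i.e. the constraint lives in the logp
    return tt.switch(value > 0, pm.Normal.dist(0.0, 1.0).logp(value) + np.log(2.0), -np.inf)

def halfnormal_random(point=None, size=None):
    # matching forward sampler, so generated samples respect the constraint as well
    return np.abs(stats.norm.rvs(size=size))

with pm.Model():
    x = pm.DensityDist("x", halfnormal_logp, random=halfnormal_random)
    prior = pm.sample_prior_predictive(1000)  # draws are now all positive
```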