A self-contained example of this is:
import pymc as pm

with pm.Model() as m:
    sigma = pm.Normal("sigma", 0.001, 1)  # bad prior for sigma: allows negative values
    y = pm.Normal("y", mu=0, sigma=sigma, observed=[-1, 0, 1])

with m:
    try:
        pm.sample_prior_predictive()
    except Exception as exc:
        print(f"{type(exc).__name__}: {exc}")  # ValueError: scale < 0
    idata = pm.sample()
    pm.sample_posterior_predictive(idata)  # fine
Posterior predictive sampling works because all the posterior draws of sigma happen to be positive, whereas the unconstrained prior produces negative draws of sigma roughly half the time, and those negative scales make prior predictive sampling fail.
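The "roughly half the time" claim can be checked directly with NumPy, independently of PyMC (a minimal sketch; the sample size and seed are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
# Draw from the same prior, Normal(mu=0.001, sigma=1)
draws = rng.normal(loc=0.001, scale=1.0, size=100_000)
neg_frac = np.mean(draws < 0)  # fraction of negative sigma draws
print(f"fraction negative: {neg_frac:.3f}")  # close to 0.5

Any of those negative draws would be passed as the scale of y during prior predictive sampling, which is what triggers the ValueError above.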