I’ve been seeing some odd behavior in a hierarchical model when using sample_prior_predictive: some of the variance parameters, which I model with Inverse Gamma(1, 2) distributions, produce what look like very odd samples. Here’s an arviz plot of the results of sample_prior_predictive for 4 such variables:
Here’s the code I used to make this:
```python
import pymc3 as pm
import arviz as az
import matplotlib.pyplot as plt

m = pm.Model()
with m:
    pm.InverseGamma('Inverse Gamma(1,2)', alpha=1, beta=2, shape=4)

samples = pm.sampling.sample_prior_predictive(model=m)
data = az.from_pymc3(prior=samples)
az.plot_density(data, var_names=pm.sampling.get_default_varnames(m.named_vars, False), group='prior')
plt.show()
```
Actually, TBQH, I’m not sure whether this is a bug in sample_prior_predictive or a bug in arviz. These plots look kooky.
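For what it’s worth, a quick sanity check against the analytic quantiles (this uses scipy, which is not in the original snippet) suggests the samples themselves may be fine, because this prior genuinely has a very heavy right tail:

```python
from scipy import stats

# scipy's invgamma uses a = alpha (shape) and scale = beta, so
# InverseGamma(alpha=1, beta=2) is:
dist = stats.invgamma(a=1, scale=2)

# For alpha = 1 the CDF has the closed form F(x) = exp(-beta / x),
# so the quantile function is q(p) = -beta / log(p) and blows up fast.
lo, mid, hi = dist.ppf([0.03, 0.5, 0.97])
print(lo, mid, hi)  # the central 94% interval already stretches past 60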
Yes, sorry, my bad: arviz made a very distorted plot that hid the exponential drop-off.
The density plot by default shows the values in the 94% HPD interval. Your distributions have a very long tail with very low density, so if you change the 0.94 default to something like credible_interval=0.999, you should see something closer to what you are expecting.
Yes, that’s true. But in my case, the behavior at the left end (in the range 0 ≤ x ≤ 10) is what really matters, and there is a drop-off there. Plotting this as a line plot, as arviz does, obscures what’s really going on. A histogram captures the behavior much better in this case, because the default number of samples (500) is not nearly enough to approximate the curve well enough to draw it as a line. With 10 times the samples, the line would be a better visualization.
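A rough sketch of the histogram alternative (plain numpy/matplotlib, drawing from the prior directly rather than via sample_prior_predictive; the 0–10 window is the region of interest from the discussion above):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe for scripts
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
# InverseGamma(1, 2) draws: beta / Gamma(alpha, scale=1) with alpha=1, beta=2.
x = 2.0 / rng.gamma(shape=1.0, scale=1.0, size=5_000)

# Restrict to the region that matters and histogram it: the rise to the
# mode at beta / (alpha + 1) = 1 and the drop-off past it are visible,
# even though a full-range KDE over the heavy tail looks flat here.
fig, ax = plt.subplots()
ax.hist(x[x <= 10], bins=50, density=True)
ax.set_xlabel("x")
fig.savefig("invgamma_prior_hist.png")
```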
I see. I may have a solution, but I have to play with the code a little bit.
EDIT: This should fix the problem.