I’m having a hard time fitting a Gamma distribution… The weird thing is that it works nicely when there are <= 10k observed samples; at 50k or more I get convergence warnings, which can be mitigated by increasing the number of tuning steps…
But at 200k samples it crashes every time, with this error message:

```
ValueError: Mass matrix contains zeros on the diagonal.
The derivative of RV `alpha_log__`.ravel() is zero.
```
This is my toy dataset (my ‘observables’):

```python
data = np.random.gamma(shape=1.6, scale=1.0/0.00169, size=10000)
```
The histogram of this 10k dataset looks like this:
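For reference, the histogram can be reproduced with something like the following (matplotlib assumed available; the seed is hypothetical, just for reproducibility):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

np.random.seed(123)  # hypothetical seed, only for reproducibility
data = np.random.gamma(shape=1.6, scale=1.0/0.00169, size=10000)

# numpy's parameterization uses scale = 1/rate, so this is
# Gamma(alpha=1.6, beta=0.00169) in PyMC's rate parameterization
counts, edges, _ = plt.hist(data, bins=100)
plt.xlabel("value")
plt.ylabel("count")
plt.title("10k samples from Gamma(alpha=1.6, beta=0.00169)")
plt.savefig("gamma_hist_10k.png")
```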
This is my model:

```python
with pm.Model() as model:
    alpha = pm.Exponential('alpha', lam=1)
    beta = pm.Exponential('beta', lam=1)
    obs = pm.Gamma('obs', alpha=alpha, beta=beta, observed=data)
    trace = pm.sample(draws=1000, tune=1000, chains=1)
```
It samples fast (>500 it/s) and the results look nice: the hidden parameters are correctly estimated. I’m happy…
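As a non-PyMC sanity check (my addition, not part of the model above), method-of-moments estimates recover the same ballpark values for the hidden parameters:

```python
import numpy as np

np.random.seed(7)  # hypothetical seed, only for reproducibility
data = np.random.gamma(shape=1.6, scale=1.0/0.00169, size=10000)

# Method of moments for Gamma(alpha, beta) in the rate parameterization:
# mean = alpha/beta and var = alpha/beta**2, so
# alpha = mean**2/var and beta = mean/var
m, v = data.mean(), data.var()
alpha_hat = m**2 / v
beta_hat = m / v
print(f"alpha ~ {alpha_hat:.3f} (true 1.6), beta ~ {beta_hat:.5f} (true 0.00169)")
```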
But now, simply adding more observed data, going from 10k to 200k for instance, makes the model crash!
My toy dataset is then:

```python
data = np.random.gamma(shape=1.6, scale=1.0/0.00169, size=200000)
```
The model starts sampling fine, but at around 30% it suddenly stalls and then crashes with this error message:

```
ValueError: Mass matrix contains zeros on the diagonal. The derivative of RV `alpha_log__`.ravel() is zero.
```
I don’t understand this behavior: I would expect a model to be either good or bad, independently of the number of observations. Intuitively, I would even think that the more observations I have, the better it is, but in this case that intuition seems wrong…
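In case it helps, one cheap cause I can rule out (an assumption on my part about a possible failure mode, not a confirmed diagnosis) is degenerate observations: the Gamma log-likelihood contains `(alpha - 1) * log(x)` terms, so a zero or non-finite value in the data could break the gradient of `alpha_log__`. My data passes these checks:

```python
import numpy as np

np.random.seed(0)  # hypothetical seed, only for reproducibility
data = np.random.gamma(shape=1.6, scale=1.0/0.00169, size=200000)

# Gamma support is x > 0; a zero or non-finite observation would make the
# log(x) terms in the likelihood ill-defined and can zero/NaN the gradient
ok_finite = np.isfinite(data).all()
ok_positive = (data > 0).all()
print("all finite:", ok_finite, "| all strictly positive:", ok_positive)
```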
Any help will be appreciated.