To separate model problems from data problems, and as a check of parameter identification, it is very good practice to use prior draws from your model as data. You can do this by removing the `observed` argument from `latent` and using `pm.observe`:
```python
import arviz as az
import matplotlib.pyplot as plt
import pymc as pm

with pm.Model() as second_model:
    mu = pm.Normal("mu", mu=0, sigma=1)
    alpha_raw = pm.Normal("a0", mu=0, sigma=0.1)
    alpha = pm.Deterministic("alpha", pm.math.exp(alpha_raw))
    beta = pm.Deterministic("beta", pm.math.exp(mu / alpha))

    # No `observed` argument, so `latent` is a free random variable for now
    latent = pm.Weibull("latent", alpha=alpha, beta=beta, shape=(100,))

    prior = pm.sample_prior_predictive()

# Pick one prior draw to serve as the artificial "observed" dataset
sample_data = prior.prior.isel(chain=0, draw=123)

# pm.observe returns a copy of the model in which `latent` is observed
with pm.observe(second_model, {"latent": sample_data.latent}) as second_obs:
    idata = pm.sample(init="jitter+adapt_diag_grad")
    idata = pm.sample_posterior_predictive(idata, extend_inferencedata=True)
```
This samples without issue and produces the following results:
```python
# Overlay the true parameter values (dashed lines) on the trace plots
axes = az.plot_trace(idata, var_names=[x.name for x in second_obs.free_RVs])
for axis in axes[:, 0]:
    param = axis.get_title()
    axis.axvline(sample_data[param].item(), ls="--", c="k")
plt.tight_layout()
```
So on artificial data, where you know the correct answer and the proposed model is really the right model, everything works fine. This points to your data as having issues. I would start by checking for missing or NaN values.
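A quick sanity check along those lines might look like this (a sketch, assuming your observations are in a NumPy array called `y`; substitute your actual data):

```python
import numpy as np

y = np.asarray(y, dtype=float)  # `y` is a placeholder for your data
print("NaN values:", np.isnan(y).sum())
print("non-finite values:", (~np.isfinite(y)).sum())
# The Weibull distribution has strictly positive support, so zeros or
# negative values would also produce -inf logp
print("non-positive values:", (y <= 0).sum())
```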
Edit: Also, there is no reason to switch PyTensor to half precision, so I recommend not doing that.
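If you have changed it somewhere, reverting to the default is enough (`floatX` defaults to `"float64"`):

```python
import pytensor

# Restore the default float precision
pytensor.config.floatX = "float64"
```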