Truncation errors (falling back to rejection sampling despite logcdf present)

Sure, I will organize the example into an issue later :slight_smile:.
I just tried max_n_steps=100,000 instead of the default 10,000, and it did finish without raising the TruncationError, though much, much slower, as expected.
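For reference, here is a minimal sketch of where that cap lives. max_n_steps can be passed to pm.Truncated, and building the hurdle by hand as a mixture of a point mass at zero and a truncated NegativeBinomial (which, as far as I can tell, mirrors how HurdleNegativeBinomial is assembled internally) makes the cap easy to raise; the numbers below are placeholders, not my actual fit:

    import pymc as pm

    psi, n, p = 0.05, 4000, 0.9999  # placeholder values

    nonzero = pm.Truncated.dist(
        pm.NegativeBinomial.dist(n=n, p=p),
        lower=1,                # hurdle: only values >= 1
        max_n_steps=100_000,    # rejection-sampling cap, default 10_000
    )
    hurdle = pm.Mixture.dist(
        w=[1 - psi, psi],       # [P(zero), P(non-zero)]
        comp_dists=[pm.DiracDelta.dist(0), nonzero],
    )
    draws = pm.draw(hurdle, draws=200)  # exercises the rejection sampler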
I am still trying to determine whether it is because my model is badly built, or whether the slow convergence is just what it is given how extreme the parameters are in my case (extreme, but agreeing with real data): the non-zero percentage (psi) is low (<5%), and the probability of non-zeros is also somewhat low (NegativeBinomial(n=4000, p=0.9999x)).
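As a back-of-envelope check on how rare the non-zeros are (taking p=0.9999 exactly as a stand-in for the redacted value):

    from scipy.stats import nbinom

    n, p = 4000, 0.9999           # stand-in for p=0.9999x
    p_zero = nbinom.pmf(0, n, p)  # P(X = 0) = p**n, about 0.67 here
    p_nonzero = 1 - p_zero        # about 0.33

So roughly two thirds of the raw NegativeBinomial draws are zero, which is exactly what a lower=1 truncation has to reject.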
Real data of size 200 look something like 190 zeros + 5 ones + 4 twos + 1 three, so to model the non-zeros as accurately as possible I chose a hurdle model (zero-inflated models can capture the zero part fine, but not the non-zero part as accurately as hurdle models do).
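For concreteness, that observed vector can be written out as:

    import numpy as np

    # 190 zeros, 5 ones, 4 twos, 1 three -> 200 observations, 5% non-zero
    ad = np.concatenate([np.zeros(190, int), np.full(5, 1), np.full(4, 2), np.full(1, 3)])
    assert ad.size == 200 and (ad > 0).mean() == 0.05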
Here is what the model looks like:

    with pm.Model() as m:
        m.add_coord('sample', ...)
        m.add_coord('k4', ...)
        TD_obs = pm.MutableData('TD_obs', ..., dims=('sample', 'k4'))
        AD_obs = pm.MutableData('AD_obs', ..., dims=('sample', 'k4'))
        zp_obs = pm.MutableData('zp_obs', ..., dims='k4')  # per-k4 zero proportion

        mu_k4 = pm.TruncatedNormal('mu_k4', mu=6, sigma=3, lower=2, upper=12, dims='k4')
        AD_predicted = pm.HurdleNegativeBinomial(
            'AD_predicted',
            psi=1 - zp_obs,        # probability of a non-zero draw
            n=TD_obs,
            p=1 - 1e-5 * mu_k4,    # p stays extremely close to 1
            observed=AD_obs)
        data = pm.sampling.jax.sample_numpyro_nuts(**inferenceParams)

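If I understand correctly, the TruncationError comes from the forward (rejection) sampler rather than from NUTS itself, so a cheap way to reproduce and triage it without launching a full run is to forward-sample the model (assuming the Data containers above have been filled in):

    with m:
        # Prior predictive sampling exercises the same truncated
        # rejection sampler, so the TruncationError shows up here too.
        prior = pm.sample_prior_predictive()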
I’m reading through the code to learn how “convergence” is determined in such Truncated distributions.
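In the meantime, my rough mental model of that loop (a sketch, not PyMC's actual implementation, which I believe is a PyTensor scan) is: resample the rejected entries until everything lands inside the bounds, and give up after max_n_steps rounds:

    import numpy as np

    def truncated_draw(sample_fn, lower, size, max_n_steps=10_000, rng=None):
        """Mental-model sketch of rejection-based truncated sampling."""
        rng = rng or np.random.default_rng()
        draws = sample_fn(size, rng)
        for _ in range(max_n_steps):
            reject = draws < lower  # entries below the truncation bound
            if not reject.any():
                return draws        # "converged": every entry accepted
            draws[reject] = sample_fn(reject.sum(), rng)  # resample rejects only
        raise RuntimeError('TruncationError: exceeded max_n_steps')

    # e.g. the NegativeBinomial(n=4000, p=0.9999) component truncated at 1:
    nb = lambda size, rng: rng.negative_binomial(4000, 0.9999, size)
    sample = truncated_draw(nb, lower=1, size=200)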