Truncated Likelihood results in massive increase in sampling time for Prior & Posterior predictive

I’ve updated my model to use a truncated normal for the likelihood.

The model samples the posterior in the same time, but both the prior & posterior predictive take much longer (20 s to 15 min for the posterior predictive).

How can that be? I thought truncation would speed things up.

In my opinion, truncation increases uncertainty (the CrIs are wider), which may increase the computation time.

Generating random draws from the Truncated Normal is probably slower than from a vanilla Normal.

I think we are just using scipy's truncnorm. You can check whether taking rvs with parameters similar to those in your model is indeed that much slower.
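Along those lines, here is a minimal timing sketch using SciPy directly (the parameter values are made up for illustration, not taken from the model in question):

```python
import timeit

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
mu, sigma, lower = 1.0, 2.0, 0.0
# scipy's truncnorm expects bounds in standard-normal units
a = (lower - mu) / sigma

n = 100_000
t_norm = timeit.timeit(
    lambda: stats.norm.rvs(loc=mu, scale=sigma, size=n, random_state=rng),
    number=20,
)
t_trunc = timeit.timeit(
    lambda: stats.truncnorm.rvs(a, np.inf, loc=mu, scale=sigma, size=n, random_state=rng),
    number=20,
)
print(f"norm:      {t_norm:.3f}s")
print(f"truncnorm: {t_trunc:.3f}s")
```

On most setups the truncnorm draws come out noticeably slower, which would account for slower predictive sampling even when posterior sampling time is unchanged.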

Can you share a small reproducible snippet that illustrates the slowdown?

Hi there - I am experiencing the same slowdown, to an extreme degree, and was wondering whether anyone has gained more insights since the last post above (Oct 2022). A Truncated Normal likelihood (straight from PyMC) was manageable (15 min per sampling run, and I have 99 to do), but a Truncated Student-t likelihood (my own build from PyMC, which could be the problem) gets stuck (see the pic below of the sampling status).
Here is the model, with the 2 likelihood variants:
import numpy as np
import pymc

ndims = len(feature_cols)
X_shape = np.empty((0, ndims))
y_shape = np.empty((0,))

with pymc.Model() as MODEL:
    # data containers
    X = pymc.MutableData("X", X_shape)
    y = pymc.MutableData("y", y_shape)

    # priors
    intercept = pymc.HalfNormal("intercept", sigma=1)
    b = pymc.MvNormal("b", mu=np.zeros(ndims), cov=5 * np.eye(ndims))
    nu = pymc.Gamma("nu", alpha=2.0, beta=0.1)
    sigma = pymc.HalfCauchy("sigma", beta=10)
    mu = intercept + pymc.math.dot(X, b).flatten()

    # likelihood variant 1 -- Truncated Normal:
    likelihood = pymc.TruncatedNormal("obs", mu=mu, sigma=sigma, lower=0.0, observed=y)

    # likelihood variant 2 -- Truncated Student-t (only one variant active at a time):
    # likelihood = pymc.Truncated("obs", pymc.StudentT.dist(mu=mu, nu=nu, sigma=sigma), lower=0.0, observed=y)

Thank you!

The truncated Student-t logp and its gradients are much more costly to evaluate than the TruncatedNormal's. You may also have poorly chosen priors for mu, nu, and sigma that force the sampler to take many small steps (so more evaluations are needed).
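A rough way to see where the extra cost comes from (a SciPy sketch of the math, not PyMC's actual implementation): a lower-truncated logp adds a normalization term, log P(X > lower), so every evaluation needs the distribution's survival function. For the Student-t that involves the incomplete beta function, which is considerably more expensive to evaluate (and to differentiate) than the normal CDF.

```python
import numpy as np
from scipy import stats

# illustrative values only
mu, sigma, nu, lower = 1.0, 2.0, 4.0, 0.0
x = np.linspace(0.5, 5.0, 5)  # observations above the truncation bound

def trunc_logp(dist, x, lower):
    # lower-truncated logp = logpdf(x) - log P(X > lower)
    return dist.logpdf(x) - np.log(dist.sf(lower))

norm_dist = stats.norm(loc=mu, scale=sigma)
t_dist = stats.t(df=nu, loc=mu, scale=sigma)

print(trunc_logp(norm_dist, x, lower))
print(trunc_logp(t_dist, x, lower))
```

The normalization term is the same scalar for every observation here, but the sampler still has to recompute it (and its gradient with respect to mu, nu, and sigma) at every leapfrog step, which adds up quickly when the underlying CDF is expensive.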

Thank you, Ricardo. On the priors, the ones above were meant to be weakly informative, but if you see an anomaly, please let me know.