Right, thank you, now I understand.
However, wouldn’t that be the case for any binomial GLM, even without censoring? Without censoring and without clipping in the data, I can fit a binomial GLM without problems.
In any case, I simplified the model to feed the observations directly into the binomial distribution, like this:
with pm.Model() as censored_binomial_glm:
    a = pm.Normal('a', 0.0, 10.0)
    b = pm.Normal('b', 0.0, 1.0)
    p = 0.5 * pm.math.erfc(a * x + b)
    binomial_dist = pm.Binomial.dist(n=customers, p=p)
    pm.Censored('y', binomial_dist, lower=0, upper=capacity, observed=y)
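As a sanity check on the link function: `0.5 * erfc(a*x + b)` maps any real-valued linear predictor into (0, 1), so `p` is always a valid binomial probability. This is closely related to a probit link (erfc is a rescaled normal CDF). A minimal stdlib sketch, with the coefficients `a` and `b` chosen purely for illustration:

    import math

    def link(a, b, x):
        # Mirrors the model's inverse link: 0.5 * erfc(a*x + b).
        # erfc ranges over (0, 2), so halving it gives a value in (0, 1).
        return 0.5 * math.erfc(a * x + b)

    # Hypothetical coefficients, for illustration only.
    a, b = 1.0, -0.5
    for x in (-3.0, 0.0, 3.0):
        p = link(a, b, x)
        assert 0.0 < p < 1.0  # always a valid Binomial probability

Note that with a > 0 this link is monotonically decreasing in x, which is worth keeping in mind when interpreting the sign of `a`.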
This doesn’t solve my fitting problem, however. I still get a lot of divergences:
with censored_binomial_glm:
    trace = pm.sample(tune=2000, target_accept=0.999)
Leads to this:

