Lognormal constraint poor convergence

I guess so. While replying to you, I found a reasonable workaround. I don’t actually need that particular log-norm: I only wanted to target a given set of quantiles [0.17, 0.5, 0.83] with associated values [30., 120., 340.]. The above-mentioned distribution is the one that best fits these quantiles, but I don’t mind if there is a low-probability, thin tail extending further into negative values. So in the end I used ifelse to join a Normal distribution for values below the median, which avoids the sharp lower bound of the log-normal:

import numpy as np
import pymc as pm
import pytensor.tensor as pt
from pytensor.ifelse import ifelse

# Target values at quantiles 0.17, 0.5, 0.83
observed_values = [30.0, 120.0, 340.0]
# loc, scale, s: parameters of the previously fitted shifted log-normal

with pm.Model() as model:
    q = pm.Uniform("ais", lower=-200, upper=1000)
    # Normal below the median, shifted log-normal above
    pm.Potential("constraint", ifelse(
        pt.le(q, observed_values[1]),
        pm.logp(pm.Normal.dist(observed_values[1], observed_values[1] - observed_values[0]), q),
        pm.logp(pm.Lognormal.dist(mu=np.log(scale), sigma=s), q - loc),
    ))
    trace = pm.sample()
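For reference, the shifted log-normal parameters (loc, scale, s) can be recovered from the target quantiles numerically. A sketch using scipy (least_squares on lognorm.ppf; the starting point is an arbitrary assumption, adjust for other targets):

```python
import numpy as np
from scipy import stats, optimize

# Target quantiles and values from the post
probs = np.array([0.17, 0.5, 0.83])
values = np.array([30.0, 120.0, 340.0])

def residuals(params):
    s, loc, log_scale = params  # log_scale keeps scale > 0
    return stats.lognorm.ppf(probs, s, loc=loc, scale=np.exp(log_scale)) - values

fit = optimize.least_squares(
    residuals,
    x0=[1.0, 0.0, np.log(100.0)],  # rough starting point (assumption)
    bounds=([1e-6, -np.inf, -np.inf], np.inf),  # keep s positive
)
s, loc, scale = fit.x[0], fit.x[1], np.exp(fit.x[2])
```

With three parameters and three quantile constraints the fit can match the targets essentially exactly; the resulting loc is negative, which is the sharp lower bound the ifelse trick works around.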

And below is the result. In red, the original fitted log-normal; dashed red, the target quantiles; dashed blue, the posterior quantiles. It’s satisfying enough for me (I guess I could also join further to the left and tune the tail to my taste).
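One thing worth checking with this kind of piecewise density is the size of the jump at the join, since the two branches are not normalized to agree at the median. A sketch, using illustrative parameter values roughly consistent with a quantile fit of the shifted log-normal (s ≈ 0.937, loc ≈ -32.3, scale ≈ 152.3 are assumptions here, not the author’s actual fitted values):

```python
import numpy as np
from scipy import stats

median, q17 = 120.0, 30.0
# Illustrative fit parameters (assumptions; refit for your own targets)
s, loc, scale = 0.937, -32.3, 152.3

# Log-density of each branch evaluated at the join point (the median)
left = stats.norm.logpdf(median, loc=median, scale=median - q17)
right = stats.lognorm.logpdf(median - loc, s, scale=scale)
jump = right - left  # discontinuity in log-density at the join
```

A modest jump only introduces a small kink in the posterior; joining further left (as mentioned above) or rescaling one branch would shrink it if needed.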

Thanks for the stimulus.