Instead of calling cdf 6 separate times, you can try to use pt.vectorize and index the batched vector afterwards.
Did you mean pytensor.graph.vectorize_graph? Could you refer me to a guide on how to use it, or give me some tips? I tried the following (just blindly copying the example in the docstring):
def _podium_logp(value, mu, sigma):
    dist = pm.Gamma.dist(mu=mu, sigma=sigma)
    shift = pt.scalar("shift")
    density = exp(pm.logcdf(dist, value + shift))
    new_shift = pt.vector("new_shift", dtype="int")
    new_density = vectorize_graph(density, replace={shift: new_shift})
    cdf = pytensor.function([new_shift], new_density)
    densities = cdf([-2, -1, 0, 1, 2, 3])
    density1 = densities[5] - densities[0]
    density2 = densities[4] - densities[1]
    density3 = densities[3] - densities[2]
    return log(5 / 9 * density1 + 3 / 9 * density2 + 1 / 9 * density3)
but I got a MissingInputError on the line where I call pytensor.function.
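For reference, the MissingInputError most likely comes from new_density also depending on value, mu and sigma, which are symbolic variables that pytensor.function([new_shift], new_density) never receives as inputs. Inside a logp there is no need to compile anything: the graph can stay symbolic, with the dummy shift replaced by a constant vector of shifts and the batched result indexed afterwards. A minimal sketch along those lines, assuming a PyTensor version where vectorize_graph is importable from pytensor.graph.replace:

    import pymc as pm
    import pytensor.tensor as pt
    from pytensor.graph.replace import vectorize_graph

    def _podium_logp(value, mu, sigma):
        dist = pm.Gamma.dist(mu=mu, sigma=sigma)

        # Build the shifted-CDF graph once, with a dummy scalar shift
        shift = pt.scalar("shift")
        density = pt.exp(pm.logcdf(dist, value + shift))

        # Replace the dummy shift with a constant vector of shifts;
        # value, mu and sigma stay ordinary symbolic inputs of the graph
        shifts = pt.as_tensor([-2, -1, 0, 1, 2, 3])
        densities = vectorize_graph(density, replace={shift: shifts})

        # Leading axis of densities corresponds to the six shifts
        density1 = densities[5] - densities[0]
        density2 = densities[4] - densities[1]
        density3 = densities[3] - densities[2]
        return pt.log(5 / 9 * density1 + 3 / 9 * density2 + 1 / 9 * density3)

With this, the indexing works the same as with the compiled function, but the whole computation remains part of the model graph, so it can be differentiated and sampled as usual.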
However, the slowdown may not be in the model logp being heavy but in the model being poorly identified. What do the traces look like after sampling is done?
Nothing alarming to me. No warnings about divergences either.
For comparison, here's a trace plot of the naive model.

