Something changed in `pytensor > 2.12.3` (and thus `pymc > 5.6.1`) that makes my `pytensor.gradient.grad` call get stuck - any ideas?

You may be suffering from abstraction leaking!

PyMC models are defined in terms of random variables (RVs), but gradients are taken on the logp side.
If you write a Potential using a scan based on other model variables, that scan will initially contain RVs, which PyMC later tries to replace with the corresponding value variables. That replacement could be failing (which would be a bug).

To rule this out, you can use the value variables directly. You can get them from `model.rvs_to_values[rv]`.
