This is a really tricky one to narrow down, and I’ve tried to simplify it as far as I can. I think this needs a bit of insight from the pytensor devs to help me figure out where to look further for what’s changed recently.
I’ve written a copula model (loosely based on, and improving upon, a working notebook by @junpenglao) that relies on transforming observed data. The model adds the evidence through Potentials on the marginals and on the copula, and critically it also applies a Jacobian adjustment to the copula.
I’ve simplified that model a little here, but I can’t reproduce the issue when I simplify any further, so here we are.
Everything works great in pymc 5.6.1, pytensor 2.12.3, but in pymc 5.9.0, pytensor 2.17.2 there seems to be a hidden (to me) issue with computing the gradient. It appears only when I include my function to compute the Jacobian adjustment:
```python
import pytensor.tensor as pt
import pytensor.gradient as tg


def log_jcd(f_inv_x: pt.TensorVariable, x: pt.TensorVariable) -> pt.TensorVariable:
    """Calc log of Jacobian determinant"""
    jcd = tg.grad(cost=pt.sum(f_inv_x), wrt=[x])
    return pt.log(pt.abs(jcd))
```
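To make the setup concrete, here’s a minimal sketch of the pattern the gists exercise: a transform of observed data, evidenced through a Potential, with `log_jcd` supplying the Jacobian adjustment. All names (`y_obs`, `mu`, `sigma`) and the simple affine transform are illustrative stand-ins for the real marginal/copula transforms, not the actual model:

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt

y_obs = np.random.default_rng(42).normal(loc=1.0, scale=2.0, size=200)

with pm.Model() as model:
    # free parameters of the (illustrative) marginal transform
    mu = pm.Normal("mu", 0.0, 1.0)
    sigma = pm.HalfNormal("sigma", 1.0)

    # observed data registered as a shared variable so grads can be taken w.r.t. it
    y = pm.MutableData("y", y_obs)

    # an invertible transform of the observed data (stand-in for the
    # marginal CDF / copula transform in the real model)
    f_inv_y = (y - mu) / sigma

    # evidence the transformed data through a Potential ...
    pm.Potential("marginal_logp", pm.logp(pm.Normal.dist(0.0, 1.0), f_inv_y).sum())

    # ... and add the log Jacobian determinant of the transform w.r.t. the data
    pm.Potential("jacobian_adj", pt.sum(log_jcd(f_inv_y, y)))
```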
I’ve written a few related scenarios to (hopefully) illustrate the issue:
Scenario 1 (Old World): Use pymc 5.6.1, pytensor 2.12.3, DO apply the Jacobian adjustment - everything runs great, as expected.
see gist 994 here
Scenario 2 (New World): Use pymc 5.9.0, pytensor 2.17.2, do NOT apply the Jacobian adjustment - everything runs as expected, although obviously with wider variance due to not using the Jacobian adjustment. I just want to prove that everything else runs here.
see gist 993x here
Scenario 3 (New World): Use pymc 5.9.0, pytensor 2.17.2, DO apply the Jacobian adjustment - everything DOES NOT run as expected.
see gist 994x here
In Scenario 3, two calls fail, which I hope is illustrative enough for the pytensor devs to help me figure this out:
The call to pm.init_nuts hangs forever, seemingly disappearing into an infinite loop. In the old world this call completes within seconds, but in the new world I’ve left it for over 50 minutes with no completion. When I interrupt the process I see this stack trace: init_nuts
I’ve also tried a call to pm.find_MAP in the hope that this might be simpler and at least let me make a start. However, this also hangs in a never-ending loop. When I interrupt that process I see this stack trace: find_map
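For completeness, these are the two calls roughly as I’m issuing them (arguments are illustrative, and `model` stands in for the copula model from the gist):

```python
with model:
    # 1. NUTS initialisation never returns in the new-world environment
    initial_points, step = pm.init_nuts(init="auto", chains=4)

    # 2. a MAP estimate also loops forever
    map_estimate = pm.find_MAP()
```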
I’m at a bit of a loss as to where to investigate further. I don’t think it’s the famous shape issue, and I should add that I don’t see this issue in a simpler model (discussed elsewhere) which also uses the new-world pymc 5.9.0, pytensor 2.17.2 and DOES apply a Jacobian adjustment, BUT that adjustment is NOT applied to an observed RV; rather it IS applied to a FreeRV. The combination of applying a grad to observed data in pymc 5.9.0, pytensor 2.17.2 seems to be what causes the issue…
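For reference, a compact sketch of that contrast, reusing `log_jcd` from above (the Uniform/erfinv transform and all names are purely illustrative, not the actual models):

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt

y_obs = np.random.default_rng(1).uniform(0.01, 0.99, size=100)

with pm.Model():
    # Case A: Jacobian adjustment on a transform of a FreeRV.
    # This pattern still works for me under pymc 5.9.0 / pytensor 2.17.2.
    u = pm.Uniform("u", 0.01, 0.99, size=100)
    pm.Potential("jcd_free", pt.sum(log_jcd(pt.erfinv(u), u)))

    # Case B: the same adjustment on a transform of *observed* data.
    # This is the pattern that appears to hang under pymc 5.9.0 / pytensor 2.17.2.
    y = pm.MutableData("y", y_obs)
    pm.Potential("jcd_obs", pt.sum(log_jcd(pt.erfinv(y), y)))
```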
I’d greatly appreciate any and all ideas to help me debug this further!