Hitting a weird error to do with RNGs in Scan in a custom function inside a Potential

That’s a nice idea, and very similar to something I / we tried earlier (Something changed in `pytensor > 2.12.3` (and thus `pymc > 5.6.1`) that makes my `pytensor.gradient.grad` call get stuck - any ideas? - #20 by jonsedar), but sadly it raises a `DisconnectedInputError`:

```python
import pytensor
import pytensor.tensor as pt
import pytensor.gradient as tg


def get_log_jcd_scan(f_inv_x: pt.TensorVariable, x: pt.TensorVariable) -> pt.TensorVariable:
    """Calc log of determinant of Jacobian where f_inv_x and x are both (n, k)
    dimensional tensors, and `f_inv_x` is a direct, element-wise transformation
    of `x` without cross-terms (diagonals).
    Add this Jacobian adjustment to models where observed is a transformation,
    to handle change in coords / volume. Initially dev for a model where k = 2.
    Use explicit scan.
    """
    n = f_inv_x.shape[0]
    k = f_inv_x.shape[1]
    # grad of each raveled output element wrt the matching raveled input element
    grads, _ = pytensor.scan(
        lambda c, w: tg.grad(cost=c, wrt=w),
        sequences=[f_inv_x.ravel(), x.ravel()],
    )
    log_jcd = pt.sum(pt.log(pt.abs(grads.reshape((n, k)))), axis=1)
    return log_jcd
```

I’ve found before now that any slicing of the cost `c` in e.g. `tg.grad(cost=c, wrt=w)` generally leads to a `DisconnectedInputError`.
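To make that concrete, here's a minimal sketch of the gotcha as I understand it, with a hypothetical `pt.exp` standing in for the real element-wise transform: a slice like `x[0, 0]` is a brand-new Subtensor variable that the cost graph was never built from, and the per-step slices that `scan` hands to its inner function are disconnected in the same way.

```python
import pytensor.tensor as pt
import pytensor.gradient as tg

x = pt.matrix("x")
f_inv_x = pt.exp(x)  # hypothetical stand-in for the real element-wise transform

# Differentiating wrt the full input x is fine:
ok = tg.grad(cost=f_inv_x.sum(), wrt=x)

# But differentiating a sliced cost wrt a sliced input fails, because x[0, 0]
# is a fresh Subtensor variable that f_inv_x was never built from:
try:
    bad = tg.grad(cost=f_inv_x[0, 0], wrt=x[0, 0])
except tg.DisconnectedInputError as err:
    print(type(err).__name__)

# The scan version above hits the same wall: the per-step slices `c` and `w`
# passed to the inner lambda are fresh variables with no graph between them.
```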

FWIW I think writing the scan over `tg.grad` by hand feels slightly more transparent than relying on `tg.jacobian`, and maybe it gives me the opportunity to declare the updates, but I’m still not sure. Several things going on here.
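For comparison, here's a rough, untested sketch of what the `tg.jacobian` route might look like for the same element-wise case. The function name and the diagonal-extraction step are my own assumptions, not the original code; `tg.jacobian` builds its scan internally, so any updates handling stays implicit.

```python
import pytensor.tensor as pt
import pytensor.gradient as tg


def get_log_jcd_jacobian(f_inv_x: pt.TensorVariable, x: pt.TensorVariable) -> pt.TensorVariable:
    """Same log |det J| adjustment, but via tg.jacobian instead of an explicit scan."""
    n = f_inv_x.shape[0]
    k = f_inv_x.shape[1]
    # d f_inv_x.ravel()[i] / d x for every i -> shape (n*k, n, k)
    jac = tg.jacobian(f_inv_x.ravel(), wrt=x)
    # element-wise transform => only the matching entries are non-zero,
    # i.e. the diagonal of the flattened (n*k, n*k) Jacobian
    diag = pt.diagonal(jac.reshape((n * k, n * k)))
    return pt.sum(pt.log(pt.abs(diag.reshape((n, k)))), axis=1)
```

It does materialise the full flattened Jacobian, though, so I'd treat it as a baseline for comparison rather than something I'd expect to be fast.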

Have you by any chance been able to replicate this locally?


The full (updated) script again, in case you want to test it locally:

REMOVED EVEN MORE OLD BROKEN CODE! :D