How to improve the efficiency of a loop, and find a root

I have an implementation of Newton's method in Aesara here that might be helpful. I've tried including it in PyMC models before, but it's very slow. One thing I've been thinking about recently is that it might be wasteful to put every step of the root-finding algorithm on the computational graph. If you can write down the derivatives of the root with respect to the parameters (via the implicit function theorem), you could just wrap scipy.optimize.minimize in an Op and provide your own gradients. It's not a general solution, but it would likely be much, much faster than a scan-based optimizer like the one I linked.
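For concreteness, here's a minimal sketch of that idea for a scalar problem. Everything here is illustrative, not the code I linked: the toy residual f(x, θ) = cos(x) − θx, the `FindRoot` name, the bracketing interval, and the use of scipy.optimize.brentq in place of minimize are all assumptions for the example.

```python
import numpy as np
from scipy.optimize import brentq

import aesara.tensor as at
from aesara.graph.basic import Apply
from aesara.graph.op import Op


def residual(x, theta):
    # Toy residual whose root x*(theta) we want: f(x, theta) = cos(x) - theta * x
    return np.cos(x) - theta * x


class FindRoot(Op):
    """Hypothetical Op: solve f(x, theta) = 0 with SciPy, off the graph."""

    def make_node(self, theta):
        theta = at.as_tensor_variable(theta)
        return Apply(self, [theta], [theta.type()])

    def perform(self, node, inputs, outputs):
        (theta,) = inputs
        # Plain SciPy root finding; assumes theta > 0 so the root is
        # bracketed in [0, 2]. None of these iterations land on the graph.
        outputs[0][0] = np.asarray(brentq(residual, 0.0, 2.0, args=(theta,)))

    def grad(self, inputs, output_grads):
        (theta,) = inputs
        (g,) = output_grads
        x_star = self(theta)
        # Implicit function theorem: dx*/dtheta = -(df/dtheta) / (df/dx),
        # evaluated at the root
        df_dx = -at.sin(x_star) - theta
        df_dtheta = -x_star
        return [g * (-df_dtheta / df_dx)]


find_root = FindRoot()
# usage: theta = at.dscalar("theta"); x_star = find_root(theta)
```

The point is that `perform` runs ordinary NumPy/SciPy, so the solver's iterations never touch the graph; only the single implicit-gradient expression does.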

In general, though, Scan is a part of the library under very active development, and for now it is what it is. I've found it can be quite fast with scalar-valued inputs, but it slows down considerably on linear algebra operations. There are ways to optimize them, though. There's a thread here where the Aesara devs walk me through optimizing a scan; it might be of interest.
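For contrast, a scalar scan-based Newton iteration might look like the sketch below (same toy residual as above; the fixed 20-step loop is an assumption for illustration). Every step, and its gradient, ends up on the graph, which is what makes this approach expensive inside a PyMC model:

```python
import aesara
import aesara.tensor as at

theta = at.dscalar("theta")
x0 = at.dscalar("x0")


def newton_step(x, theta):
    # One Newton update for f(x) = cos(x) - theta * x
    f = at.cos(x) - theta * x
    f_prime = -at.sin(x) - theta
    return x - f / f_prime


# Unrolls 20 Newton steps onto the computational graph
xs, _ = aesara.scan(newton_step, outputs_info=x0, non_sequences=theta, n_steps=20)

solve = aesara.function([x0, theta], xs[-1])
print(solve(1.0, 1.0))  # root of cos(x) = x, about 0.739
```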

You can also try nutpie, a fast NUTS sampler that compiles PyMC models through Numba. It can offer significant speedups in certain cases, and it can compile a PyMC model without any work on your end.
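Usage is minimal. Assuming a current nutpie and PyMC (the toy model is just a placeholder), it looks something like:

```python
import pymc as pm
import nutpie

with pm.Model() as model:
    mu = pm.Normal("mu", 0.0, 1.0)
    pm.Normal("obs", mu=mu, sigma=1.0, observed=[0.1, -0.3, 0.2])

# Compile the model's logp, then sample with nutpie's NUTS implementation
compiled = nutpie.compile_pymc_model(model)
trace = nutpie.sample(compiled)
```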

Finally, I second the (somewhat heterodox) opinion of @BioGoertz in the thread @ckrapu just linked: sometimes you should think about letting go of NUTS if your model is really too hard to get gradients for. I've recently had success using emcee on a modest-dimensional model with a ton of loops, optimizations, and linear approximations.
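The appeal is that emcee only needs a plain Python log-probability function, so anything can live inside it. A minimal sketch (the standard-normal `log_prob` and the dimensions here are stand-ins, not my actual model):

```python
import numpy as np
import emcee


def log_prob(params):
    # Stand-in log-posterior; in practice this can call arbitrary Python:
    # loops, scipy.optimize, linear approximations, etc. No gradients needed.
    return -0.5 * np.sum(params**2)


ndim, nwalkers = 5, 32
p0 = np.random.randn(nwalkers, ndim)

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=True)
samples = sampler.get_chain(discard=500, flat=True)
```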
