Advice for Time Series Forecasting

These are the currently open issues that should give performance gains. 406 is the most important, since it will allow us to actually start benchmarking performance across large panels of time series. 394 would improve performance on very long time series, and 332 promises an across-the-board speedup. This pytensor issue is also relevant:

1100 would unlock more rewrites for matrix multiplication, but it isn’t necessarily an optimization itself.

There is also room for gains from working with square-root filters. This is especially nice since we sample covariance priors in Cholesky form anyway, so we'd never actually need to instantiate the covariance matrix. I already wrote the filters, but actually using them is blocked by pytensor issue 1099, which prevents us from computing their gradients.
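For reference, the core trick in a square-root filter is to propagate the Cholesky factor of the state covariance directly and re-triangularize with a QR decomposition at each step, rather than forming the covariance and re-factorizing it. A minimal numpy sketch of the prediction step, with hypothetical names (not the actual filter code):

```python
import numpy as np

def sqrt_predict(L_filt, T, L_Q):
    """One square-root prediction step: P_pred = T @ P @ T.T + Q.

    Operates directly on Cholesky factors (P = L_filt @ L_filt.T,
    Q = L_Q @ L_Q.T), so the full covariance is never instantiated.
    Illustrative sketch only; all names are hypothetical.
    """
    # Stack the propagated factor on top of the noise factor, then
    # re-triangularize with QR: R.T @ R = T @ P @ T.T + Q.
    stacked = np.vstack([(T @ L_filt).T, L_Q.T])
    R = np.linalg.qr(stacked, mode="r")
    # QR is only unique up to row signs; flip so the diagonal is
    # positive and the result is a proper Cholesky factor.
    signs = np.where(np.diag(R) < 0, -1.0, 1.0)
    return (signs[:, None] * R).T
```

The measurement update can be handled with the same stack-and-QR pattern, and the triangular factors make the log-determinant term in the likelihood essentially free.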

Aside from all that, there are also Chandrasekhar recursions as a substitute for Kalman filtering in the cases where they're permitted (they require time-invariant system matrices), which I think covers the majority of interesting cases.
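For context, the Chandrasekhar recursions replace the n x n covariance update with recursions on low-rank factors W (n x p) and M (p x p), where p is the number of observables, using the identity P_{t+1} - P_t = W M W'. Here's a rough numpy sketch along the lines of Herbst's (2015) presentation, assuming a time-invariant model initialized at its stationary covariance; all names are illustrative, not the actual implementation:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def chandrasekhar_loglike(y, T, Z, R, H, Q):
    """Gaussian log-likelihood via Chandrasekhar recursions.

    Assumes a time-invariant model
        y_t = Z s_t + u_t,        u_t ~ N(0, H)
        s_t = T s_{t-1} + R e_t,  e_t ~ N(0, Q)
    initialized at the stationary covariance.  Instead of updating the
    full n x n covariance, the recursions track low-rank factors
    W (n x p) and M (p x p) with P_{t+1} - P_t = W M W'.
    Illustrative sketch only.
    """
    n_obs, p = y.shape
    n = T.shape[0]

    a = np.zeros(n)                               # predicted state mean
    P1 = solve_discrete_lyapunov(T, R @ Q @ R.T)  # stationary covariance
    F = Z @ P1 @ Z.T + H                          # innovation covariance
    K = T @ P1 @ Z.T                              # unnormalized Kalman gain
    W, M = K.copy(), -np.linalg.inv(F)            # P_2 - P_1 = -K F^-1 K'

    ll = 0.0
    for t in range(n_obs):
        v = y[t] - Z @ a                          # one-step-ahead innovation
        Fv = np.linalg.solve(F, v)
        ll += -0.5 * (p * np.log(2 * np.pi)
                      + np.linalg.slogdet(F)[1] + v @ Fv)
        a = T @ a + K @ Fv                        # state mean prediction

        # Low-rank updates of F, K, W, M; no n x n covariance is formed.
        ZW, TW = Z @ W, T @ W
        F_new = F + ZW @ M @ ZW.T
        K_new = K + TW @ M @ ZW.T
        M_new = M + M @ ZW.T @ np.linalg.solve(F, ZW @ M)
        W_new = TW - K_new @ np.linalg.solve(F_new, ZW)
        F, K, M, W = F_new, K_new, M_new, W_new
    return ll
```

Each step then costs O(n^2 p) rather than the O(n^3) of the full covariance update, which is where the win comes from when the state dimension dwarfs the number of observables.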

Finally, I think getting things over to numba is promising as well. There are some issues with the gradients currently being generated that prevent that, but I think numba is a better long-term solution than JAX, which is much more difficult to extend with specialized Ops that could speed up computation, like solve_discrete_are and solve_discrete_lyapunov. I made an issue on the jax repo about these Ops, but it was not warmly received. On the other hand, we already have numba dispatches for them.
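For a sense of what those two Ops compute in this setting: the steady-state filter covariance solves a discrete algebraic Riccati equation, and the stationary initial covariance solves a discrete Lyapunov equation. A toy sketch using scipy's reference implementations (the model values are made up):

```python
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

# Toy one-state, one-observable model (illustrative values only).
T = np.array([[0.95]])   # transition
Z = np.array([[1.0]])    # design
RQR = np.array([[0.1]])  # state innovation covariance R @ Q @ R.T
H = np.array([[0.5]])    # observation noise covariance

# Stationary covariance for initializing the filter: P = T P T' + RQR'.
P0 = solve_discrete_lyapunov(T, RQR)

# Steady-state predicted covariance solves a Riccati equation; scipy's
# solver uses the dual control form, hence the transposed inputs.
P_ss = solve_discrete_are(T.T, Z.T, RQR, H)
K_ss = T @ P_ss @ Z.T @ np.linalg.inv(Z @ P_ss @ Z.T + H)  # steady-state gain
```

The Lyapunov solve is the same one used for stationary initialization in the Chandrasekhar sketch above.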
