Adding a Custom AR process w/ FFBS support

Hi PyMC Community,

I am working on multivariate stochastic volatility models using the pm.AR distribution.
However, given the complexity of my model, I noticed that the ESS for the latent process is too low, due to the high autocorrelation in the AR process.

As such, I would like to implement forward filtering backward sampling (FFBS) to increase the sampling efficiency for the AR process. I have the formulas worked out and would like to integrate them into the latest PyMC workflow. I found a great post (which uses the Theano package) and a GitHub repo (pymc3-hmm), but neither was developed for the latest PyMC.

My plan is to write a customized distribution derived from pm.Continuous (like the one in ML_Finance_Codes/Chapter7-Probabilistic-Sequence-Modeling/ML_in_Finance-Kalman_Filters.ipynb at master · mfrdixon/ML_Finance_Codes (github.com)). In the logp function I will perform forward filtering and backward sampling to obtain the corresponding logp.
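For concreteness, here is a rough sketch of what I have in mind. All names and the linear-Gaussian toy model (a latent AR(1) state observed with Gaussian noise) are just placeholders, and in current PyMC I believe pm.CustomDist with a logp keyword is the replacement for subclassing pm.Continuous. Note that the forward pass alone already yields the marginal logp via the prediction-error decomposition; backward sampling would only be needed to draw the latent states themselves:

```python
import numpy as np
import pymc as pm
import pytensor
import pytensor.tensor as pt


def kalman_logp(value, rho, sigma_eta, sigma_eps):
    """Forward-filter a latent AR(1) state observed with Gaussian noise,
    returning per-observation log-densities (prediction-error decomposition)."""

    def step(y, m, P, rho, q, r):
        # Predict the state one step ahead
        m_pred = rho * m
        P_pred = rho**2 * P + q
        # Innovation (one-step prediction error) and its variance
        v = y - m_pred
        F = P_pred + r
        # Kalman gain and posterior update
        K = P_pred / F
        m_new = m_pred + K * v
        P_new = (1.0 - K) * P_pred
        # Gaussian log-density of the innovation
        ll = -0.5 * (pt.log(2.0 * np.pi * F) + v**2 / F)
        return m_new, P_new, ll

    q = sigma_eta**2
    r = sigma_eps**2
    # Stationary initialization of the AR(1) state
    m0 = pt.zeros(())
    P0 = q / (1.0 - rho**2)
    (_, _, lls), _ = pytensor.scan(
        fn=step,
        sequences=[value],
        outputs_info=[m0, P0, None],
        non_sequences=[rho, q, r],
        strict=True,
    )
    return lls


with pm.Model() as model:
    rho = pm.Uniform("rho", -1.0, 1.0)
    sigma_eta = pm.HalfNormal("sigma_eta", 1.0)
    sigma_eps = pm.HalfNormal("sigma_eps", 1.0)
    pm.CustomDist(
        "y", rho, sigma_eta, sigma_eps,
        logp=kalman_logp,
        observed=np.random.randn(100),  # placeholder data
    )
```

Since the latent states are integrated out analytically, NUTS would only have to explore the static parameters, which is where I expect the ESS gains to come from.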

In theory, this should be compatible with the NUTS sampler. I am wondering if there are any suggestions or comments on what I might have missed?

Much appreciated


Hi!

The blog post you linked is really great, but it seems like it mixes up several ideas? There's no Kalman filtering at all in the final model. There are Kalman filter implementations in the pymc-experimental statespace package here, along with an API for SARIMAX models. But you can't do stochastic volatility models with the KF, because the innovations aren't normally distributed (as I think you know).

It also talks about SIR, but doesn't ultimately do this either. PyMC does have support for importance sampling via pm.sample_smc, and some users have been doing work on online sequential importance sampling, but there's nothing official for that. If your model implies a logp graph that is differentiable, it's recommended to use NUTS instead (which is what the notebook ultimately does).

So the final PyMC model that is implemented just computes the one-step prediction errors under the model in the logp function, which is a great way to do it. Another approach, though, would be to use a scan-based model together with pm.CustomDist. This is how I'd recommend you proceed. You can use this ARMA notebook and this GARCH notebook* as references for the implementation (see the sketch after the footnote below). Basically you just need to write down the one-step generative process in a function, write another function that executes the scan, then give that to pm.CustomDist.

*The GARCH notebook is slightly out of date, so follow the ARMA one more closely to get the API right, and use the GARCH one just as a reference for how to set up the inner function and scan.
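To make the pattern concrete, here's a minimal sketch of a Gaussian AR(1) written this way, following the structure of the ARMA notebook. The names and the simple AR(1) are illustrative; collect_default_updates from pymc.pytensorf is what lets the scan carry the random generator updates:

```python
import pymc as pm
import pytensor
from pymc.pytensorf import collect_default_updates

N_STEPS = 100  # illustrative series length


def ar_dist(ar_init, rho, sigma, size):
    # One step of the generative process: x_t = rho * x_{t-1} + eps_t
    def ar_step(x_tm1, rho, sigma):
        x = rho * x_tm1 + pm.Normal.dist(0.0, sigma)
        return x, collect_default_updates(x)

    # Roll the one-step process forward with scan
    ar, _ = pytensor.scan(
        fn=ar_step,
        outputs_info=[ar_init],
        non_sequences=[rho, sigma],
        n_steps=N_STEPS,
        strict=True,
    )
    return ar


with pm.Model() as model:
    rho = pm.Uniform("rho", -1.0, 1.0)
    sigma = pm.HalfNormal("sigma", 1.0)
    ar_init = pm.Normal.dist(0.0, 1.0)

    # CustomDist infers the logp automatically from the generative graph
    ar = pm.CustomDist(
        "ar", ar_init, rho, sigma,
        dist=ar_dist,
        shape=(N_STEPS,),
    )
```

Because the logp is inferred from the generative graph, the same model works for both prior predictive sampling and NUTS.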


Dear Jesse,

Thank you so much for the informative response! I really appreciate it.

You are right that the traditional stochastic volatility model does not fit the Kalman filter, but with some reparameterization tricks and a chi-square approximation (detailed in "Stochastic volatility with leverage: fast and efficient likelihood inference") you can actually use the Kalman filter to realize the FFBS algorithm.
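Concretely, the trick (building on the earlier Kim, Shephard, and Chib mixture sampler) is roughly:

$$
y_t = \varepsilon_t e^{h_t/2}, \quad \varepsilon_t \sim \mathcal{N}(0, 1)
\;\;\Longrightarrow\;\;
y_t^* := \log y_t^2 = h_t + \log \varepsilon_t^2,
$$

$$
\log \varepsilon_t^2 \;\approx\; \sum_{k=1}^{K} p_k\, \mathcal{N}(m_k, v_k^2),
$$

so conditional on the mixture indicators, the observation equation is linear and Gaussian, and the Kalman filter / FFBS machinery goes through exactly.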

I will try to replace the pm.AR function with a scan-based implementation, following the ARMA and GARCH notebooks, and will test whether the ESS is improved by this remedy.

Much appreciated.