Linear models with ARMA residuals

Here is a link to a paper I co-wrote that uses PyMC3 to fit linear models whose residuals are modelled by an autoregressive moving average, or ARMA(p, q), process; by Wold's theorem, this covers any stationary process.

Quantifying influences on intragenomic mutation rate

This may be of interest to others as it allows values of p and q > 1, which do not seem to be implemented as native PyMC3 distributions at this point. Code is available at:

Note that these models require reasonably informative prior distributions to converge (derived e.g. with statsmodels). This is likely because using p + q ARMA parameters is redundant: the number of parameters can be reduced to m = max(p, q + 1) (Chib, Siddhartha, and Edward Greenberg. "Bayes inference in regression models with ARMA(p, q) errors." Journal of Econometrics 64.1-2 (1994): 183-206).
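One cheap way to obtain informative prior centres like those mentioned above is a classical point fit on the data. A minimal sketch, using simulated AR(2) data and a crude ordinary-least-squares fit on lagged values as a stand-in for a full statsmodels ARMA estimate (all names and parameter values here are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate an AR(2) series with phi1 = 0.5, phi2 = 0.3 as stand-in data.
n = 2000
y = np.zeros(n)
eps = rng.normal(size=n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + eps[t]

# Rough AR coefficient estimates via OLS on the lagged values -- a crude
# substitute for a statsmodels ARMA fit.  The point estimates (and their
# standard errors) can then serve as centres (and scales) of informative
# Normal priors in the PyMC3 model.
X = np.column_stack([y[1:-1], y[:-2]])  # regressors y[t-1], y[t-2]
phi_hat, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
print(phi_hat)  # roughly [0.5, 0.3]
```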


Thanks for sharing! Any reason you are using Metropolis to sample some of the parameters? Using NUTS should work out of the box.
For reference, I have been working on some ARIMA models, drawing mostly on the Stan implementation in asael697/varstan (Sarima.stan) on GitHub.

I don't think this understanding is correct: you still need p + q parameters, but they can be expressed as m = max(p, q + 1) linear state equations when the model is written in state-space form (this is what statsmodels does under the hood). For more information see chapter 3.4 of Time Series Analysis by State Space Methods.
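To make the m = max(p, q + 1) point concrete, here is a minimal sketch of how the transition matrix and selection vector of the Harvey state-space form of an ARMA(p, q) model can be assembled (the function name and coefficient values are illustrative assumptions; see the book chapter cited above for the full representation):

```python
import numpy as np

def harvey_state_space(phi, theta):
    """Build the transition matrix T and selection vector R of the Harvey
    state-space form of an ARMA(p, q) model.  The state dimension is
    m = max(p, q + 1), even though there are still p + q free parameters."""
    p, q = len(phi), len(theta)
    m = max(p, q + 1)
    T = np.zeros((m, m))
    T[:p, 0] = phi              # AR coefficients go down the first column
    T[:-1, 1:] = np.eye(m - 1)  # shifted identity above the diagonal
    R = np.zeros(m)
    R[0] = 1.0
    R[1:q + 1] = theta          # MA coefficients enter via the selection vector
    return T, R

# ARMA(2, 1): state dimension m = max(2, 1 + 1) = 2
T, R = harvey_state_space(phi=[0.5, 0.3], theta=[0.4])
print(T.shape)  # (2, 2)
```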

My recollection is that I found NUTS very slow compared to Metropolis for the ARMA parameters; I made a comment on Issue 1304. As far as parameterisation was concerned, I was hoping that some more parsimonious approach would allow models with uniform prior distributions to converge. Have you come across similar issues with prior distributions in your ARIMA work?

The autoregressive parameters need to be somewhat regularised, otherwise you might end up with a degenerate (non-stationary) time series. I am using either Uniform(-1, 1) or a Normal with a small scale as the prior, and those tend to work well.
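For AR orders above 1, the stationarity region is more complicated than a simple interval, but it is easy to check a given coefficient vector: all roots of the AR characteristic polynomial must lie outside the unit circle. A small sketch (the helper name is an illustrative assumption):

```python
import numpy as np

def is_stationary(phi):
    """Check whether AR coefficients phi define a stationary process:
    every root of 1 - phi_1 z - ... - phi_p z^p must lie strictly
    outside the unit circle."""
    # np.roots expects coefficients ordered from highest degree down,
    # so reverse [1, -phi_1, ..., -phi_p].
    poly = np.concatenate(([1.0], -np.asarray(phi, dtype=float)))[::-1]
    return bool(np.all(np.abs(np.roots(poly)) > 1.0))

print(is_stationary([0.5]))  # True: AR(1) with |phi| < 1
print(is_stationary([1.1]))  # False: explosive
```

This kind of check is also a quick sanity test on posterior samples: draws falling outside the stationary region suggest the prior is too loose.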