I’m using pymc3 to model time series in a state-space framework. In order to make forecasts, I find myself essentially re-coding my pymc3 model in plain Python/NumPy so as to roll the model forward and simulate possible futures.
I read the section “Prediction” in the docs, which describes how to use shared variables to plug in different values for the independent variables in a logistic regression. But I don’t think that paradigm works for a time series with hidden states, because I need the last “observed” value of the hidden states in order to roll the model forward.
As an example, think of the AR1 distribution: in order to forecast, you need the last value of x. So to simulate a possible future, you’d need to do something like:
import numpy as np

# grab one posterior draw from the trace
point = trace.point(j)
k, tau_e = point['k'], point['tau_e']  # AR(1) coefficient and innovation precision
last_x = point['x'][-1]                # last sampled value of the hidden state
x = []
for i in range(horizon):
    # AR(1) step: x_t = k * x_{t-1} + eps,  eps ~ N(0, tau_e ** -0.5)
    new_x = k * last_x + np.random.normal(loc=0, scale=tau_e ** -0.5)
    x.append(new_x)
    last_x = new_x
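To turn a single rollout like that into an actual forecast distribution, the same recursion gets repeated over many posterior draws. Here is a minimal NumPy sketch of what I mean; `simulate_ar1_forecasts` is a hypothetical helper, and each element of `points` is assumed to be a dict with the same keys as `trace.point(j)` above ('k', 'tau_e', and the sampled path 'x'):

```python
import numpy as np

def simulate_ar1_forecasts(points, horizon, rng=None):
    """Roll an AR(1) forward `horizon` steps from the end of each posterior draw.

    `points`: iterable of dicts with keys 'k' (AR coefficient),
    'tau_e' (innovation precision), and 'x' (the sampled hidden path).
    Returns an array of shape (len(points), horizon).
    """
    rng = rng or np.random.default_rng()
    sims = np.empty((len(points), horizon))
    for row, point in enumerate(points):
        k, tau_e = point['k'], point['tau_e']
        last_x = point['x'][-1]
        for t in range(horizon):
            # AR(1) step: x_t = k * x_{t-1} + eps,  eps ~ N(0, tau_e ** -0.5)
            last_x = k * last_x + rng.normal(loc=0, scale=tau_e ** -0.5)
            sims[row, t] = last_x
    return sims
```

Percentiles of `sims` along axis 0 then give credible bands for the forecast horizon.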
…and you see what I mean about re-coding the model.
So my question is: am I missing something, or is this par for the course with time series models?