I am using a state-space model for time-series modeling, with the setup below. The state dynamics are implemented with pytensor's scan function.
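The transition I intend the step function to implement is

x_t = A x_{t-1} + bias_latent + eps_t,   eps_t ~ MvNormal(0, tau=Q_ar)

with initial state x_0 = x0_ar.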
import pymc as pm
import pytensor
import pytensor.tensor as pt
from pymc.pytensorf import collect_default_updates

# initial state of the latent AR process
x0_ar = pm.Normal("x0_ar", 0, sigma=1, initval=init_ar, shape=latent_factors, rng=rng)
# diagonal of the innovation precision matrix (used as tau in the step below)
sigmas_Q_ar = pm.InverseGamma("sigmas_Q_ar", alpha=3, beta=0.5, shape=latent_factors, rng=rng)
Q_ar = pt.diag(sigmas_Q_ar)
def step_simple(x, A, Q, bias_latent):
    # one innovation draw per step; mu must match the state dimension
    innov = pm.MvNormal.dist(mu=pt.zeros_like(x), tau=Q)
    next_x = pm.math.dot(A, x) + innov + bias_latent
    # collect the RNG updates created by MvNormal.dist inside the scan step
    return next_x, collect_default_updates([next_x])
ar_states_pt, ar_updates = pytensor.scan(
    step_simple,
    outputs_info=[x0_ar],
    non_sequences=[A_ar, Q_ar, bias_latent],
    n_steps=T,
    strict=True,
)
# scan must have produced RNG updates, otherwise the innovations are not random
assert ar_updates
model_minibatch.register_rv(ar_states_pt, name="ar_states_pt", initval=pt.zeros((T, latent_factors)))
# prepend the initial state so ar_states has T + 1 rows
ar_states = pm.Deterministic("ar", pt.concatenate([x0_ar.reshape((1, latent_factors)), ar_states_pt], axis=0))
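As a sanity check (a minimal sketch, assuming the scan-built RV can still be drawn from directly after registration), I can sample the states from the prior and see whether those draws follow the dynamics:

# draw from the prior of the scan-built states; shape (50, T, latent_factors)
prior_draws = pm.draw(ar_states_pt, draws=50)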
ar_states then goes in as input to the f function. When I run inference with ADVI, the estimates of ar_states do not follow the state-space dynamics; they just fit the training data in whatever way. Am I missing anything in the training process? How can I make sure that ar_states follows the dynamics? Any suggestion @jessegrabowski?
Edit 1: Even when A is 0, the states still fit the data very well. I would expect them to follow the distribution of noise + bias_latent. Why is that?
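For reference, with A = 0 the transition reduces to x_t = bias_latent + eps_t, so each state should look like an independent draw from the same distribution (a sketch using the model variables defined above):

# with A = 0 every state should marginally look like one of these draws
expected_marginal = pm.draw(pm.MvNormal.dist(mu=bias_latent, tau=Q_ar), draws=1000)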
Edit 2: Out of T time points, I have data for T/2. The model infers the states very well for those T/2 points, but for the rest the values just hover around the initval of the registered variable, even when A is the identity matrix. Below is an example with A set to the identity: the time series should just take the previous time point and add some noise to it, yet it fits the data for half of the points and converges toward the initval of the registered random variable for the rest.
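For comparison, this is roughly what I would expect the unobserved half to look like when A = I (a minimal numpy sketch; Q_hat, bias_hat, and x_last are hypothetical stand-ins for a precision estimate, a bias estimate, and the last observed state, not variables from the model above):

import numpy as np

rng_np = np.random.default_rng(0)
cov = np.linalg.inv(Q_hat)          # Q_hat: assumed precision point estimate
x = x_last.copy()                   # x_last: last state with observed data
walk = []
for _ in range(T // 2):
    # random walk: previous state plus bias plus Gaussian innovation
    x = x + bias_hat + rng_np.multivariate_normal(np.zeros(len(x)), cov)
    walk.append(x)
walk = np.stack(walk)               # shape (T // 2, latent_factors)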