MvGaussianRandomWalk prediction


I am trying to model two time series (columns of observed) as Gaussian Random Walks with a drift that is a function of a linear regression. Everything works fine except when it comes to inspection and prediction.

with model:
    packed_L = pm.LKJCholeskyCov('packed_L', n=shape, eta=2, sd_dist=pm.HalfCauchy.dist(2.5))
    L = pm.expand_packed_triangular(shape, packed_L)
    Σ = pm.Deterministic('Σ', L.dot(L.T))
    one_mu = mu_variable(f_data, 'one')  # linear combination of regressors and regressor betas
    two_mu = mu_variable(f_data, 'two')
    mu = T.stack([one_mu, two_mu]).T
    obs = pm.MvGaussianRandomWalk('obs', mu=mu, chol=L, observed=target)
    trace = pm.sample(3000, cores=1)

KeyError: 'Unknown variable obs'

I would like to visually inspect how the model performed. In the stochastic volatility example, the author simply inspects the trace. In this case, however, the trace does not have an "obs" variable.

How can I check what the model predicted against the observed time series?

Many thanks,


Usually you can run pm.sample_ppc and display the result just like a trace, but in this case pm.MvGaussianRandomWalk does not have a random method, so it might be a bit difficult…

Thank you for your answer.

Do you have an idea of how I could do that? From what I can tell, MvGaussianRandomWalk is implemented using a MvNormal under the hood (link).

Could it be possible to leverage the random method of MvNormal? For example, MvGaussianRandomWalk could be extended with:

def random(self, point=None, size=None):
    # delegate to the underlying MvNormal innovation distribution
    return self.innov.random(point=point, size=size)

The goal is to return samples of the innovations and their standard deviation.

Would you see another way?

Many thanks,


In principle you can write a custom random generation function, which takes posterior samples as input and outputs a stochastic prediction. However, given that the distribution is a random walk, the generated predictions (posterior predictive samples) won't have the usual nice properties, e.g. a region of uncertainty tightly around the actual observations: the simulated paths diffuse away from the data as the walk progresses.
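A minimal sketch of what such a custom generator could look like, assuming you have already extracted the posterior draws of the per-step drift and the Cholesky factor from the trace into plain numpy arrays (the array names and shapes here are hypothetical, not part of the PyMC3 API): for each posterior draw, simulate correlated innovations from the MvNormal and take their cumulative sum to reconstruct the walk.

```python
import numpy as np

def simulate_rw(mu_samples, chol_samples, rng=None):
    """Simulate posterior predictive paths for a multivariate
    Gaussian random walk.

    mu_samples:   (n_draws, n_steps, dim) drift per time step
    chol_samples: (n_draws, dim, dim) Cholesky factor of the
                  innovation covariance
    Returns an array of paths with shape (n_draws, n_steps, dim).
    """
    rng = np.random.default_rng(rng)
    n_draws, n_steps, dim = mu_samples.shape
    paths = np.empty((n_draws, n_steps, dim))
    for i in range(n_draws):
        # correlated innovations: z ~ N(0, I), eps = z @ L.T
        z = rng.standard_normal((n_steps, dim))
        eps = z @ chol_samples[i].T
        # the walk is the cumulative sum of drift + innovations
        paths[i] = np.cumsum(mu_samples[i] + eps, axis=0)
    return paths
```

You can then plot, say, the pointwise median and a credible band of `paths` against the observed series; just expect the band to widen with time, since uncertainty in a random walk accumulates step by step.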