# LOO/WAIC metrics from pm.Potential based model?

I’m running a model inspired by the arma_example.py code (https://github.com/pymc-devs/pymc3/blob/master/pymc3/examples/arma_example.py), which uses pm.Potential.

I’d very much like to get the LOO/WAIC metrics for my model, to compare it with some other model. However, when I try to do that it fails with “The model does not contain observed values”. Is there a workaround for this?


It depends on how you implement the potential: if it is similar to a DensityDist, you should rewrite it as a custom distribution. If its logp cannot easily be evaluated on the observed data, this will be difficult. Recall also that LOO/WAIC are not straightforward with multivariate or multiple observed variables.

Ok, so by using DensityDist instead of Potential I would be able to use observed. This implies defining a custom likelihood function of the form:

`def likelihood(value):`

where value is a scalar corresponding to each single value of my observed data, passed to the function one at a time. Correct?

Now, because I’m interested in auto-regressive processes, I’d like to calculate the likelihood based on an arbitrary number of lags at the same time (I suppose – unless there is some math I’m missing that tells me I can do this without requiring all the lags of my AR(MA) process at once). Is that possible with a DensityDist and a custom likelihood function? In general is it possible to define custom timeseries-like probability distributions with DensityDist that rely on past lag values?

Yes, ideally your function returns a logp for each value (or each slice of value if it is a multi-dimensional RV); PyMC3 will compute the sum of the logps internally.

If you have a fixed set of lags (e.g., you want to compute lag=0, lag=1, … lag=5), the easiest approach I can think of is to have 6 different logps and then sum them together. If the number of lags is itself a random variable, or can change dynamically during inference, things become much more difficult.
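To make the "one logp per lag, then sum" idea concrete, here is a plain-NumPy sketch (the parameter values and the lag set are arbitrary; inside a PyMC3 model you would build the same expressions with Theano tensors instead):

```python
import numpy as np

def normal_logp(err, sigma):
    # elementwise log-density of err under Normal(0, sigma)
    return -0.5 * (err / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)

y = np.array([10., 13., 11., 14., 11., 12., 9., 11., 10., 12.])
mu, phi, sigma = 11.0, 0.1, 2.0   # made-up parameter values

lags = [1, 2, 3]                  # fixed, known set of lags
total_logp = 0.0
for k in lags:
    # regress y[t] on y[t - k]: one logp term per lag, then sum
    err = y[k:] - (mu + phi * y[:-k])
    total_logp += normal_logp(err, sigma).sum()
```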

Ok, I’m not sure I correctly understood what you are proposing, but I gave the following a try and it seems to work (that is, it gives the same estimates as my version with pm.Potential). It’s basically an attempt at writing my own AR(1) model:

```python
import numpy as np
import pymc3 as pm
from theano import shared


def build_model():
    # y2 is y1 lagged by one step (with a pre-sample value in front)
    y1 = shared(np.array([10, 13, 11, 14, 11, 12, 9, 11, 10, 12], dtype=np.float32))
    y2 = shared(np.array([15, 10, 13, 11, 14, 11, 12, 9, 11, 10], dtype=np.float32))

    def likelihood(y1, y2):
        nu_t = mu + phi * y2
        err = y1 - nu_t
        return pm.Normal.dist(0., sd=sigma).logp(err)

    with pm.Model() as ar_model:
        sigma = pm.HalfNormal('sigma', 5.)
        phi = pm.Normal('phi', 0., sd=2.)
        mu = pm.Normal('mu', 0., sd=10.)

        pm.DensityDist('like', likelihood, observed={'y1': y1, 'y2': y2})

    return ar_model
```

So by pre-lagging the observations and passing each lagged copy of the original vector y separately to the likelihood function, I’m able to write logps that reference all the lags in the calculation. Does this code make sense? The part I’m not clear about is why my likelihood function is able to reference phi, mu, and sigma even though it is defined outside the model context – is that supposed to be the case?