OK, I don’t know if I correctly understood what you’re proposing, but I gave the following a try and it seems to work (that is, it gives the same estimates as my version with pm.Potential). It’s basically an attempt at writing my own AR(1) model:

```
import numpy as np
import pymc3 as pm
from theano import shared


def build_model():
    # observations at time t (y1) and their first lag (y2)
    y1 = shared(np.array([10, 13, 11, 14, 11, 12, 9, 11, 10, 12], dtype=np.float32))
    y2 = shared(np.array([15, 10, 13, 11, 14, 11, 12, 9, 11, 10], dtype=np.float32))

    def likelihood(y1, y2):
        # conditional mean of the AR(1) process
        nu_t = mu + phi * y2
        err = y1 - nu_t
        return pm.Normal.dist(0., sd=sigma).logp(err)

    with pm.Model() as ar_model:
        sigma = pm.HalfNormal('sigma', 5.)
        phi = pm.Normal('phi', 0., sd=2.)
        mu = pm.Normal('mu', 0., sd=10.)
        pm.DensityDist('like', likelihood, observed={'y1': y1, 'y2': y2})
    return ar_model
```
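To make the pre-lagging explicit, here is a small sketch of how the two shared vectors relate to a single series `y` (the combined `y` is my reconstruction from the two arrays above; slicing like this generalizes to any lag):

```python
import numpy as np

# full series, oldest observation first (reconstructed from the two arrays)
y = np.array([15, 10, 13, 11, 14, 11, 12, 9, 11, 10, 12], dtype=np.float32)

y1 = y[1:]   # y_t     : drop the first element
y2 = y[:-1]  # y_{t-1} : drop the last element

print(y1)
print(y2)
```

Each pair `(y1[i], y2[i])` is then `(y_t, y_{t-1})`, which is exactly what the likelihood function consumes.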

So by pre-lagging the observations and passing each lag of the original vector y separately to the likelihood function, I’m able to write logps that reference all lags as part of the calculation. Does this code make sense? The part I’m not clear about is why my likelihood function is able to reference phi, mu, and sigma even though it is declared before they exist, outside of the model block – is that supposed to be the case?
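For what it’s worth, I think this is ordinary Python closure behavior rather than anything PyMC-specific – a minimal sketch with hypothetical names (no PyMC involved) showing that names in the enclosing function are looked up when the inner function is *called*, not when it is defined:

```python
def build():
    def inner():
        # `x` is resolved in the enclosing scope at call time,
        # so this works even though `x` does not exist yet
        # when inner() is defined.
        return x + 1

    x = 41  # assigned *after* inner() is defined
    return inner()  # only now is `x` looked up

print(build())  # → 42
```

Since DensityDist only calls `likelihood` after sigma, phi, and mu have been assigned inside the `with` block, the lookups succeed – at least that’s my reading.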