Is there a better way to implement this model?

I have some data v and I wish to forecast its next value. I am using the following model

v_i = \Phi_i \epsilon_i
\Phi_i = \omega + \alpha v_{i-1} + \beta \Phi_{i-1}

where the priors for \omega, \alpha, \beta are normal with \mu = 0 and \sigma = 5. They are also bounded such that \omega > 0, \alpha \geq 0, \beta \geq 0, \alpha + \beta < 1. \epsilon_i is exponentially distributed with E(\epsilon_i) = 1.
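For reference, the generative process can be simulated like so. This is only a sketch: the parameter values are arbitrary placeholders satisfying the constraints, and starting from \Phi_0 = \omega is my choice of initialisation (it matches the err0 = v[0] / omega line in the code below).

```python
import numpy as np

rng = np.random.default_rng(0)

# arbitrary placeholder parameters satisfying omega > 0, alpha, beta >= 0, alpha + beta < 1
omega, alpha, beta = 0.1, 0.3, 0.5

n = 1000
v = np.empty(n)
phi = np.empty(n)

phi[0] = omega                          # assumed initialisation: Phi_0 = omega
v[0] = phi[0] * rng.exponential(1.0)    # v_i = Phi_i * eps_i, eps_i ~ Exp(1)
for i in range(1, n):
    phi[i] = omega + alpha * v[i - 1] + beta * phi[i - 1]
    v[i] = phi[i] * rng.exponential(1.0)
```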

So I have managed to find a way to implement this, by rewriting \epsilon_i in terms of v_i, v_{i-1} and \epsilon_{i-1} (using \Phi_{i-1} = v_{i-1} / \epsilon_{i-1}):

\epsilon_i = v_i / (\omega + \alpha v_{i-1} + \beta v_{i-1} / \epsilon_{i-1})

and implementing it as follows:

import pymc3 as pm
from theano import scan, shared

def build_model():
    v = shared(v_array)  # v_array: the observed series as a numpy array
    with pm.Model() as model:
        BoundedNormal = pm.Bound(pm.Normal, lower=0.0)
        BoundedAlpha = pm.Bound(pm.Normal, lower=0.0, upper=1.0)
        omega = BoundedNormal('omega', mu=0, sigma=5.0)
        alpha = BoundedAlpha('alpha', mu=0, sigma=5.0)
        BoundedBeta = pm.Bound(pm.Normal, lower=0.0, upper=1 - alpha)
        beta = BoundedBeta('beta', mu=0, sigma=5.0)

        # \epsilon_0 = v_0 / \Phi_0, taking \Phi_0 = \omega
        err0 = v[0] / omega

        def calc_next(last_v, this_v, err, omega, alpha, beta):
            # \Phi_i = \omega + \alpha v_{i-1} + \beta \Phi_{i-1}, with
            # \Phi_{i-1} = v_{i-1} / \epsilon_{i-1} substituted in
            denom = omega + alpha * last_v + beta * (last_v / err)
            return this_v / denom

        err, _ = scan(fn=calc_next,
                      sequences=dict(input=v, taps=[-1, 0]),
                      outputs_info=err0,  # feeds \epsilon_{i-1} back in as err
                      non_sequences=[omega, alpha, beta])

        pm.Potential('like', pm.Exponential.dist(1).logp(err))
    return model

model = build_model()
with model:
    trace = pm.sample(draws=1000)
However, this doesn’t feel like the right way to use pymc3, and I was wondering if anyone could point me in the right direction. Furthermore, this method has some issues:

  1. If v contains zeroes then it fails with a "mean of empty slice" runtime warning and a bad-energy error
  2. It is very slow, taking hours to run on a v of length 9000
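On issue 1, a plausible mechanism (my guess from reading the recursion, worth verifying): a zero observation v_{i-1} makes \epsilon_{i-1} exactly zero, so the next scan step evaluates last_v / err = 0 / 0, and the resulting NaN propagates through the rest of the recursion into the log-likelihood, which would explain the bad energy. A minimal numpy sketch with placeholder parameter values:

```python
import numpy as np

last_v = np.float64(0.0)   # a zero observation v_{i-1}
err = last_v / 1.23        # then epsilon_{i-1} = v_{i-1} / Phi_{i-1} is exactly 0

with np.errstate(invalid="ignore"):
    # next step of the recursion: omega + alpha*v_{i-1} + beta*(v_{i-1}/epsilon_{i-1})
    denom = 0.1 + 0.3 * last_v + 0.5 * (last_v / err)  # 0/0 -> nan

print(np.isnan(denom))  # prints True
```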

Any pointers guys?