Getting rid of for loops for recursion of deterministic variables

I have the following code:

import pandas as pd
import pymc3 as pm
import matplotlib.pyplot as plt
import numpy as np
import arviz as az
import theano.tensor as tt
from scipy import optimize

seed = 68492
np.random.seed(seed)
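
# NOTE: x is not defined in this snippet; it is assumed to be a pandas DataFrame
# with columns 'Length', 'Binary' and 'Count'.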

basic_model = pm.Model()

with basic_model:

    lambda1 = pm.Gamma("lambda1", alpha=0.001, beta=0.001)
    pw = pm.Beta('pw', 1.0, 1.0)
    pu = pm.Beta('pu', 1.0, 1.0)
    
    mu = [1.0]*len(x['Length'])
    Y_obs = [1.0]*len(x['Length'])

    p = pm.Bernoulli("p", pw*x['Binary'] + pu*(1-x['Binary']), shape=len(x['Length']))
    z = pm.Bernoulli("z", p, shape=len(x['Length']))

    mu[0] = pm.Deterministic('mu_0', lambda1*x['Length'].values[0])
    Y_obs[0] = pm.Poisson("Y_obs_0", mu=(1-z[0])*mu[0]+0.0001, observed=x['Count'].values[0])

    for i in range(1,len(x['Length'])):
        if x['Length'].values[i-1] == x['Length'].values[i]:
            mu[i] = pm.Deterministic('mu_%i' % i, mu[i-1]*z[i-1] + lambda1*x['Length'].values[i])
        else: 
            mu[i] = pm.Deterministic('mu_%i' % i, lambda1*x['Length'].values[i])

        Y_obs[i] = pm.Poisson('Y_obs_%i' % i,
            mu = (1-z[i])*mu[i] + 0.0001,
            observed = x['Count'].values[i] )

    trace = pm.sample(7000, tune=2000, cores=1, return_inferencedata=True)

print(az.summary(trace, kind="stats"))

print(az.rhat(trace, var_names=['lambda1', 'pw', 'pu'], method="folded"))

mu[i] is the mean for the i-th data point, and it depends on z[i-1] and mu[i-1]. The problem is that the data size is very large, so I get the following error:
bracket nesting level exceeded maximum of 256

It works for smaller data, though.
How can I free the recursion for mu and Y_obs from the for loop?

Edit: I tried this for Y_obs, but I don’t know how to do the same for mu.

Y_obs = pm.Poisson('Y_obs', mu = ((1-z) * mu).sum(axis=1) + 0.0001, observed = x['Count'].values, shape=len(x['Length']) )

I have exactly the same problem where sample[i] depends on sample[i-1]. Have you been able to find a solution? Anyone else?

Perhaps it has to do with the way the model is formulated (using the for loop and string interpolation to generate each data point). However, I can’t immediately think of a good way to do it differently, and I see no obvious way of re-formulating this nested system in terms of higher-dimensional distributions.

Everything actually works very well for small data (10 or 20 samples), but if I go any higher I quickly run into errors and unstable solutions.

In general, you need to use a scan to do recursive computation; see the theano.scan documentation.
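
For reference, here is a minimal sketch of what that scan could look like for mu in the model above, replacing the for loop inside the with basic_model: block. The names lengths, same_as_prev, step and mu_rest are made up here; x, lambda1 and z are from the original post, and same_as_prev encodes the Length[i] == Length[i-1] condition:

import theano
import theano.tensor as tt

with basic_model:
    lengths = tt.as_tensor_variable(x['Length'].values)
    # 1.0 where Length[i] == Length[i-1], 0.0 otherwise (one entry per i >= 1)
    same_as_prev = tt.as_tensor_variable(
        (x['Length'].values[1:] == x['Length'].values[:-1]).astype('float64'))

    def step(length_i, same_i, z_prev, mu_prev, lam):
        # mu[i] = mu[i-1]*z[i-1] + lambda1*Length[i] if Length[i] == Length[i-1],
        # and just lambda1*Length[i] otherwise
        return same_i * mu_prev * z_prev + lam * length_i

    mu_rest, _ = theano.scan(
        fn=step,
        sequences=[lengths[1:], same_as_prev, z[:-1]],
        outputs_info=[lambda1 * lengths[0]],   # mu[0]
        non_sequences=[lambda1])

    mu = pm.Deterministic(
        'mu', tt.concatenate([tt.stack([lambda1 * lengths[0]]), mu_rest]))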

Although in the original post it looks like you could maybe just use a shift operation:

pm.Deterministic('mu', (mu * z)[:-1] + lambda1 * x['Length'].values[1:])
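
Once mu is a single vector (for example the scan output above), the Poisson likelihood no longer needs the loop or the per-point string names; a sketch along the lines of the OP’s edit, reusing the names from the original post:

with basic_model:
    Y_obs = pm.Poisson('Y_obs',
                       mu=(1 - z) * mu + 0.0001,
                       observed=x['Count'].values)

Note that the one-line shift above only helps if the previous mu values already exist as a tensor; because mu[i-1] in this model depends on mu[i-2] in turn, the recursion itself still needs the scan.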