Behrens' Bayesian Learner Model: How to share parameters between steps?

I wrote code like this:

import numpy as np
import pandas as pd
import pymc3 as pm
from scipy import stats

# True reward probability switches between blocks of 18 trials
p = np.repeat([.75, .75, .75, .25, .75, .25], 18)
env = stats.binom.rvs(1, p)
choose = list(map(bool, env))

# (not used in the model below; the observations are passed in as `env`)
observed_data = pd.DataFrame({"choose": choose,
                              "index": range(len(env))})

with pm.Model() as bayesian_learner_model:
    k = pm.Normal("k", mu=1, sigma=1, testval=0.6)
    k_ = pm.Deterministic('k_cap', pm.math.exp(k))
    v = pm.GaussianRandomWalk("v", mu=0.7, sigma=k_, testval=0.05, shape=len(env))
    v_ = pm.Deterministic('v_cap', pm.math.exp(v))

    r = []
    for ti in range(len(env)):
        if ti == 0:
            # Testvals are used to prevent a -inf initial probability
            r.append(pm.Beta(f'r{ti}', 1, 1, testval=0.5))
        else:
            w = r[ti - 1]
            # Precision of the Beta transition; renamed from `k` so it
            # doesn't shadow the Normal RV `k` defined above
            prec = 1 / v_[ti - 1]
            r.append(pm.Beta(f'r{ti}', alpha=w * (prec - 2) + 1,
                             beta=(1 - w) * (prec - 2) + 1, testval=0.5))

    r = pm.Deterministic('r', pm.math.stack(r))
    y = pm.Bernoulli("y", p=r, observed=env)
    
    trace = pm.sample(return_inferencedata=True)

This seems correct (maybe?) and samples fine with a small number of steps, but as soon as I add more steps I hit a -inf logp again.
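While debugging I tried checking the Beta transition parameters in isolation (this is just my own guess at where the -inf might come from, not a confirmed cause). A Beta distribution needs alpha > 0 and beta > 0, and with this parameterization alpha can go negative whenever v_cap exceeds 1 and the previous r is close to 1. The values below are purely illustrative:

```python
# Standalone check of the transition parameters used in the model:
#   alpha = w*(1/v - 2) + 1,  beta = (1 - w)*(1/v - 2) + 1
# If either falls to <= 0, the Beta logp (and so the model logp) is -inf.

def beta_params(w, v):
    prec = 1.0 / v  # precision of the Beta transition
    return w * (prec - 2) + 1, (1 - w) * (prec - 2) + 1

# Small volatility: both parameters are comfortably positive
print(beta_params(0.7, 0.1))   # roughly (6.6, 3.4) -> valid

# v_cap >= 1 combined with w near 1 pushes alpha below zero.
# Since v_cap = exp(v) and the walk drifts with mu = 0.7, values
# around exp(0.7) ~ 2 seem entirely reachable during sampling.
print(beta_params(0.95, 2.0))  # alpha = -0.425 -> invalid -> -inf logp
```

If this is really the cause, more steps would simply mean more chances for the walk to wander into that region, which would match the symptom of longer chains failing more often.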