There are actually 7-8 divergences per chain, so I think I might be instantiating the AR1 wrong. In any case, how would I generate y values from the learned parameters? Is it something like:
```python
with ar1_mat1:
    posterior_samples = pm.sample_posterior_predictive(trace, var_names=['tau', 'theta', 'center'])
```
and then, for each draw, something like:

```python
y[t] = y[t-1] * theta + np.random.normal(loc=center, scale=np.sqrt(1/tau))
```
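For concreteness, here is a minimal numpy-only sketch of that recursion, assuming you have already flattened the posterior draws of `theta`, `center`, and `tau` into 1-D arrays (the stand-in draws and the helper name are mine, not from the trace; `tau` is treated as a precision, so the innovation sd is `1/sqrt(tau)`):

```python
import numpy as np

def simulate_ar1(theta, center, tau, y0, T, rng):
    """Forward-simulate one AR(1) path of length T from a single
    posterior draw (theta, center, tau)."""
    y = np.empty(T)
    y[0] = y0
    sd = np.sqrt(1.0 / tau)  # tau is a precision
    for t in range(1, T):
        y[t] = theta * y[t - 1] + rng.normal(loc=center, scale=sd)
    return y

rng = np.random.default_rng(0)
# stand-in posterior draws; in practice pull these from the trace
theta_draws = rng.uniform(0.2, 0.8, size=100)
center_draws = rng.normal(0.0, 0.1, size=100)
tau_draws = rng.uniform(1.0, 5.0, size=100)

# one simulated path per posterior draw
paths = np.stack([
    simulate_ar1(th, c, ta, y0=0.0, T=50, rng=rng)
    for th, c, ta in zip(theta_draws, center_draws, tau_draws)
])
print(paths.shape)  # (100, 50)
```

Each row is one plausible trajectory under one posterior draw, so the spread across rows reflects parameter uncertainty as well as the innovation noise.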