Some problems with PyTensor

To set values in a tensor you can use pt.set_subtensor. For example:

import pytensor.tensor as pt

x = pt.zeros((3, 3))
x = pt.set_subtensor(x[0, 0], 3)

x.eval()
# Out: array([[3., 0., 0.],
#             [0., 0., 0.],
#             [0., 0., 0.]])

As for the model, it seems to be working fine. Did you use the mean of the posterior to make that plot? I would not suggest this; instead, use all the draws from pm.sample_posterior_predictive to see the whole distribution over possible data (or at least compute the HDI and plot that).
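As a minimal sketch of that idea (plain NumPy, with a hypothetical array of draws standing in for the output of pm.sample_posterior_predictive), you can summarize the draws with a credible band rather than collapsing them to the mean:

```python
import numpy as np

# Hypothetical posterior predictive draws, shape (n_draws, n_timepoints).
# In practice these would come from pm.sample_posterior_predictive.
rng = np.random.default_rng(0)
draws = rng.normal(loc=np.sin(np.linspace(0, 6, 50)), scale=0.3, size=(1000, 50))

mean = draws.mean(axis=0)
# A 94% equal-tailed interval as a simple stand-in for the HDI;
# arviz.hdi computes the highest-density interval proper.
lower, upper = np.percentile(draws, [3, 97], axis=0)
```

Plotting `lower` and `upper` as a shaded band around the data shows how much of the apparent misfit is actually covered by posterior uncertainty.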

The MCMC sample diagnostics all look good (r_hat is 1, chains are mixing, no divergences…), so there’s not really any reason to believe the sampler “converged to the wrong result”. You could plug the parameters into a numpy/scipy conv1d function if you’re worried about a computation mistake there. Otherwise, it means that your model is misspecified. You’re using a causal convolution (past values are used to compute the future value), so it’s expected to see this shifting forward in your outputs.
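To see why the shift is expected (and to rule out a computation mistake), here is a small NumPy check with a hypothetical impulse signal and kernel, showing that a causal convolution necessarily pushes the response forward in time:

```python
import numpy as np

signal = np.zeros(10)
signal[2] = 1.0                      # impulse at t = 2
kernel = np.array([0.5, 0.3, 0.2])   # hypothetical weights on past values

# 'full' mode then truncation gives a causal convolution:
# y[t] depends only on signal[t], signal[t-1], signal[t-2]
y = np.convolve(signal, kernel, mode="full")[: len(signal)]
# The response starts at t = 2 and trails forward (y[2]=0.5, y[3]=0.3, y[4]=0.2)
```

If your fitted parameters reproduce this kind of forward-trailing response, the model is doing exactly what a causal convolution should.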