Apparently inconsistent posteriors in autoregressive model

Thanks Jesse! The reference to the non-centered formulation, apart from being very elucidating, may also prove helpful down the road: I've read that it has sampling implications in hierarchical models, which could matter if I run into convergence issues along that path.
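For my own reference, this is how I understand the centered versus non-centered formulations of the same LogNormal variable (a minimal sketch with made-up hyperpriors, not my actual model):

import pymc as pm

with pm.Model() as sketch:
    mu = pm.Normal("mu", 0, 1)
    sigma = pm.HalfNormal("sigma", 1)

    # Centered: sample traffic_0 directly from its LogNormal prior
    # traffic_0 = pm.LogNormal("traffic_0", mu=mu, sigma=sigma)

    # Non-centered: sample a standard normal and transform it;
    # exp(mu + sigma * z) with z ~ Normal(0, 1) is LogNormal(mu, sigma)
    z = pm.Normal("z", 0, 1)
    traffic_0 = pm.Deterministic("traffic_0", pm.math.exp(mu + sigma * z))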

I now see that I'm setting a prior on traffic_0, as opposed to declaring traffic_0 as a LogNormal-distributed variable with parameters mu and sigma (at least I was right that it was something very basic I was missing!). As I understand it, the blue curve does represent the posterior distribution of innov in the AR model (strictly speaking these are not innovations, but rather the distribution towards which the AR process tends over time); the relevant difference with respect to traffic_0 is the use of .dist, I presume.
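To make sure I have the distinction right, here is a sketch of how I now read it (with pm.AR standing in for wherever the unnamed distribution actually gets used in my model):

import pymc as pm

with pm.Model() as sketch:
    mu = pm.Normal("mu", 0, 1)
    sigma = pm.HalfNormal("sigma", 1)

    # Registered model variable: it has its own prior and posterior
    traffic_0 = pm.LogNormal("traffic_0", mu=mu, sigma=sigma)

    # Unnamed distribution created with .dist: not tracked by the model,
    # only usable as an ingredient of another distribution ...
    init = pm.LogNormal.dist(mu=mu, sigma=sigma)

    # ... e.g. as the initial distribution of an AR process
    rho = pm.Normal("rho", 0, 0.2, shape=2)
    traffic = pm.AR("traffic", rho=rho, sigma=1.0, init_dist=init, steps=50)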

That brings me to a related question: is there a way to tie traffic_0 and innov together so that they follow the same distribution? I've been looking around and found this Frozen or non-adaptive distribution for random variable - #10 by GBrunkhorst, which led me to try:

from pytensor.tensor.random.utils import RandomStream

rand_stream = RandomStream()
...

    # inside the model context: draw traffic_0 as a deterministic
    # transform of raw standard-normal draws
    traffic_0 = pm.Deterministic(
        "traffic_0",
        pm.math.exp(mu + sigma * rand_stream.normal(0, 1, size=(lags,))),
        dims=("lags",),
    )

but it fails. Then I read that this approach worked for v3, and the way to go about it in v5 seems to be this: Incorrect step assignments with custom step function - #5 by ricardoV94, which looks quite involved (from my perspective of limited familiarity with the library). I generally take these complications as a sign that what I'm trying to do doesn't make much sense to begin with; but I can't help finding it reasonable to impose some structure on the model by forcing these variables to be identically distributed.
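For concreteness, the simplest version of the structure I have in mind is just sharing the hyperparameters, which (if I understand correctly) already makes the two variables identically distributed a priori, even if it doesn't tie the draws themselves (a sketch; the names are the ones from my question, the shapes are placeholders):

import pymc as pm

lags = 2  # placeholder value
with pm.Model(coords={"lags": list(range(lags))}) as sketch:
    mu = pm.Normal("mu", 0, 1)
    sigma = pm.HalfNormal("sigma", 1)

    # Shared mu and sigma: traffic_0 and the innovation distribution
    # follow the same LogNormal law a priori, without being the same draw
    traffic_0 = pm.LogNormal("traffic_0", mu=mu, sigma=sigma, dims="lags")
    innov = pm.LogNormal.dist(mu=mu, sigma=sigma)

That much works already, of course; what I was attempting with RandomStream above goes further, trying to freeze the draw itself, and that is where things break down for me.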