Oh, I didn’t mean how the logp is implemented when I said “centered”, so I probably misunderstood your point. I meant that the logp function expects absolute “concrete” values as inputs, not innovations (which is all that would matter for sampler efficiency).
You probably meant something like: whether internally we compute the logpdf inside a Scan graph, traversing the values and accumulating the individual logpdf terms, or whether we do a single vectorized call (directly, if that’s possible, or after “differencing” the values, whatever that means for the given timeseries).
The Scan in the rv_op is only concerned with generating random draws. You shouldn’t need to worry about RNGs if you want to compute the logpdf using (another) Scan inside the logp function. In that case you would probably be calling pm.logp(Normal.dist(some_mu, some_sigma), some_value) inside the Scan body, which returns a tensor expression without any RNGs whatsoever (it’s just the normal_logpdf expression).
The way you compute the logp does not affect sampler performance directly: the geometry is exactly the same, because the inputs and outputs are exactly the same. You may, however, find that the Scan implementation (or its gradient) has different speed or numerical precision than the vectorized implementation.