It’s quite a strange error, because:
- `model.debug()` finds no errors
- `pm.draw` works as expected
- the compiled logp and dlogp functions work as expected
- `pm.find_MAP` works as expected
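For concreteness, here's a minimal sketch of that checklist. The toy model is a stand-in of my own, not the model from your post:

```python
import pymc as pm

# Minimal stand-in model (your actual model goes here)
with pm.Model() as model:
    p = pm.Beta("p", 1.0, 1.0)
    pm.Bernoulli("y", p=p, observed=[0, 1, 1, 0, 1])

model.debug()                     # static checks on the model graph
pm.draw(model["p"])               # forward sampling works

logp_fn = model.compile_logp()    # compiled logp...
dlogp_fn = model.compile_dlogp()  # ...and its gradient
point = model.initial_point()
print(logp_fn(point), dlogp_fn(point))

pm.find_MAP(model=model)          # MAP optimization works
```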
So you really got me. If you use `pytensor.dprint` and look at the compiled logp graph, the input with unspecified shape (`TensorType(float64, (?,))`) is `p`. My guess is that the data in `p` is changing, which breaks shape inference for that object. But that doesn't explain why it only fails when you go to sample.
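Concretely, with the stand-in `model` from the snippet above:

```python
import pytensor

# Print the compiled logp graph; scan the inputs for anything typed like
# TensorType(float64, (?,)) -- the "?" marks a dimension whose length is
# unknown at compile time
pytensor.dprint(model.compile_logp().f)
```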
A priori, the switch way is more correct anyway; nested switches are not uncommon in the codebase. You could tidy it up a bit by importing `switch` from `pm.math` and dropping the `Deterministic` (do you actually need to inspect the trace for `p`?), as in the sketch below. But I'll tag @ricardoV94 for an expert opinion on why the slicing way doesn't work.
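A hypothetical two-changepoint sketch, with made-up names since I don't have your exact model in front of me:

```python
import numpy as np
import pymc as pm
from pymc.math import switch

# Illustrative data: a binary series with two changepoints
rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=100)
t = np.arange(100)

with pm.Model() as model:
    tau1 = pm.DiscreteUniform("tau1", lower=0, upper=49)
    tau2 = pm.DiscreteUniform("tau2", lower=50, upper=99)
    p1 = pm.Beta("p1", 1.0, 1.0)
    p2 = pm.Beta("p2", 1.0, 1.0)
    p3 = pm.Beta("p3", 1.0, 1.0)

    # Nested switch: p1 before tau1, p2 between tau1 and tau2, p3 after.
    # No Deterministic wrapper, so p is not stored in the trace.
    p = switch(t < tau1, p1, switch(t < tau2, p2, p3))

    pm.Bernoulli("y", p=p, observed=data)
```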
Another approach would be to use an AR(p), with p set quite large, together with shrinkage priors on the lag coefficients for automatic order selection. This is common in the bVAR literature, but there's no reason you couldn't do something similar here.
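A rough sketch of that idea, assuming a generic observed series. The Laplace prior here is just one shrinkage choice; the bVAR literature often uses Minnesota-style priors whose scale tightens with lag order:

```python
import numpy as np
import pymc as pm

data = np.random.default_rng(0).normal(size=200)  # placeholder series
p_max = 10  # deliberately large AR order

with pm.Model() as ar_model:
    # Shrinkage prior on the lag coefficients: unneeded higher-order
    # lags get pulled toward zero
    rho = pm.Laplace("rho", mu=0.0, b=0.1, shape=p_max)
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.AR(
        "ar",
        rho=rho,
        sigma=sigma,
        init_dist=pm.Normal.dist(0.0, 1.0),
        observed=data,
    )
```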