Hi,
Let’s say that I have an equation (one of many):
p_0 = \kappa g + \beta p_1.
\kappa and \beta are parameters, g is unobservable, and I have data on p_0. However, p_0 also depends on the expectation of p one period ahead (p_1). Is it possible to:
a) initialize pm.Model(), then
while drawn_num < draws:
    b) sample (a single time),
    c) predict what p_1 is given the current parameters/unobservable,
and repeat?
My first instinct is that there is no reason you can’t write an equation in your model to do step (c) “in situ”, without having to resort to computation in the sampler step. What keeps you from just introducing two variables into your model, P_0 and P_1, and linking them via the law of motion you wrote there?
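For concreteness, here is a minimal sketch of that idea using the current pymc API; the priors, shapes, and the placeholder `p0_data` are illustrative assumptions rather than anything from your model:

```python
import numpy as np
import pymc as pm

# Placeholder for your observed p_0 series; substitute your real data.
p0_data = np.random.randn(100)

with pm.Model() as model:
    kappa = pm.Normal("kappa", mu=0.0, sigma=1.0)
    beta = pm.Beta("beta", alpha=2.0, beta=2.0)  # illustrative prior on a discount-like parameter
    g = pm.Normal("g", mu=0.0, sigma=1.0, shape=len(p0_data))      # latent unobservable
    P_1 = pm.Normal("P_1", mu=0.0, sigma=1.0, shape=len(p0_data))  # next-period price variable
    sigma = pm.HalfNormal("sigma", sigma=1.0)

    # The law of motion links P_0 and P_1 directly inside the model:
    pm.Normal("P_0", mu=kappa * g + beta * P_1, sigma=sigma, observed=p0_data)

    idata = pm.sample()
```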
There may also be ways to avoid the expectation, or to handle it analytically. Is there any reason why you can’t back-shift that equation one time-step and estimate P_t = \frac{1}{\beta}P_{t-1} - \frac{\kappa}{\beta}g, so that it no longer depends on the expectation and instead only on observed quantities?
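A sketch of what estimating that back-shifted equation could look like, again with illustrative priors and a placeholder series `p` (giving g its own Gaussian prior per transition is an assumption here, not something from your question):

```python
import numpy as np
import pymc as pm

# Placeholder for the observed price series p_t; substitute your real data.
p = np.random.randn(100)

with pm.Model() as shifted_model:
    kappa = pm.Normal("kappa", mu=0.0, sigma=1.0)
    beta = pm.Beta("beta", alpha=2.0, beta=2.0)
    g = pm.Normal("g", mu=0.0, sigma=1.0, shape=len(p) - 1)  # latent g for each transition
    sigma = pm.HalfNormal("sigma", sigma=1.0)

    # P_t = (1/beta) * P_{t-1} - (kappa/beta) * g, plus observation noise
    mu_t = (1.0 / beta) * p[:-1] - (kappa / beta) * g
    pm.Normal("P_t", mu=mu_t, sigma=sigma, observed=p[1:])

    idata = pm.sample()
```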
Alternatively, using the same rearrangement, you could write down the expectation of P_1 and include it in your model as a pm.Deterministic. I believe it should look something like \mathbb E[P_1 \mid t] = \frac{1}{\beta}P_0 - \frac{\kappa}{\beta}\mathbb E[g \mid t], so you just need to know the expectation of g. That might be knowable given your model (for example, if you have a law of motion for g and it has Gaussian innovations).
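A sketch of how that could look, where `E_g` is a hypothetical stand-in for \mathbb E[g \mid t] that you would replace with whatever your law of motion for g implies:

```python
import numpy as np
import pymc as pm

# Placeholder for the observed p_0 series.
p0_data = np.random.randn(100)

with pm.Model() as expectation_model:
    kappa = pm.Normal("kappa", mu=0.0, sigma=1.0)
    beta = pm.Beta("beta", alpha=2.0, beta=2.0)
    # Stand-in for E[g | t]; replace with the quantity your model implies.
    E_g = pm.Normal("E_g", mu=0.0, sigma=1.0, shape=len(p0_data))

    # The implied one-step-ahead expectation, recorded in the trace:
    E_P1 = pm.Deterministic("E_P1", (1.0 / beta) * p0_data - (kappa / beta) * E_g)

    # ... plus the rest of your model (likelihood for p_0, law of motion for g, etc.)
```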
To directly answer the question, though: you can inject additional computations by adding custom sampler steps. This thread, and the code linked by @ckrapu therein, should be helpful.