Using a deterministic function for the mean of beta seems appropriate, but it’s unclear how to combine the two concepts (a deterministic function of the mean with a Gaussian Random Walk). Any thoughts?
I’m not sure you want a GaussianRandomWalk here, or at least not one with a non-zero mu: mu for the random walk is the innovation drift, not the level of the series. Is there a source paper or textbook from which you got this model?
If you’re looking for something like the mean-reverting vol as defined in formula (2) of section 2.1 here, you might try something like this:
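Roughly along these lines — a sketch, assuming the mean-reverting form beta_t = alpha + delta * (beta_tm1 - alpha) + eps_t; the priors, n_obs, and variable names are placeholders, and you’d still need to hook beta up to your observation equation:

```python
import numpy as np
import pymc3 as pm
import theano.tensor as tt

n_obs = 100  # placeholder length of the beta series

def mean_reverting_logp(alpha, delta, sigma):
    # Joint log-density of the series under
    # beta_t = alpha + delta * (beta_{t-1} - alpha) + eps_t,  eps_t ~ N(0, sigma^2)
    def logp(x):
        x_tm1 = x[:-1]  # beta_{t-1}
        x_t = x[1:]     # beta_t
        err = x_t - (alpha + delta * (x_tm1 - alpha))
        return tt.sum(pm.Normal.dist(mu=0.0, sd=sigma).logp(err))
    return logp

with pm.Model() as model:
    alpha = pm.Normal('alpha', mu=0.0, sd=1.0)          # long-run mean of beta
    delta = pm.Uniform('delta', lower=-1.0, upper=1.0)  # mean-reversion / persistence
    sigma = pm.HalfNormal('sigma', sd=1.0)              # innovation scale
    beta = pm.DensityDist('beta', mean_reverting_logp(alpha, delta, sigma),
                          shape=n_obs, testval=np.zeros(n_obs))
```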
@DanWeitzenfeld Here’s the paper: Bayesian Analysis of Stochastic Betas. I attempted to implement a Gibbs sampler using the author’s derivation of the conditional posteriors, but it was extremely suboptimal, and I was hoping to make use of PyMC3.
I think your sketched-out function is incorrect, because each beta_t depends on beta_t+1 from the previous iteration and beta_t-1 from the current iteration. The updating process has a snaking effect, I believe.
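(If I’m reading the derivation right, the full conditional in the Gibbs sweep is p(beta_t | everything else) ∝ p(y_t | beta_t) * p(beta_t | beta_tm1) * p(beta_tp1 | beta_t), so each draw looks both backward and forward.)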
If I understand your logp function correctly, it calculates err by shifting the index of x by 1 to the left, whereas the model requires drawing beta_t stepwise for each t? As in, you need to update the conditional posterior for each t, sample from it, and then use the newly sampled value to update and draw from the conditional posterior at t+1? Am I misunderstanding how logp works within the PyMC3 framework? That’s where I found the bottleneck when optimizing the model. I will try to implement it using your code, and also try theano.scan, to see if I get similar convergence.
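For example, as a quick sanity check (variable names are mine, not from the paper), I’d expect the scan recursion and your shifted indexing to produce the same conditional means:

```python
import numpy as np
import theano
import theano.tensor as tt

beta = tt.dvector('beta')
alpha = tt.dscalar('alpha')
delta = tt.dscalar('delta')

def step(beta_tm1, alpha, delta):
    # conditional mean of beta_t given beta_{t-1}
    return alpha + delta * (beta_tm1 - alpha)

# sequential version via scan
mu_scan, _ = theano.scan(fn=step,
                         sequences=[beta[:-1]],
                         non_sequences=[alpha, delta])

# vectorized version via index shifting
mu_shift = alpha + delta * (beta[:-1] - alpha)

f = theano.function([beta, alpha, delta], [mu_scan, mu_shift])
m_scan, m_shift = f(np.random.randn(10), 0.5, 0.9)
assert np.allclose(m_scan, m_shift)
```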
I think maybe the confusion here is that the model I’m suggesting is like Kalman smoothing, in that it estimates each beta_t based on the entire time series, whereas you’re looking for Kalman filtering, which estimates each beta_t using only the data available at time t. That said, I skimmed the paper, and I didn’t see anything that made me think they were filtering rather than smoothing. Let me know if you disagree.
Does using Kalman smoothing lead to a single estimate of beta for the entire time series in a given iteration? Filtering vs. smoothing are new concepts to me, so I’ll review them and compare to the paper. Alternatively, I applied an approach similar to your StandardStochasticVol class: I modified the GaussianRandomWalk class to account for a more complicated mu term (in this case mu = alpha + delta * (beta_tm1 - alpha)).
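For reference, a rough sketch of the kind of modification I mean (the class name, priors, and shapes are placeholders, not the paper’s notation):

```python
import theano.tensor as tt
import pymc3 as pm
from pymc3.distributions import Continuous

class MeanRevertingWalk(Continuous):
    """GaussianRandomWalk-style distribution whose innovations revert toward alpha:
    beta_t ~ Normal(alpha + delta * (beta_{t-1} - alpha), sd**2)."""
    def __init__(self, alpha=0.0, delta=0.0, sd=1.0, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.alpha = tt.as_tensor_variable(alpha)
        self.delta = tt.as_tensor_variable(delta)
        self.sd = tt.as_tensor_variable(sd)
        self.mean = tt.as_tensor_variable(0.0)

    def logp(self, x):
        beta_tm1 = x[:-1]
        beta_t = x[1:]
        mu_t = self.alpha + self.delta * (beta_tm1 - self.alpha)
        # sum of the stepwise Normal log-densities of the innovations
        return tt.sum(pm.Normal.dist(mu=mu_t, sd=self.sd).logp(beta_t))
```

Inside the model block it would then be used like `beta = MeanRevertingWalk('beta', alpha=alpha, delta=delta, sd=sigma, shape=n_obs)`.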