Hi everyone. I am working on a finance problem. I have a time series for each stock that tells me how much of it I should buy or sell on any given day. The problem is that the signal is noisy, and if I react to the noise, I get punished by brokerage commissions without any perceptible reward. It seems like I need something to tell me whether the signal has really changed (and if so, by how much). Then I can make an intelligent decision on whether to adjust my portfolio, i.e., whether the expected reward exceeds the transaction costs.
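For concreteness, this is the kind of decision rule I have in mind. It is only a stylized sketch: the names, the commission model, and the `expected_edge` estimate are hypothetical placeholders, not anything I actually run.

```python
def rebalance(current_pos, target_pos, expected_edge, cost_per_share):
    """Stylized no-trade-band rule (hypothetical): only move to the new
    target when the expected reward of the move exceeds the commission
    required to get there."""
    delta = target_pos - current_pos
    expected_reward = abs(delta) * expected_edge   # placeholder reward estimate
    cost = abs(delta) * cost_per_share             # placeholder commission model
    return target_pos if expected_reward > cost else current_pos
```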
In the Bayesian context, I see multiple references to this 2007 paper by Adams and MacKay, in which they model the “run length” of a time-series regime. The paper is elegantly summarized in this blog post, as well as in this talk, which also proposes a computationally efficient way to calculate the most likely path through the time-series regimes. I believe the speaker is referencing this GitHub repo, authored by the aforementioned blogger, which contains a NumPy/SciPy implementation of the Adams/MacKay algorithm.
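To check my own understanding, here is a minimal NumPy/SciPy sketch of the run-length recursion as I read it. This is not the linked GitHub code; the Gaussian model with known observation variance and the constant hazard rate are simplifying assumptions I made to keep it short.

```python
import numpy as np
from scipy import stats

def bocd_gaussian(x, hazard=1.0 / 250, mu0=0.0, var0=1.0, obs_var=1.0):
    """Run-length recursion in the spirit of Adams & MacKay (2007) for a
    Gaussian signal with known observation variance and a Normal prior on
    the per-regime mean. R[t, r] = P(run length = r after seeing x[:t])."""
    T = len(x)
    R = np.zeros((T + 1, T + 1))
    R[0, 0] = 1.0                                  # before any data, run length is 0

    # posterior over the regime mean, one entry per candidate run length
    post_mean = np.array([mu0])
    post_var = np.array([var0])

    for t in range(T):
        # predictive density of x[t] under each run-length hypothesis
        pred = stats.norm.pdf(x[t], post_mean, np.sqrt(post_var + obs_var))

        growth = R[t, : t + 1] * pred * (1 - hazard)   # the current run continues
        cp = np.sum(R[t, : t + 1] * pred * hazard)     # a new regime starts

        R[t + 1, 1 : t + 2] = growth
        R[t + 1, 0] = cp
        R[t + 1] /= R[t + 1].sum()

        # conjugate update of the mean for every hypothesis, then prepend
        # the prior for the "new regime just started" hypothesis
        new_prec = 1.0 / post_var + 1.0 / obs_var
        new_mean = (post_mean / post_var + x[t] / obs_var) / new_prec
        post_mean = np.concatenate(([mu0], new_mean))
        post_var = np.concatenate(([var0], 1.0 / new_prec))

    return R
```

My reading is that `R[1:, 0]` then gives the probability that a changepoint just occurred at each step, which is the quantity I would compare against my transaction costs; please correct me if I have the recursion wrong.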
Anyway, I don’t have a strong quantitative background, so it is hard for me to understand the basic theoretical underpinnings, let alone implement the code in PyMC3. However, I thought that online changepoint detection must be a frequent enough use case that perhaps others who are more versed in Bayesian statistics would be interested in taking a look and weighing in on what a PyMC3 implementation would look like. I also thought that for my specific problem I could start with a simpler solution, like GP smoothing. At this point, I feel that any advice from you guys would be super helpful. Thanks!
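In case it helps the discussion, this is roughly what I imagined the GP-smoothing version would look like in PyMC3. The kernel choice (Matern 5/2) and all the priors are placeholders I guessed at, so I am happy to be told this is the wrong setup.

```python
import numpy as np
import pymc3 as pm

# signal: 1-D numpy array of the noisy daily values for one stock
t = np.arange(len(signal))[:, None].astype(float)   # day index as a column

with pm.Model() as model:
    ell = pm.Gamma("ell", alpha=2.0, beta=0.1)       # length-scale in days
    eta = pm.HalfNormal("eta", sigma=1.0)            # signal amplitude
    sigma = pm.HalfNormal("sigma", sigma=1.0)        # observation noise

    cov = eta ** 2 * pm.gp.cov.Matern52(1, ell)
    gp = pm.gp.Marginal(cov_func=cov)
    y_ = gp.marginal_likelihood("y", X=t, y=signal, noise=sigma)

    trace = pm.sample(1000, tune=1000, target_accept=0.9)

with model:
    f_pred = gp.conditional("f_pred", Xnew=t)
    smoothed = pm.sample_posterior_predictive(trace, var_names=["f_pred"])
```

The idea would be to trade off the posterior mean of `f_pred` (and its uncertainty) against the commissions, instead of reacting to the raw signal, but I am not sure whether that is a reasonable substitute for proper changepoint detection.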