For a changepoint time series analysis I was trying out a modelling technique from Timeseers in PyMC3.
The basic idea is to use an indicator matrix A of dimension “length of the time series” x “number of changepoints”.
I created an artificial time series with two changepoints, after 1/3 and 2/3 of the series.
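For concreteness, here is a minimal sketch of that setup (the series length, slopes, and noise level are just illustrative; A[i, j] = 1 wherever t[i] lies past changepoint s[j]):

import numpy as np
import pymc3 as pm

n = 300
t = np.linspace(0, 1, n)
n_changepoints = 2
s = np.array([1 / 3, 2 / 3])             # changepoint locations
A = (t[:, None] > s[None, :]) * 1.0      # indicator matrix, shape (n, n_changepoints)

# piecewise-linear trend with a change of direction at each changepoint
k_true, m_true = 1.0, 0.0
delta_true = np.array([-2.0, 2.0])       # slope changes at the changepoints
gamma_true = -s * delta_true             # offsets that keep the trend continuous
trend_rnd = ((k_true + A @ delta_true) * t
             + (m_true + A @ gamma_true)
             + np.random.normal(0, 0.1, n))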
with pm.Model() as ts_model:
    # the initial growth rate and intercept are k and m respectively
    k = pm.Normal('k', mu=0, sigma=1)
    m = pm.Normal('m', mu=0, sigma=1)
    # after each changepoint the growth rate may change by delta
    delta = pm.Laplace('delta', mu=0, b=0.01, shape=n_changepoints)
    # adjust the intercepts so the trend stays continuous at each changepoint
    gamma = delta * -s
    growth = k + pm.math.dot(A, delta)
    offset = m + pm.math.dot(A, gamma)
    trend = growth * t + offset
    error = pm.HalfCauchy('sigma', beta=0.5)
    pm.Normal('observation', mu=trend, sigma=error, observed=trend_rnd)
    # return_inferencedata=True so the result matches the idata name (PyMC3 >= 3.10)
    idata = pm.sample(tune=1000, return_inferencedata=True)
The model runs fine, but it doesn’t detect any changepoints. The delta array should react to the changes in direction of the time series, but it doesn’t. I have the feeling that I’m applying the matrix operation incorrectly.
I suspect that it’s because your priors are strongly informative. If you loosen up the b on the delta parameters, things seem a bit better. Not sure if that’s the only problem, but at least the posterior predictive samples look better at that point (and the posterior delta values are far from the prior).
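For example, something like this (a sketch of the same model with only the delta prior widened; b=0.5 is just an illustrative value, not a tuned choice):

with pm.Model() as ts_model_wide:
    k = pm.Normal('k', mu=0, sigma=1)
    m = pm.Normal('m', mu=0, sigma=1)
    # wider Laplace prior lets delta move away from zero at the changepoints
    delta = pm.Laplace('delta', mu=0, b=0.5, shape=n_changepoints)
    gamma = delta * -s
    trend = (k + pm.math.dot(A, delta)) * t + (m + pm.math.dot(A, gamma))
    error = pm.HalfCauchy('sigma', beta=0.5)
    pm.Normal('observation', mu=trend, sigma=error, observed=trend_rnd)
    idata_wide = pm.sample(tune=1000, return_inferencedata=True)
    # check the fit against the data
    ppc = pm.sample_posterior_predictive(idata_wide)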