Random number of switch points in regression

Dear all,

I am working on a regression problem with an unknown number of switch points at which the parameters change value. I have tested a model with one switch point, which works very well on dummy data, as shown below. My problem is that the number of switch points is random (zero or more). Is there any way to build a PyMC3 model for such cases? The last figure shows dummy data with two switch points. I have googled for a while and haven't found anything useful. My idea was to make the number of switch points itself a random variable, but this does not work because that variable is a TensorVariable, which cannot be used in a Python for loop (a quick demo follows). Any suggestion for resolving this issue, or a more appropriate model for this problem, is greatly appreciated. Thanks in advance.
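
To make the loop issue concrete, here is a minimal sketch (using the same Geometric prior as in the failing model further down):

import pymc3 as pm

with pm.Model():
    n_segs = pm.Geometric('n_segs', p=0.5)
    try:
        # range() needs a real integer, but n_segs is a symbolic
        # TensorVariable whose value is only known during sampling
        for i in range(n_segs):
            pass
    except TypeError as err:
        print(err)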

The working model with a fixed number of switch points:
from pymc3 import (Model, HalfCauchy, HalfNormal, DiscreteUniform,
                   Normal, sample)
from pymc3.math import switch

# x, y (both scaled to [0, 1]) and size are the dummy data defined elsewhere
with Model() as sw_model:

    sigma = HalfCauchy('sigma', beta=10, testval=1.)

    # switch point, scaled to [0, 1] like x
    switchpoint = DiscreteUniform('switchpoint', 0, size) / size

    # Priors for pre- and post-switch intercepts and slopes
    pi1 = HalfNormal('pi1', 0.8)
    pi2 = HalfNormal('pi2', 0.8)
    b1 = HalfNormal('b1', 1)
    b2 = HalfNormal('b2', 1)
    decline_rate1 = HalfNormal('decline_rate1', 4)
    decline_rate2 = HalfNormal('decline_rate2', 4)

    # pick the pre- or post-switch parameter for each observation
    decline_rate = switch(switchpoint < x, decline_rate2, decline_rate1)
    pi = switch(switchpoint < x, pi2, pi1)
    b = switch(switchpoint < x, b2, b1) + 1e-5  # keep 1 / b finite
    t0 = switch(switchpoint < x, switchpoint, 0)

    # hyperbolic-decline mean with Gaussian noise
    likelihood = Normal('y', mu=pi / (1 + decline_rate * b * (x - t0)) ** (1 / b),
                        sd=sigma, observed=y)

    trace = sample(5000, njobs=4, progressbar=True)

The dummy data and simulation results:
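
For reference, dummy data with a single switch point can be generated along these lines (a minimal sketch; the true parameter values, noise level, and seed are made-up assumptions, not the ones behind the figure):

import numpy as np

np.random.seed(123)
size = 200
x = np.linspace(0, 1, size)
t0_true = 0.4  # true switch point

# pre-/post-switch values of the hyperbolic-decline parameters
pi_true = np.where(x > t0_true, 0.5, 0.9)
b_true = np.where(x > t0_true, 0.3, 0.8)
d_true = np.where(x > t0_true, 6.0, 2.0)
t0 = np.where(x > t0_true, t0_true, 0.0)

y = pi_true / (1 + d_true * b_true * (x - t0)) ** (1 / b_true)
y += np.random.normal(0, 0.02, size)  # observation noise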

The model for a random number of switch points, which is not working:
import numpy as np
from pymc3 import Geometric, Deterministic

with Model() as sw_model:
    # number of segments -- this is where it breaks: n_segs is a symbolic
    # TensorVariable with no concrete value at model-definition time
    n_segs = Geometric('n_segs', p=0.5)
    n = Deterministic('n', n_segs)

    # define switch points; np.arange(n - 1) fails because n is not a
    # Python integer
    tau = []
    tau.append(x.min())
    for i in np.arange(n - 1):
        # each switch point is lower-bounded by the previous one
        tau.append(DiscreteUniform('tau' + str(i + 1), tau[i], x.max()))
    tau.append(x.max())

    # assign every observation to a segment; range(n_segs) fails for the
    # same reason
    seg = np.zeros(x.shape)
    for i in range(n_segs):
        if i == 0:
            seg[(x >= tau[i]) & (x <= tau[i + 1])] = i
        else:
            seg[(x > tau[i]) & (x <= tau[i + 1])] = i
    seg = seg.astype(int)

    # per-segment parameters; a random variable cannot be used as a shape either
    pi = HalfNormal('pi', sd=0.8, shape=n_segs - 1)
    b = HalfNormal('b', sd=1, shape=n_segs - 1)
    d = HalfNormal('d', sd=4, shape=n_segs - 1)

    sigma = HalfCauchy('sigma', beta=10)

    x_bar = x - tau[seg]
    likelihood = Normal('y', mu=pi[seg] / (1 + d[seg] * b[seg] * x_bar) ** (1 / b[seg]),
                        sd=sigma, observed=y)

    trace = sample(500, njobs=4, progressbar=True)

The dummy data with multiple switch points:

Hmm, this sounds like a pretty difficult problem. My approach would probably be to fit several models with different numbers of switch points (e.g., 3, 4, 5, 6), and use model selection (LOO or WAIC) to pick the best-fitting one, as in the sketch below.
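
A minimal sketch of what that could look like, assuming PyMC3 3.x with Theano (the build_model helper, the ordered-uniform prior on the switch points, and the candidate range for k are my own assumptions, not part of the original model):

import numpy as np
import pymc3 as pm
import theano.tensor as tt

def build_model(x, y, k):
    """Hyperbolic-decline model with k fixed switch points (k >= 0)."""
    with pm.Model() as model:
        sigma = pm.HalfCauchy('sigma', beta=10)
        # one parameter set per segment (k switch points -> k + 1 segments)
        pi = pm.HalfNormal('pi', sd=0.8, shape=k + 1)
        b = pm.HalfNormal('b', sd=1, shape=k + 1)
        d = pm.HalfNormal('d', sd=4, shape=k + 1)
        if k > 0:
            # ordered switch points on the observed x range
            tau = pm.Uniform('tau', lower=x.min(), upper=x.max(), shape=k,
                             transform=pm.distributions.transforms.ordered,
                             testval=np.linspace(x.min(), x.max(), k + 2)[1:-1])
            # segment index of each point: how many switch points lie at or below it
            seg = tt.sum(tt.ge(x[:, None], tau[None, :]), axis=1)
            t0 = tt.concatenate([np.array([x.min()]), tau])[seg]
        else:
            seg, t0 = 0, x.min()
        mu = pi[seg] / (1 + d[seg] * b[seg] * (x - t0)) ** (1 / b[seg])
        pm.Normal('y', mu=mu, sd=sigma, observed=y)
    return model

# fit one model per candidate number of switch points (x, y as in the posts above)
models, traces = {}, {}
for k in range(4):
    models[k] = build_model(x, y, k)
    with models[k]:
        traces[k] = pm.sample(2000, tune=1000, chains=4)

# compare the candidates; the best WAIC indicates the preferred model
for k in models:
    print(k, pm.waic(traces[k], models[k]))

Whether the comparison picks out the right k will depend on how well separated the segments are, so it is worth checking the fits visually as well.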

Thanks, I think I will have to use this method then.