Pause and resume training a model that can change during training

  1. Train.
  2. Pause.
  3. Change the model (shift the data up by 1).
  4. Repeat.

Result:

The model is y = mx + c. Because start=None is passed on every call to pm.sample, the parameters are reinitialized at every resumption instead of continuing from the last sample, which is probably bad.
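
One way to avoid that (a minimal sketch, not verified against this exact setup) is to seed each resumed run with the last sample of the previous trace instead of start=None:

# sketch: trace.point(-1) returns the previous run's last sample as a
# dict; the rebuilt model uses the same variable names, so it can serve
# as the start point for the next run
start = trace.point(-1) if trace is not None else None
trace = pm.sample(ndraws, tune=nburn,
                  discard_tuned_samples=True, trace=trace,
                  start=start)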

New main function:

import numpy as np
import matplotlib.pyplot as plt
import pymc3 as pm
import theano.tensor as tt

# my_model, my_loglike and LogLikeWithGrad are assumed to be defined
# earlier (the straight-line model, its log-likelihood, and the
# gradient-aware Theano Op)

def main():
    # set up our data
    N = 10  # number of data points
    sigma = 1.  # standard deviation of noise
    x = np.linspace(0., 9., N)

    mtrue = 0.4  # true gradient
    ctrue = 3.  # true y-intercept

    truemodel = my_model([mtrue, ctrue], x)

    # make data
    np.random.seed(716742)  # set random seed, so the data is reproducible
    data = sigma * np.random.randn(N) + truemodel

    ndraws = 100  # number of draws from the distribution
    nburn = 10  # number of "burn-in points" (which we'll discard)

    # use PyMC3 to sample from the log-likelihood
    trace = None  # no previous trace yet; created on the first pass
    for i in range(5):
        # create our Op; data + i shifts the data up by one on each
        # resumption (step 3 above)
        logl = LogLikeWithGrad(my_loglike, data + i, x, sigma)
        with pm.Model() as opmodel:
            m = pm.Uniform('m', lower=-10., upper=10.)
            c = pm.Uniform('c', lower=-10., upper=10.)
            theta = tt.as_tensor_variable([m, c])
            pm.DensityDist('likelihood', lambda v: logl(v),
                           observed={'v': theta})
            # continue the previous trace; start=None re-initializes the
            # parameters on every resumption (see the note above)
            trace = pm.sample(ndraws, tune=nburn,
                              discard_tuned_samples=True, trace=trace,
                              start=None)

    # print the accumulated trace length; the five runs append to the
    # same trace, so this should show 5 * ndraws = 500 samples per chain
    print("trace_len", len(trace))

    # plot the traces
    _ = pm.traceplot(trace, lines={'m': mtrue, 'c': ctrue})

    # put the chains in an array (for later!)
    samples_pymc3_2 = np.vstack((trace['m'], trace['c'])).T
    plt.show()
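
As a quick sanity check on the stacked array (a minimal sketch): each resumption shifts the data up by one, so the pooled c draws mix intercepts from ctrue to ctrue + 4 and the mean should land somewhere between those values, while m is unaffected by the shift:

# summarize the pooled chains; m should sit near mtrue, while c mixes
# the five shifted intercepts (ctrue through ctrue + 4)
m_mean, c_mean = samples_pymc3_2.mean(axis=0)
print("posterior means: m = %.3f, c = %.3f" % (m_mean, c_mean))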