Reuse tuning for next sampling call

I am not sure whether I am doing it correctly (it still does not work):

import numpy as np
import pymc3 as pm

with pm.Model() as model2:
    # ... same model definition as in the first run ...
    # reuse the adapted potential and seed the dual-averaging step size
    step2 = pm.NUTS(potential=potential, max_treedepth=11)
    step2.step_adapt._log_bar = np.log(trace['step_size_bar'][-1])
    # no further tuning; start from the end points of the first run
    trace2 = pm.sample(draws=1000, step=step2, tune=0, cores=n_chains, start=start)

A few things I notice here:

  • The step sizes reported by step2.step_adapt.stats() are never updated. Could it be that the step object is copied internally?
  • In a similar vein: the step size I set manually (above) is not used. I can see this because step2.step_adapt._tuned_stats stays [] (see the check below).
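
For reference, this is roughly how I inspect the adaptation state after the second call (attribute names as I see them on the step_adapt object; the exact keys returned by stats() may differ between versions):

# check whether the dual-averaging state was updated / used
print(step2.step_adapt.stats())           # step sizes; these never change for me
print(step2.step_adapt._tuned_stats)      # always [], so my manual step size seems unused
print(np.exp(step2.step_adapt._log_bar))  # the value I set by hand before sampling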

edit: This procedure is basically what init_nuts in sampling.py does, so how does it work correctly there?
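
For completeness, the variables potential, trace, start, and n_chains above come from a first, tuned run along these lines (toy model only for illustration; I am assuming the adapted mass matrix can be read back from step1.potential, which may be exactly what breaks if the step object is copied):

import numpy as np
import pymc3 as pm

n_chains = 4

with pm.Model() as model1:
    # toy model, only for illustration
    mu = pm.Normal('mu', mu=0.0, sd=1.0)
    pm.Normal('obs', mu=mu, sd=1.0, observed=np.random.randn(100))

    # first run with tuning enabled
    step1 = pm.NUTS(max_treedepth=11)
    trace = pm.sample(draws=1000, tune=1000, step=step1, cores=n_chains)

# state I would like to carry over to the second call
potential = step1.potential                                # adapted mass matrix, if the parent-process step actually holds it
start = [trace.point(-1, chain=c) for c in trace.chains]   # last point of each chain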