I would like to set the start value explicitly when calling pm.sample() using start=, but it seems like the tuning does not use the input. For example, for this toy model:
```python
import pymc3 as pm

with pm.Model() as model:
    lnx = pm.Uniform('lnx', 0., 5.)
    infdata = pm.sample(tune=100, draws=0,
                        start={'lnx': 1.},
                        init='adapt_diag',
                        return_inferencedata=True,
                        discard_tuned_samples=False,
                        chains=1)
```
I would expect `infdata.warmup_posterior.lnx.values[0, 0] == 1.`, but it instead appears to be a randomly generated number.
Reposting here for visibility: code that does do the right thing (from the GitHub issue):
```python
import pymc3 as pm

with pm.Model() as model:
    lnx = pm.Uniform('lnx', 0., 5.)
    infdata = pm.sample(tune=100, draws=0,
                        # supply the start value for the *transformed* free
                        # variable, in transformed space
                        start={'lnx_interval__': lnx.transformation.forward(1.).eval()},
                        init='adapt_diag',
                        # absurdly large step size so the first proposal is
                        # rejected and the initial point gets recorded
                        step=pm.NUTS(step_scale=100),
                        return_inferencedata=True,
                        discard_tuned_samples=False,
                        chains=1)
```
Note that you have to target the transformed free variable (`lnx_interval__`), forward-transform your value yourself, and that the initial point is only recorded if the first proposal is rejected by the Metropolis correction — so you have to mess with the step size to guarantee a rejection.
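For reference, if I'm reading the transform correctly, pymc3's interval transform for a variable bounded on (a, b) is `log(x - a) - log(b - x)`, so you can sanity-check the transformed start value by hand. A minimal sketch in plain Python (not the pymc3 API):

```python
import math

def interval_forward(x, a, b):
    # Interval transform used by pymc3 for bounded variables:
    # maps (a, b) onto the whole real line.
    return math.log(x - a) - math.log(b - x)

# For Uniform('lnx', 0., 5.) with start value 1.:
interval_forward(1., 0., 5.)  # -log(4) ≈ -1.3863
```

This should match `lnx.transformation.forward(1.).eval()` from the snippet above.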
As I said in the issue – probably not a bug, but worth an apology to those who want to use it! Happy to help organize suggestions for improving the API into issues.
Could definitely use a code comment, but the first spot is: if a `start` dictionary is provided, it is turned into a list of initial points. It's news to me that `start` eventually becomes a list of dicts, but it means you could supply something like `start=[{'x': 1., 'y': 2.}, {'x': 4., 'y': -3.}]` to give each chain its own initial point.
The second spot is just after `if start is None:`, which is the default case and initializes `start = {}`. I think you could drop that second `if isinstance(start, dict)` check (at that point `start` can only be a dict if it was just initialized on line 520), but it is probably there to be cautious.
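The normalization described above can be sketched in isolation. This is a hypothetical helper illustrating the behavior, not the actual pymc3 internals; `normalize_start` and its exact semantics are my assumptions:

```python
def normalize_start(start, chains):
    """Broadcast `start` to one initial-point dict per chain (sketch of the
    behavior described above; hypothetical helper, not the pymc3 API)."""
    if start is None:
        start = {}           # default case: empty dict, sampler picks the point
    if isinstance(start, dict):
        return [start] * chains   # single dict -> same start for every chain
    return list(start)            # already a list of per-chain dicts

normalize_start({'x': 1.}, 3)   # → [{'x': 1.}, {'x': 1.}, {'x': 1.}]
normalize_start(None, 2)        # → [{}, {}]
```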