Leapfrog, adaptive step size, and time reversibility

Does the leapfrog step use adaptive step-size by default during both the tuning and sampling phases?
Is the leapfrog step with adaptive step-size time-reversible?

Nope, during the sampling phase all tuning is off.

Within one HMC or NUTS step the step size is constant, so in that regard it is still time-reversible.
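As a quick sketch (my own example, not from this thread): with a fixed step size you can check leapfrog reversibility numerically by integrating forward, flipping the momentum, and integrating again with the same settings, which lands back on the starting point up to round-off. This assumes a standard Gaussian target, so `grad_U(q) = q`.

```python
import numpy as np

# Hedged sketch: leapfrog with a *fixed* step size for H(q, p) = U(q) + p^2/2,
# where U(q) = q^2/2 (standard Gaussian target), so grad_U(q) = q.
def grad_U(q):
    return q

def leapfrog(q, p, step_size, n_steps):
    """One leapfrog trajectory: half kick, alternating drifts and kicks, half kick."""
    p = p - 0.5 * step_size * grad_U(q)
    for i in range(n_steps):
        q = q + step_size * p
        # Full momentum kick between position updates, half kick at the very end.
        p = p - (1.0 if i < n_steps - 1 else 0.5) * step_size * grad_U(q)
    return q, p

q0, p0 = np.array([1.0]), np.array([0.5])
q1, p1 = leapfrog(q0, p0, step_size=0.1, n_steps=25)
# Time reversibility: flip the momentum and integrate again with the SAME
# fixed step size; we recover the starting point (up to floating point).
q2, p2 = leapfrog(q1, -p1, step_size=0.1, n_steps=25)
print(np.allclose(q2, q0), np.allclose(p2, -p0))
```

If the step size were instead recomputed from feedback that differs between the forward and momentum-flipped trajectories, this round trip would no longer close.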

Is it ok if the gradient of my log-likelihood can change a lot during sampling?

Some papers talk about integrators that are time-reversible and also use an adaptive step size. Are those integrators too slow?

Why can’t we use an adaptive step size during the sampling phase?

You mean it changes even if you evaluate it on the same input? I don’t think NUTS works well with a stochastic log-likelihood.

You mean like RMHMC? In that case the step size adapts to the local geometry, but it does not get feedback from the acceptance rate. IIUC, if you adjust the step size using the acceptance rate as feedback, it would break detailed balance.

The log-likelihood function is deterministic. The gradient of the log-likelihood can change a lot when the gradient is evaluated at different points.

I guess the acceptance rate depends on the error of the integrator. If that is true, then using a step size that adapts to the local geometry would indirectly adapt the step size to the acceptance rate.
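A small numerical sketch of that link (my own example, under the assumption of a standard Gaussian target): the leapfrog energy error |ΔH| grows with the step size, and since the Metropolis acceptance probability is min(1, exp(−ΔH)), a larger integrator error means lower acceptance on average.

```python
# Hedged sketch: leapfrog energy error vs. step size for
# H(q, p) = q^2/2 + p^2/2 (standard Gaussian, so grad_U(q) = q).
def hamiltonian(q, p):
    return 0.5 * q**2 + 0.5 * p**2

def max_energy_error(q, p, step_size, n_steps):
    """Max |H - H0| along a leapfrog trajectory; MH acceptance is min(1, exp(-dH))."""
    h0, worst = hamiltonian(q, p), 0.0
    for _ in range(n_steps):
        p -= 0.5 * step_size * q   # half kick (grad_U(q) = q)
        q += step_size * p         # drift
        p -= 0.5 * step_size * q   # half kick
        worst = max(worst, abs(hamiltonian(q, p) - h0))
    return worst

# Same total integration time, different step sizes.
err_small = max_energy_error(1.0, 0.0, step_size=0.05, n_steps=100)
err_large = max_energy_error(1.0, 0.0, step_size=0.5, n_steps=10)
print(err_small < err_large)  # smaller steps -> smaller energy error -> higher acceptance
```

So a scheme that shrinks the step size where the geometry is tricky would, indirectly, also be pushing the acceptance rate up there.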

I remember the RMHMC paper says it needs the inverse and the gradient of the Fisher information matrix. That seems slow. TFP says they can do a form of RMHMC.

I was thinking about http://www.unige.ch/~hairer/preprints/revstep.pdf

That’s almost always the case.