Limit range of parameters in Hamiltonian Monte Carlo

My potential energy function is only defined for 0 < x < 1.
The prior distribution of x is uniform.
How can I limit the range of x in Hamiltonian Monte Carlo?

Thanks.

The easiest way I can think of is to inherit from the potential class and modify the random method to draw from a uniform, e.g.:
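Something along these lines (a minimal sketch only, assuming PyMC3's QuadPotentialDiag from pymc3.step_methods.hmc.quadpotential; the exact base class and its attributes may differ between versions):

import numpy as np
from pymc3.step_methods.hmc.quadpotential import QuadPotentialDiag

class UniformMomentumPotential(QuadPotentialDiag):
    """Diagonal quad potential whose momentum resampling is uniform."""

    def __init__(self, v, dtype=None):
        super().__init__(v, dtype=dtype)
        self._n = len(np.atleast_1d(v))  # number of dimensions

    def random(self):
        # draw the momentum from a uniform instead of the usual Gaussian
        return np.random.uniform(-1., 1., size=self._n).astype(self.dtype)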

Then you can pass the potential class you defined when initializing HMC.

I thought I could define the prior distribution with the following, from https://docs.pymc.io/notebooks/blackbox_external_likelihood.html:

import pymc3 as pm

# my_loglike, x, data and sigma are defined as in the linked notebook
with pm.Model():
    # set priors on model gradient and y-intercept
    m = pm.Uniform('m', lower=-10., upper=10.)
    c = pm.Uniform('c', lower=-10., upper=10.)

    # create custom distribution
    pm.DensityDist('likelihood', my_loglike,
                   observed={'theta': (m, c), 'x': x, 'data': data, 'sigma': sigma})

    # sample from the distribution
    trace = pm.sample(1000)

The main difficulty is limiting the range of the parameters during sampling. I plan to use NUTS, since I have defined a custom theano operation for the potential energy and its gradient.
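For concreteness, such an op can follow the pattern in the linked notebook; this is only a sketch, with a toy potential -log(x) - log(1 - x) standing in for my real one:

import numpy as np
import theano.tensor as tt

def potential(x):
    # toy stand-in for the real potential energy; defined only for 0 < x < 1
    return np.sum(-np.log(x) - np.log(1. - x))

def potential_grad(x):
    # gradient of the toy potential
    return -1. / x + 1. / (1. - x)

class PotentialGradOp(tt.Op):
    itypes = [tt.dvector]
    otypes = [tt.dvector]

    def perform(self, node, inputs, outputs):
        (x,) = inputs
        outputs[0][0] = potential_grad(x)

class PotentialOp(tt.Op):
    itypes = [tt.dvector]
    otypes = [tt.dscalar]

    def __init__(self):
        self.grad_op = PotentialGradOp()

    def perform(self, node, inputs, outputs):
        (x,) = inputs
        outputs[0][0] = np.array(potential(x))

    def grad(self, inputs, output_grads):
        (x,) = inputs
        # chain rule: scale the op's gradient by the incoming gradient
        return [output_grads[0] * self.grad_op(x)]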

To limit the range of the parameters, I could clip their values at every step of the Hamiltonian Monte Carlo. How can I do that?

Does a bounded variable already do this for me?
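For reference, a bounded variable in PyMC3 is created with pm.Bound; bounding a Normal to the unit interval here is just an illustration:

import pymc3 as pm

with pm.Model():
    BoundedNormal = pm.Bound(pm.Normal, lower=0., upper=1.)
    x = BoundedNormal('x', mu=0.5, sd=1.)  # x is constrained to 0 < x < 1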

Alternatively, could the following steps work (a rough sketch follows the list)?

  1. pm.sample for n steps, starting from the last point in the previous trace if available
  2. clip the values in the trace returned by pm.sample
  3. repeat.
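Roughly like this, for a hypothetical model mod with a single free variable x (I am not sure the result would be a valid posterior):

import numpy as np
import pymc3 as pm

eps = 0.01
start = None
with mod:  # hypothetical model containing the variable 'x'
    for _ in range(10):
        # step 1: sample, resuming from the last clipped point
        trace = pm.sample(100, start=start, tune=0, chains=1)
        # step 2: clip the sampled values away from 0 and 1
        x_clipped = np.clip(trace['x'], eps, 1. - eps)
        start = {'x': x_clipped[-1]}
        # step 3: repeat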

What happens if you put in an

x = pm.Uniform('x', lower=0, upper=1)

This works for a toy model:

import numpy as np
import pymc3 as pm
import theano.tensor as tt

x_sm = np.random.normal(size=280)
y_sm = -4 * x_sm + 0.45 + np.random.normal(size=280)

with pm.Model() as mod:
    beta_pr = pm.Normal('beta', 0., 5.)
    err = pm.HalfNormal('err', 2.)
    alpha_pr = pm.Uniform('alpha', 0., 1.)
    # this potential is only defined for 0 <= alpha_pr <= 1
    alpha_pot = pm.Potential('ap', -tt.arcsin(2 * alpha_pr - 1) - np.pi / 2)
    y_pr = pm.Deterministic('y_pr', alpha_pr + x_sm * beta_pr)
    # sample the prior before the likelihood is attached
    tr_pr = pm.sample(200, tune=1000, chains=9, cores=3)
    # attach the likelihood and sample the posterior
    lik = pm.Normal('lik', mu=y_pr, sd=err, observed=y_sm)
    tr = pm.sample(200, tune=1250, chains=9, cores=3)

The potential here is only defined for 0 ≤ alpha_pr ≤ 1, and I have no issues sampling either the prior or the posterior.

Undefined values in the potential energy function can cause problems:

Just to be safe, I am thinking about patching the singularities in the potential energy function at x = 0 and x = 1. The gradient of the potential energy function has many (2000) terms of the form 1/x and 1/(x - 1).

Before calculating the potential energy and the gradient, I would do the following within the custom theano operation that defines the potential energy: if |x| is near 0, set x to 0.01; if |x - 1| is near 0, set x to 0.99.
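A sketch of that check as a helper that could run inside the op's perform() (the 0.01 threshold matches the values above):

import numpy as np

def patch_singularities(x, eps=0.01):
    # move values near the singularities at x = 0 and x = 1 into the interior
    x = np.array(x, dtype=float, copy=True)
    x[np.abs(x) < eps] = eps            # near 0 -> 0.01
    x[np.abs(x - 1.) < eps] = 1. - eps  # near 1 -> 0.99
    return x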

Oh, I think I misunderstood your question - by potential energy function you meant the one in the model, not the one used during HMC, right?
In that case, yes, using a bounded variable should work - although the logp is improper, which means it likely does not integrate to 1.