Series of posts on implementing Hamiltonian Monte Carlo

I’m trying to understand how HMC works and found the implementation from scratch really useful.

What it’s missing, however, is the implementation of the `negative_log_prob` function that’s called in the main `hmc` function.

```python
for p0 in momentum.rvs(size=size):
    # Integrate over our path to get a new position and momentum
    q_new, p_new = leapfrog(
        samples[-1],
        p0,
        dVdq,
        path_len=path_len,
        step_size=step_size,
    )

    # Check Metropolis acceptance criterion
    start_log_p = negative_log_prob(samples[-1]) - np.sum(momentum.logpdf(p0))
    new_log_p = negative_log_prob(q_new) - np.sum(momentum.logpdf(p_new))
    if np.log(np.random.rand()) < start_log_p - new_log_p:
        samples.append(q_new)
    else:
        samples.append(np.copy(samples[-1]))
```

I understand (I think) that this function should return the negative log of the target (posterior) density, and that it gets evaluated at each current and proposed position as sampling progresses through the chain.

However, I’m having trouble conceptualizing what this function would actually look like.
Some skeleton code or a link would be really helpful. Thank you!
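For concreteness, here is a minimal sketch of what I imagine it might look like, assuming a standard normal target density (for a real Bayesian model, `negative_log_prob` would instead return minus the sum of the log-likelihood and log-prior at position `q`, and `dVdq` would be its gradient):

```python
import numpy as np

def negative_log_prob(q):
    # Negative log density of a standard normal target,
    # dropping the additive normalizing constant:
    # -log p(q) = 0.5 * q^T q + const
    return 0.5 * np.sum(q ** 2)

def dVdq(q):
    # Gradient of the negative log density above,
    # which the leapfrog integrator needs.
    return q
```

Is this roughly the right idea, i.e. that `negative_log_prob` itself is fixed and only its argument changes between Metropolis steps?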
