HMC sampling in tensorflow

My basic understanding of HMC comes from here.

Reference: https://www.tensorflow.org/api_docs/python/tf/contrib/bayesflow/hmc/chain

# Sampling from a standard normal (note `log_joint()` is unnormalized):
import tensorflow as tf
from tensorflow.contrib.bayesflow import hmc

class HMCChainSampling():
    def __init__(self):
        self.chain, self.acceptance_probs = hmc.chain(1000, 0.5, 2, tf.zeros(10),
                                                      self.log_joint, event_dims=[0])
        # Discard first half of chain as warmup/burn-in
        self.warmed_up = self.chain[500:]
        self.mean_est = tf.reduce_mean(self.warmed_up, 0)
        self.var_est = tf.reduce_mean(tf.square(self.warmed_up), 0) - tf.square(self.mean_est)
        self.init = tf.global_variables_initializer()

    @staticmethod
    def log_joint(x):
        # Unnormalized log-density of a standard normal
        return tf.reduce_sum(-0.5 * tf.square(x))

    def sampling(self):
        sess = tf.Session()

        with sess.as_default():
            self.init.run()
            mean_est, var_est = self.mean_est.eval(), self.var_est.eval()
        sess.close()
        return mean_est, var_est

# Renamed from `hmc` to avoid shadowing the imported `hmc` module
sampler = HMCChainSampling()
mean_est, var_est = sampler.sampling()
print("var_est: ", var_est, " mean_est: ", mean_est)

@junpenglao Am I on the right track?

For reference, this is part of the experiments we are currently running to try out different potential backends in https://github.com/pymc-devs/pymc4_prototypes

The previous discussion can be found in https://github.com/pymc-devs/pymc4_prototypes/pull/4

I see what you are trying to do here, although this is not quite what I have in mind…

So I would say that these experiments are mainly to see, with the current infrastructure in different backends, how we can glue the different elements together to do what we want (i.e., building models, performing inference), and then how we can turn that code into a stable, easy-to-use API.

Take theano as an example: there is no distribution class, so we actually need to write our own distributions (see the sketch below). We would likely face similar difficulties in MXNet.
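
A hand-written Normal log-density in theano could look roughly like this (a minimal sketch; normal_logp is an illustrative name, not an existing theano API):

import numpy as np
import theano.tensor as tt

def normal_logp(x, mu, sigma):
    # Hand-rolled Normal log-density, since theano has no distribution classes
    return tt.sum(-0.5 * tt.sqr((x - mu) / sigma)
                  - tt.log(sigma) - 0.5 * np.log(2 * np.pi))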

However, for tensorflow and pytorch, since distributions are already implemented, the experiment would mostly be about how to build a valid model using those distributions. To me, this is the major step, as I have no doubt that the HMC implementation can sample from an energy function (logp in our case) once it is written in tf or pytorch tensors. The difficult part is getting logp and dlogp automatically.
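
In tensorflow, for instance, the same unnormalized logp from the snippet above could instead be built from a built-in distribution, leaving dlogp to automatic differentiation (a sketch, assuming the TF 1.x tf.distributions module):

import tensorflow as tf

def log_joint(x):
    # Build logp from the built-in Normal distribution instead of by hand;
    # tf can then compute dlogp by automatic differentiation of this graph
    return tf.reduce_sum(tf.distributions.Normal(loc=0., scale=1.).log_prob(x))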

If you look at my experiment in pyro, e.g. Cell 35, pyro can extract logp and dlogp if you wrap the random variable as a function (the model). Specifically, when you call hmc_kernel = HMC(chain_gaussian2, **hmc_params), the hmc_kernel contains fields for logp, etc.
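
In pyro terms, that wrapping looks roughly like this (a sketch against the pyro.infer.mcmc API of the time; model here is my stand-in for chain_gaussian2, and the parameter values are illustrative):

import torch
import pyro
import pyro.distributions as dist
from pyro.infer.mcmc import MCMC, HMC

def model():
    # Wrapping the random variable as a function lets pyro trace it,
    # so the HMC kernel can extract logp and dlogp automatically
    return pyro.sample("x", dist.Normal(torch.zeros(1), torch.ones(1)))

hmc_kernel = HMC(model, step_size=0.5, num_steps=4)  # kernel now carries logp
mcmc = MCMC(hmc_kernel, num_samples=1000, warmup_steps=500)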

So for the experiment, I would say the first aim/step is to try to extract logp automatically.

I have been trying to implement it.

tf.contrib.bayesflow.hmc.chain requires an argument called `target_log_prob_fn`. I am unable to decipher what this is supposed to be.

import tensorflow as tf

def target_log_prob(x):
  # Unnormalized log-density of a standard normal
  return tf.reduce_sum(-0.5 * tf.square(x))

Using this gives me wrong values.
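
Concretely, the way I am wiring it together (following the docs example linked above; a minimal sketch composed from the snippets in this thread) is:

import tensorflow as tf
from tensorflow.contrib.bayesflow import hmc

def target_log_prob(x):
  # Unnormalized log-density of a standard normal
  return tf.reduce_sum(-0.5 * tf.square(x))

chain, acceptance_probs = hmc.chain(1000, 0.5, 2, tf.zeros(10),
                                    target_log_prob, event_dims=[0])

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  samples = sess.run(chain[500:])  # discard first half as warmup
  print("mean: ", samples.mean(axis=0), " var: ", samples.var(axis=0))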