Compute the KL divergence between two distributions
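
As background, the estimator we're after is, I think, the usual Monte Carlo one: draw samples from P and average the difference of the two log densities, KL(P || Q) ≈ (1/N) Σ_i [log p(x_i) − log q(x_i)]. That is why we need per-point logp values from each model. A minimal sketch of that final step, where logp_p and logp_q are hypothetical placeholders for callables giving each model's joint log density at a point:

import numpy as np

def mc_kl(samples_from_p, logp_p, logp_q):
    '''Monte Carlo estimate of KL(P || Q), given samples drawn from P.

    logp_p and logp_q are placeholder callables that map a point (e.g. a
    dict of variable values) to that model's joint log density.
    '''
    return np.mean([logp_p(x) - logp_q(x) for x in samples_from_p])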

Here’s some code we were using to try to figure this out:

from typing import Dict

import numpy as np

def display_factored_point_logp(m: SimpleModel,
                                point: Dict[str, np.ndarray]) -> None:
    '''Display the by-variable factors of the logp function at point.'''
    # SimpleModel and filtered_vars are helpers defined elsewhere in our code.
    with m.model:
        good_varnames = filtered_vars(m)
        for name, RV in m.named_vars.items():
            if name in good_varnames:
                try:
                    # Each random variable contributes one factor to the joint logp.
                    lp = RV.logp(point)
                    print(name, lp)
                except AttributeError:
                    # Deterministic and transformed variables lack a .logp here.
                    print('{0} has no logp value.'.format(name))
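
For what it's worth, a call looks something like this, using PyMC3's model.test_point as the point (assuming SimpleModel wraps a pymc3.Model in its .model attribute, which the with statement above suggests):

display_factored_point_logp(m, m.model.test_point)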

The problem is that we get a lot of these “has no logp value” errors. I think the right way to handle them is to look up each variable's transformed counterpart and take the logp from that instead, but I'm not sure how to do it; a sketch of what I have in mind is below.
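
Something like the following is the idea. It assumes, as in PyMC3, that a transformed variable (e.g. sigma) keeps its free counterpart (e.g. sigma_log__), which does have a logp, in a .transformed attribute; I haven't verified that this covers every case:

def factor_logp(RV, point):
    '''Return RV's contribution to the joint logp at point, or None.'''
    try:
        return RV.logp(point)
    except AttributeError:
        pass
    # Assumption: PyMC3's TransformedRV stores the free (transformed)
    # variable, which does carry a logp, in .transformed. Note that point
    # must then contain the transformed name (e.g. 'sigma_log__'), as
    # model.test_point does.
    transformed = getattr(RV, 'transformed', None)
    if transformed is not None:
        return transformed.logp(point)
    # Deterministics genuinely contribute no logp factor.
    return None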