Compute the KL divergence between two distributions

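To make the goal concrete, here is a minimal sketch of the kind of estimate I have in mind: a Monte Carlo approximation of KL(p ‖ q) = E_p[log p(x) − log q(x)], written with plain `scipy.stats` distributions rather than PyMC variables (the function name `mc_kl` and the Gaussian example are just for illustration):

```python
import numpy as np
from scipy import stats

def mc_kl(p, q, n=100_000, seed=0):
    """Monte Carlo estimate of KL(p || q) = E_p[log p(x) - log q(x)].

    `p` and `q` are frozen scipy.stats distributions. Samples are drawn
    from `p`, and the average log-density ratio is the KL estimate.
    """
    x = p.rvs(size=n, random_state=seed)
    return np.mean(p.logpdf(x) - q.logpdf(x))

# Example: two Gaussians, where the KL divergence has a closed form:
# KL = log(s2/s1) + (s1^2 + (m1 - m2)^2) / (2 * s2^2) - 1/2
p = stats.norm(loc=0.0, scale=1.0)
q = stats.norm(loc=1.0, scale=2.0)
est = mc_kl(p, q)
exact = np.log(2.0) + (1.0 + 1.0) / (2 * 4.0) - 0.5
```

With PyMC variables the same idea would apply, except the log densities would come from each variable's `logp`, which is where the transformed-variable question below comes in.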
If we want to do this variable-wise, as in my example code (sorry for messing that up), I think we need to check whether the variable has a transformed counterpart and, if so, take the logp from the transformed variable instead.
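A sketch of that dispatch logic, based on my reading of PyMC3's API (where an automatically transformed RV exposes the unconstrained free RV as `var.transformed`); the `_Dummy` class is a hypothetical stand-in so the dispatch can be shown without a real model:

```python
class _Dummy:
    """Hypothetical stand-in mimicking a PyMC3 random variable."""
    def __init__(self, logp, transformed=None):
        self.logp = logp
        if transformed is not None:
            self.transformed = transformed

def logp_term(var):
    """Pick the logp that actually enters the joint density.

    Assumption: as in PyMC3, a constrained RV (e.g. a HalfNormal sigma)
    carries a `.transformed` attribute pointing at the free RV on the
    unconstrained space (e.g. sigma_log__), and it is that variable's
    `.logp` the model sums over; untransformed RVs use their own `.logp`.
    """
    if hasattr(var, "transformed"):
        return var.transformed.logp
    return var.logp

plain = _Dummy(logp="logp(x)")
constrained = _Dummy(
    logp="logp(sigma)",
    transformed=_Dummy(logp="logp(sigma_log__)"),
)
```

Here `logp_term(plain)` would return the variable's own logp, while `logp_term(constrained)` would return the logp of the transformed variable, which is what I think we want for the variable-wise KL.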

Sorry if this is a dumb question, but I'm having trouble locating the actual logp code, because of the multiple layers of optimization needed to make it run efficiently.