KL-Divergence between prior and posterior

I would like to compute the KL divergence between a prior and the corresponding marginal posterior, to get a sense of how much information the prior provides to the updated posterior. My particular problem is that I seem to have a “weakly identifiable” hyperparameter, or put another way, a posterior geometry with a long flat ridge of some sort. I saw a post on Discourse that attempted this. Is there a pre-built way to do this in PyMC3? Any recommendations?
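For concreteness, one approach I have been considering is to estimate the marginal KL directly from samples: fit a kernel density estimate to the marginal posterior draws and take the Monte Carlo average of log q(theta) - log p(theta) against the analytic prior density. The sketch below uses a toy model (the variable name theta, the Normal prior, and the fake data are placeholders, not my actual model):

import numpy as np
import pymc3 as pm
from scipy import stats

# Toy model -- names and priors are placeholders, not the real model.
with pm.Model() as model:
    theta = pm.Normal('theta', mu=0.0, sigma=1.0)
    pm.Normal('y', mu=theta, sigma=1.0, observed=np.random.randn(50))
    trace = pm.sample(2000, tune=1000, chains=2)

post = trace['theta']                       # marginal posterior samples for theta
q = stats.gaussian_kde(post)                # density estimate of the marginal posterior
prior = stats.norm(loc=0.0, scale=1.0)      # the same prior, written analytically

# Monte Carlo estimate of KL(posterior || prior) = E_post[log q(theta) - log p(theta)]
kl = np.mean(np.log(q(post)) - prior.logpdf(post))
print('estimated KL(posterior || prior):', kl)

This only handles one scalar marginal at a time, though, and I don't know whether something equivalent already exists in PyMC3.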

Hi. I was the one who asked that earlier question, and I ended up doing what @junpenglao advised. Does this not work for you?

Hi! Thanks for responding. Was this the code you used? How did you alter it to get the final solution working?

from typing import Dict

import numpy as np


def display_factored_point_logp(m: SimpleModel,
                                point: Dict[str, np.ndarray]) -> None:
    '''Display the by-variable factors of the logp function at point.'''
    with m.model:
        good_varnames = filtered_vars(m)
        for (name, RV) in m.named_vars.items():
            if name in good_varnames:
                try:
                    # Evaluate each variable's logp separately so a NaN or
                    # -inf can be traced back to the variable producing it.
                    lp = RV.logp(point)
                    print(name, lp)
                except AttributeError:
                    print('{0} has no logp value.'.format(name))

This was the function I was using to try to figure out what was going on in a pathological case (where there was a NaN or infinite logp).
I’ll try to write up the logp computation, post it here, and then we can both experiment with it.
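In the meantime, here is a minimal, self-contained version of the same per-variable logp check on a throwaway model, without my SimpleModel / filtered_vars helpers (the model and variable names below are purely illustrative). PyMC3's built-in model.check_test_point() reports essentially the same thing at the test point:

import numpy as np
import pymc3 as pm

# Throwaway model, purely to illustrate the per-variable logp check.
with pm.Model() as toy_model:
    mu = pm.Normal('mu', mu=0.0, sigma=10.0)
    sigma = pm.HalfNormal('sigma', sigma=5.0)
    pm.Normal('obs', mu=mu, sigma=sigma, observed=np.random.randn(20))

point = toy_model.test_point            # the model's default starting point
for rv in toy_model.basic_RVs:          # free RVs plus observed RVs
    print(rv.name, rv.logp(point))      # a NaN or -inf here flags the culprit

# Built-in shortcut: per-variable logp evaluated at the test point.
print(toy_model.check_test_point())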

That discussion meandered a bit. I was originally having trouble computing the logp because of very small terms, which turned out to be because the distribution itself was pathological, and the thread then moved on to inspecting the logp to find the pathology in the distribution.