Compute the KL divergence between two distributions

Thank you very much for this advice. We found the bug in the model and now it is working very nicely.

What we would like to do is compute the KL divergence between the prior and the posterior, but only for a subset of the variables. I think we almost have the answer with this:

logps = []
for point in trained_trace:
    point_logp = 0.0
    for variable in interesting_variables:
        point_logp += variable.logp(point)
    logps.append(point_logp)
return np.mean(logps)
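For context, a loop like the one above gives one half of a Monte Carlo KL estimate: averaging log q(θ) − log p(θ) over posterior draws approximates KL(q ‖ p). A minimal NumPy-only sketch with made-up Gaussian prior and posterior (all the numbers here are hypothetical, chosen so the estimate can be checked against the closed form for two Gaussians):

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_logpdf(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

# Hypothetical example: posterior q = N(1, 0.5), prior p = N(0, 1).
mu_q, s_q, mu_p, s_p = 1.0, 0.5, 0.0, 1.0
samples = rng.normal(mu_q, s_q, size=100_000)  # draws from the posterior

# Monte Carlo estimate: E_q[log q(x) - log p(x)]
kl_mc = np.mean(normal_logpdf(samples, mu_q, s_q) - normal_logpdf(samples, mu_p, s_p))

# Closed form for two univariate Gaussians, for comparison
kl_exact = np.log(s_p / s_q) + (s_q**2 + (mu_q - mu_p) ** 2) / (2 * s_p**2) - 0.5
```

The per-sample sum over `interesting_variables` in the snippet above plays the role of `normal_logpdf(samples, ...)` here, once for the posterior and once for the prior.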

Our problem is defining interesting_variables. I tried starting from named_vars, removing the transformed variables, and then removing the ones that weren't interesting. But that led to problems, because the model's logp function delegates computation of some terms to the transformed variables. So I think the loop above needs to drop the user-facing variables that have transformed counterparts and delegate to the right transformed ones instead. Are the transformed ones only the _log__ ones, or do we also need to do something with the Bound ones?
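If it helps, the selection step can be done purely on names: as far as I know, PyMC3 names an auto-transformed free RV by appending the transform's name, e.g. _log__, _interval__, and for Bound variables _lowerbound__ / _upperbound__. A sketch of mapping user-facing names to the free RVs that actually carry the logp terms (the variable names and suffix list here are illustrative assumptions, not taken from your model):

```python
# Hypothetical names as they might appear in model.named_vars:
free_rv_names = ["mu", "sigma_log__", "p_interval__", "lam_lowerbound__"]

# Suffixes PyMC3 appends for common automatic transforms (assumed list;
# check model.free_RVs on your actual model to see which apply).
TRANSFORM_SUFFIXES = ("_log__", "_interval__", "_lowerbound__", "_upperbound__")

def transformed_name(user_name, free_rv_names):
    """Map a user-facing variable name to the free RV that carries its
    log-probability: either the variable itself or its transformed version."""
    for name in free_rv_names:
        if name == user_name:
            return name
        for suffix in TRANSFORM_SUFFIXES:
            if name == user_name + suffix:
                return name
    raise KeyError(user_name)

interesting = [transformed_name(n, free_rv_names) for n in ("sigma", "lam")]
# → ["sigma_log__", "lam_lowerbound__"]
```

With a mapping like this, interesting_variables in the loop would be the free RVs looked up by these names, so the transformed terms (including their Jacobians) are the ones being summed.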
