I would like to compute the Kullback-Leibler divergence between two distributions in PyMC3. I *believe* that the `logp` methods give me most of what I need, but I could use some help with the PyMC3 innards:

- How do I get the `logp` value for a sample (an assignment of values to all the variables), as opposed to just a single var-value assignment? Is it as simple as invoking the `logp` method on all the variables and then summing? Is there a way to use a `Deterministic` random variable to get me the `logp`?

I see this in the discussion of the `Model` class:

> `logp_nojact`: Theano scalar of log-probability of the model
>
> `logpt`: Theano scalar of log-probability of the model

I'm not sure why there are two names with the same description, or whether and how I could use one of these to get a value. Could I assign `model.logpt` to a `Deterministic` variable?

- Is there a way to transfer an assignment of values to variables from one model to another? That is, if I could compute the logp for samples in a trace generated from model P_1 along the lines described above, could I somehow transfer the samples from P_1 to P_2 and then get the logp relative to P_2?

Thanks! I suspect that the answer is blindingly obvious to anyone with a better understanding of PyMC3, but I don't see it myself.