Compute Shannon Entropy with pymc4?

Hello everyone,

Apologies if this is a silly question.
I am not very knowledgeable in information theory, but I would like to compute the Shannon entropy of the posterior distribution that pymc computes. Is there a way to do this with pymc?

Maybe this topic has already been discussed somewhere, but from a search I haven't found anything. Apologies if this is a repeat, and thanks for the help.


The idata that pm.sample returns has the log probability of each observation under the posterior distribution, so you can use that to apply the definition of Shannon entropy. Assuming you extract the log probabilities as logp, the entropy will be -(np.exp(logp) * logp).sum()
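For concreteness, here's a minimal sketch of that recipe. The toy model, the data, and the y_obs variable name are placeholders for illustration, and note that the pointwise log likelihood is only stored if you request it (e.g. via idata_kwargs={"log_likelihood": True} in pm.sample, or with pm.compute_log_likelihood afterwards):

```python
import numpy as np
import pymc as pm

# Toy data and model, purely for illustration
rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=1.0, size=100)

with pm.Model():
    mu = pm.Normal("mu", 0.0, 10.0)
    pm.Normal("y_obs", mu=mu, sigma=1.0, observed=data)
    # Ask for the pointwise log likelihood to be stored in idata
    idata = pm.sample(idata_kwargs={"log_likelihood": True})

# Log probability of each observation, shape (chain, draw, observation)
logp = idata.log_likelihood["y_obs"].values

# Shannon entropy as suggested above (natural log, i.e. in nats)
entropy = -(np.exp(logp) * logp).sum()
print(entropy)
```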

I’ve seen it computed in base 2 sometimes; if you want that you’ll have to do a change of log base.
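Since log2(x) = ln(x) / ln(2), the conversion is just a division, e.g. applied to the entropy variable from the sketch above:

```python
# Convert the entropy from nats (natural log) to bits (base-2 log)
entropy_bits = entropy / np.log(2)
```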


@jessegrabowski Thanks a lot!