Need help drawing an RV from a custom prior function

Hi everyone,

The model I’m building at the moment requires that a random variable, r, be drawn from a modified exponential distribution that looks like this:
p(r|L)=\frac{1}{2L^3}r^2\exp\left(-\frac{r}{L}\right),

where L is a free parameter.

I recently moved to PyMC3 from PyStan, where I’ve implemented this prior before. For those familiar with Stan, the way I set this up there was:

functions {
    real custom_lpdf(real r, real L) {
        return log((1 / (2 * L^3)) * (r * r) * exp(-r / L));
    }
}
...
model {
    r ~ custom(L);
}

I’m struggling to find guidelines on how to implement this same feature in PyMC3. Does anybody have experience setting up custom prior functions in PyMC3 that could point me in the right direction?

Cheers!

In PyMC3, you would usually use pm.DensityDist for this:

import pymc3 as pm
import theano.tensor as tt

def custom_lpdf(L):
    def conditional_logp(r):
        # log of (1/(2*L^3)) * r^2 * exp(-r/L)
        return tt.log((1./(2.*L**3)) * (r*r) * tt.exp(-r/L))
    return conditional_logp

with pm.Model() as m:
    L = pm.HalfNormal('L', 1., testval=.1)
    r = pm.DensityDist('r', custom_lpdf(L), 
                       transform=pm.distributions.transforms.log,
                       testval=.1)
    trace = pm.sample()

A couple of notes here:

  • I use a transform, since the custom logp is a modified Exponential and I assume it is only defined on the positive reals.
  • I set testval to .1, as otherwise the default initial value of 0. is undefined for your distribution.

You can also have a look at:
https://docs.pymc.io/Probability_Distributions.html#custom-distributions
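
As a quick sanity check you can compare the prior draws against the analytic density. A minimal sketch, assuming matplotlib is available; since L itself varies across draws, plugging in its posterior mean is only a rough comparison:

import numpy as np
import matplotlib.pyplot as plt

# prior draws of r from the trace of the model above
samples = trace['r']

# analytic p(r|L), evaluated at the posterior mean of L as a rough reference
L_hat = trace['L'].mean()
r_grid = np.linspace(1e-3, samples.max(), 200)
pdf = r_grid**2 * np.exp(-r_grid / L_hat) / (2. * L_hat**3)

plt.hist(samples, bins=50, density=True, alpha=0.5, label='draws')
plt.plot(r_grid, pdf, label='p(r|L) at mean L')
plt.xlabel('r')
plt.legend()
plt.show()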


If I understand correctly, the log-probability of your modified exponential distribution is

\log p(r|L)=2\log(r) - \frac{r}{L} - 3\log(L) - \log(2).
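
(If you want to convince yourself that this really is a normalized density on r > 0, a quick numerical check with scipy, here with an arbitrary L = 2:)

import numpy as np
from scipy.integrate import quad

L = 2.0

def logp(r):
    return 2*np.log(r) - r/L - 3*np.log(L) - np.log(2)

# exp(logp) should integrate to 1 over r > 0
total, _ = quad(lambda r: np.exp(logp(r)), 0, np.inf)
print(total)  # ~1.0, up to quadrature error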

So your code could be

import pymc3 as pm

def custom_lpdf(L):
    def lpdf(r):
        return (2*pm.math.log(r)) - (r/L) - (3*pm.math.log(L)) - pm.math.log(2)
    return lpdf

with pm.Model() as model:
    r = pm.DensityDist('r', custom_lpdf(2), testval=1)
    trace = pm.sample()

The drawn samples can then be found in trace['r']. But in my test this produced a lot of divergent samples, which may introduce a bias into your results. You can fix that by telling the Hamiltonian Monte Carlo sampler to draw its values in a transformed space, where the derivatives are nicer, by writing

import pymc3 as pm

def custom_lpdf(L):
    def lpdf(r):
        return (2*pm.math.log(r)) - (r/L) - (3*pm.math.log(L)) - pm.math.log(2)
    return lpdf

with pm.Model() as model:
    # sampling in log space removes the hard boundary at r = 0
    # and gives the sampler nicer derivatives
    r = pm.DensityDist('r', custom_lpdf(2), testval=1,
                       transform=pm.distributions.transforms.Log())
    trace = pm.sample()

If you want to read about divergent samples this may be a good starting point: https://docs.pymc.io/notebooks/Diagnosing_biased_Inference_with_Divergences.html
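
(To see how many divergent transitions you actually got, you can pull them out of the sampler statistics, e.g.:)

# number of divergent transitions recorded by NUTS
n_divergent = trace['diverging'].sum()
print('{} divergences out of {} draws'.format(n_divergent, len(trace) * trace.nchains))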

Edit: @junpenglao beat me to it^^

Edit2: Changed -2 to -pm.math.log(2) as @ojhall94 pointed out correctly. Note that this constant should not matter and can probably be omitted.


I like your logp better as it operates in log space directly!


Thank you. I used the \log of the given p(r|L). But this differs from what is specified in custom_lpdf.

I am not sure which log-probability @ojhall94 is really looking for.


I think it is correct, as it is conditioned on L as well in the Stan code snippet.


Hi both,

Huge thanks for the swift and clear responses! I had bits and pieces of the puzzle, but your examples have made it clear what I should be doing.

If I understand correctly the log-probability of your modified exponential distribution is

That’s right (although the 2 should be logged, I think; though as @Dominik points out, its inclusion doesn’t matter), and it’s what I do in practice.

I think it is correct, as it is conditioned on L as well in the Stan code snippet.

This is correct: it’s a prior on distances to observed stars, where L is the distance scale and r are the distances. In practice, I have L as a hyperparameter and draw the latent parameters r from the prior function p(r | L).

Using your examples, my code now looks like this:
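
(Essentially the two examples above combined; the HalfNormal hyperprior on L here is a placeholder choice, not necessarily the one I use:)

import pymc3 as pm

def custom_lpdf(L):
    def lpdf(r):
        # log p(r|L) = 2 log(r) - r/L - 3 log(L) - log(2)
        return 2*pm.math.log(r) - r/L - 3*pm.math.log(L) - pm.math.log(2)
    return lpdf

with pm.Model() as model:
    # hyperprior on the distance scale L (placeholder)
    L = pm.HalfNormal('L', 1., testval=0.1)
    # latent distances r, drawn from the custom prior p(r|L)
    r = pm.DensityDist('r', custom_lpdf(L),
                       transform=pm.distributions.transforms.log,
                       testval=0.1)
    trace = pm.sample()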

and works great! The distribution of r has a mode at 2L, as it should (setting \frac{d}{dr}\left[2\log(r)-\frac{r}{L}\right]=\frac{2}{r}-\frac{1}{L}=0 gives r=2L). Thanks so much!
