Posterior Predictive Check for DensityDist

How can I generate posterior predictive checks with a custom likelihood that is passed to DensityDist? I am currently getting the following error:

ValueError: Distribution was not passed any random method. Define a custom random method and pass it as kwarg random

What is the custom random method? Do you have any examples?

The likelihood I am using comes from the exponential family, and it's a simple univariate model. If I were to define it via the built-in Exponential distribution, with an added term via pm.Potential(), would this work for generating PPCs?
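For context, my reading of the error is that DensityDist wants a sampling function passed as the random kwarg alongside the logp. A rough sketch of what I think that looks like, with a toy exponential model made up purely for illustration:

import numpy as np
import pymc3 as pm
import theano.tensor as tt

data = np.random.exponential(scale=2.0, size=100)

with pm.Model() as model:
    lam = pm.HalfNormal('lam', 1.0)

    # Elementwise log-density of the observations given the parameters.
    def logp(value):
        return tt.log(lam) - lam * value

    # Called during posterior predictive sampling with a point (a dict of
    # parameter values from the trace) and a size; returns likelihood draws.
    def random(point=None, size=None):
        return np.random.exponential(scale=1.0 / point['lam'], size=size)

    obs = pm.DensityDist('obs', logp, random=random, observed=data)

Does that look like the intended pattern?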

Not the answer you’re looking for, but check out the random method for some of the distributions defined in pymc3 and see if that helps.

Thanks Dan. I did look into this, but without deep knowledge of the codebase it's hard to make sense of it. It would be great if there were a way to declare custom distribution classes that inherit all the methods, as an alternative to the DensityDist interface.

Is there a protocol for proposing a new distribution to be added to PyMC3?

I found this in some code I wrote a while back, so I can’t vouch for it 100%, but maybe it will give you an idea. You can inherit from pm.Continuous and define a logp and random method.

import numpy as np
import pymc3 as pm
import theano.tensor as T

from pymc3.distributions.dist_math import bound
from pymc3.distributions.distribution import draw_values, generate_samples
from pymc3.distributions.continuous import assert_negative_support


class CensoredExponential(pm.Continuous):
    """Exponential likelihood with right-censoring: `uncensored` is an
    indicator that is 1 when the event was observed and 0 when the
    observation was censored at `value`."""

    def __init__(self, lam, uncensored, *args, **kwargs):
        super(CensoredExponential, self).__init__(*args, **kwargs)
        self.lam = lam = T.as_tensor_variable(lam)
        self.uncensored = uncensored = T.as_tensor_variable(uncensored)
        self.mean = 1. / self.lam
        self.median = self.mean * T.log(2)
        self.mode = T.zeros_like(self.lam)
        self.variance = self.lam ** -2

        assert_negative_support(lam, 'lam', 'CensoredExponential')

    def random(self, point=None, size=None):
        # Draw a value of lam from `point` (a dict of parameter values taken
        # from the trace) and sample from the uncensored exponential.
        lam = draw_values([self.lam], point=point)[0]
        return generate_samples(np.random.exponential, scale=1. / lam,
                                dist_shape=self.shape,
                                size=size)

    def logp(self, value):
        # Observed events (uncensored == 1) contribute the full log-density,
        # log(lam) - lam * value; censored points contribute only the
        # log-survival term, -lam * value.
        lam = self.lam
        uncensored = self.uncensored
        return bound(uncensored * T.log(lam) - lam * value,
                     value >= 0, lam > 0)
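
If it helps, here's an untested sketch of how you might wire that class into a model and generate PPCs. The simulated censored data is made up purely for illustration, and in recent PyMC3 releases the PPC function is sample_posterior_predictive (older versions call it sample_ppc):

import numpy as np
import pymc3 as pm

# Simulate right-censored survival times: events past the censoring time
# are only known to exceed it, so uncensored is 1 for observed events.
true_times = np.random.exponential(scale=2.0, size=200)
censor_times = np.random.uniform(0., 5., size=200)
obs_value = np.minimum(true_times, censor_times)
uncensored = (true_times <= censor_times).astype('float64')

with pm.Model() as model:
    lam = pm.HalfNormal('lam', 1.0)
    y = CensoredExponential('y', lam=lam, uncensored=uncensored,
                            observed=obs_value)
    trace = pm.sample(1000, tune=1000)
    # Calls the custom random method above for each point in the trace.
    ppc = pm.sample_posterior_predictive(trace)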

Thanks Dan, this gives me a good starting point.