A distribution class must have the following structure:
class Foo(Discrete):
    def __init__(self, ...):
        # Set your distribution's parameters
        ...
        super().__init__(...)

    def random(self, point=None, size=None):
        ...

    def logp(self, value):
        ...
random isn't strictly necessary, whereas logp is mandatory. random generates an array of samples from the distribution; you can look at the implementation of Geometric, or of any other univariate distribution, to see the common structure:
def random(self, point=None, size=None):
    parameters = draw_values(self.parameters_list, point=point, size=size)
    return generate_samples(your_rvs, parameters, ...)
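For reference, here is a minimal sketch of how this pattern looks in practice, modeled on PyMC3's Geometric (the helper imports and the np.random.geometric call mirror what the library does internally, but treat the details as illustrative rather than canonical):

    import numpy as np
    from pymc3.distributions.distribution import draw_values, generate_samples

    def random(self, point=None, size=None):
        # Resolve the distribution's parameters to concrete values,
        # either taken from `point` or drawn from their own distributions.
        p = draw_values([self.p], point=point, size=size)[0]
        # Delegate the actual draws to a NumPy sampler with those values.
        return generate_samples(np.random.geometric, p,
                                dist_shape=self.shape, size=size)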
random is only used when sampling from the prior or from the posterior predictive (the latter only happens if the observations follow your custom distribution).
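As a hedged illustration of those two code paths, using the built-in Geometric as a stand-in for a custom distribution and made-up data (the sample_prior_predictive and sample_posterior_predictive names assume a reasonably recent PyMC3):

    import pymc3 as pm

    with pm.Model():
        p = pm.Beta("p", alpha=1.0, beta=1.0)
        y = pm.Geometric("y", p=p, observed=[1, 2, 4, 1, 3])

        # Both calls below end up invoking the distribution's random:
        prior = pm.sample_prior_predictive(samples=500)
        trace = pm.sample(1000)
        post = pm.sample_posterior_predictive(trace, samples=500)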
The logp method returns a theano tensor, or in other words, a symbolic expression that specifies how to compute the log likelihood that an observation comes from your distribution. This means that you don't return a numerical value (no eval needed). The distribution's logp is then wrapped by other methods in the Model and in FreeRV, which effectively compile the tensor into a callable. These are then used under the hood to run SMC, HMC, Metropolis, or any other step method in the main sample function.
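Putting the pieces together, a sketch of a complete custom distribution might look like the following Geometric-like example (a sketch under the assumptions above: bound comes from PyMC3's dist_math and pushes the log likelihood to -inf outside the support, and the dtype/broadcasting details real implementations handle are omitted):

    import numpy as np
    import theano.tensor as tt
    import pymc3 as pm
    from pymc3.distributions.dist_math import bound
    from pymc3.distributions.distribution import draw_values, generate_samples

    class MyGeometric(pm.Discrete):
        def __init__(self, p, *args, **kwargs):
            super().__init__(*args, **kwargs)
            # Store the parameter as a theano tensor
            self.p = tt.as_tensor_variable(p)
            self.mode = 1

        def random(self, point=None, size=None):
            p = draw_values([self.p], point=point, size=size)[0]
            return generate_samples(np.random.geometric, p,
                                    dist_shape=self.shape, size=size)

        def logp(self, value):
            p = self.p
            # Symbolic expression for log P(value) = log p + (value - 1) log(1 - p)
            return bound(tt.log(p) + (value - 1) * tt.log1p(-p),
                         0 <= p, p <= 1, value >= 1)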