Custom likelihood with inverse CDF

I am trying to do inference in a simple model with a custom likelihood function. The difficulty is that this likelihood contains inverse CDFs, and I am not sure how to implement that in PyMC3.
Mock data:

import numpy as np
from scipy import stats

Y = np.random.beta(3, 50, size=60)
invN_DR = stats.norm.ppf(Y)

Where stats comes from the scipy package.
Implementation of the inverse CDF function to be used in the likelihood:

@as_op(itypes=[tt.dscalar], otypes=[tt.dscalar])
def invN(value):
    return stats.norm.ppf(value)


def logp(rho, PD):
    return 0.5*Y.size*tt.log((1-rho)/rho) + 0.5*tt.sum(invN_DR**2-(tt.sqrt(1-rho)*invN_DR-invN(PD))**2/rho)


with pm.Model() as model:
    # Priors    
    rho = pm.Uniform('rho', -1, 1)
    PD = pm.Beta('PD', alpha=1, beta=1)
    # Likelihood
    pm.DensityDist('likelihood', logp, observed={'rho': rho, 'PD': PD})

I get a bunch of errors:

AttributeError: 'FromFunctionOp' object has no attribute 'grad'

During handling of the above exception, another exception occurred:
NotImplementedError: input nd

During handling of the above exception, another exception occurred:
NotImplementedError: input nd
Apply node that caused the error: InplaceDimShuffle{x}(FromFunctionOp{invN}.0)
Toposort index: 11
Inputs types: [TensorType(float64, scalar)]
Inputs shapes: [()]
Inputs strides: [()]
Inputs values: [0.0]
Outputs clients: [[Elemwise{Composite{((i0 * i1) - i2)}}(InplaceDimShuffle{x}.0, TensorConstant{[-1.938026...02390429]}, InplaceDimShuffle{x}.0)]]

Help is much appreciated.

I don’t think you can do that with the @as_op decorator, since you need to implement the gradient (which is what Theano is complaining about). So you may need to implement your function as a full theano.Op class (see the extending_theano documentation).

The derivative of the inverse CDF is, quite nicely, 1/PDF evaluated at the inverse, i.e. d/dx F⁻¹(x) = 1 / f(F⁻¹(x)); so it’s just a matter of filling in some Python boilerplate.
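As a quick sanity check of that identity (plain NumPy/SciPy, no Theano needed), a central finite difference on the standard normal ppf matches the closed form:

```python
import numpy as np
from scipy import stats

x = 0.3
eps = 1e-6

# Finite-difference derivative of the inverse CDF (ppf) of the standard normal
fd = (stats.norm.ppf(x + eps) - stats.norm.ppf(x - eps)) / (2 * eps)

# Closed form: d/dx ppf(x) = 1 / pdf(ppf(x))
closed = 1.0 / stats.norm.pdf(stats.norm.ppf(x))

assert np.isclose(fd, closed, rtol=1e-5)
```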

Are you aware of any simple example for doing that, e.g. for a normal distribution?

Look at DoubleOp in the extending_theano link.

You should be able to replace x * 2 with norm.ppf(x) (in perform), and output_grads[0]*2 with output_grads[0]/norm.pdf(norm.ppf(x)) (in grad), where x is the Op’s input. Note that grad must return a symbolic Theano expression, so the normal pdf there has to be written with Theano operations rather than SciPy.


Thank you.
The problem with that DoubleOp is that even its own testing example from the website does not run without errors. Given that, I don’t know how I can expect it to run properly after my changes… :confused: