Problem passing arguments with custom prior distribution

I’m fitting a simple model but using a custom prior distribution, so my model looks like:

import numpy as np
import pymc3 as pm

def distprior3(r, rlen):
    r, rlen = r.value, rlen.value
    if r > 0:
        prior = (1/(2*rlen**3) * r**2 * np.exp(-r/rlen))
    else:
        prior = 0
    return prior

def singlestar_distance(w, wsd, star):
    with pm.Model() as model:

        # Hyperprior on length scale (in pc)
        tmp = pm.Uniform('tmp', 0.1, 4000)

        # Distance prior
        r = pm.DensityDist('r', distprior3,
                observed={'r': (1/(w*1e-3)), 'rlen': tmp})
        r = r.data['r'].value

        # The data likelihood function
        loglikelihood = pm.Normal('dist', mu=1/r, sd=(wsd*1e-3) )

        trace = pm.sample(1000, tune=1000)

The problem is that when I try to pass the ‘tmp’ hyperprior to ‘r’, the model crashes with the error:

ValueError: Mass matrix contains zeros on the diagonal.

The derivative of RV tmp_interval__.ravel()[0] is zero.

When I just pass a number instead, like this:

        r = pm.DensityDist('r', distprior3,
                observed={'r': (1/(w*1e-3)), 'rlen': 1000})

it runs fine. Any suggestions as to what might be going wrong?

You should not extract the values with .value like that; pass the tensor directly to the function.
Also, if you were doing r = distprior3((1/(w*1e-3)), tmp), that’s different from a DensityDist! The first is a deterministic transformation, while the second contributes to the model’s log_prob. You should use the DensityDist.
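To make the distinction concrete, here is a minimal sketch (assuming PyMC3 with the Theano backend; the exponential-style log-density and the names scaled/obs are placeholders for illustration, not the prior from this thread):

import pymc3 as pm
import theano.tensor as tt

with pm.Model():
    rlen = pm.Uniform('rlen', 0.1, 4000)

    # A Deterministic just records a derived quantity; it adds nothing
    # to the model's log probability.
    scaled = pm.Deterministic('scaled', tt.log(rlen))

    # A DensityDist treats the function as a log-density: its value is added
    # to the model's log probability, and rlen is passed in as a tensor so
    # gradients with respect to it remain available to the sampler.
    obs = pm.DensityDist('obs',
            lambda r, rlen: -r/rlen - tt.log(rlen),
            observed={'r': 500.0, 'rlen': rlen})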

Thanks for the reply, but I’m still not sure what I should be doing. For reference, I’m following the example under ‘Custom Distributions’ here (https://docs.pymc.io/Probability_Distributions.html).

When I try to pass the tensor directly to the prior, such that the code looks like this:

def distprior3(r, rlen):
    if r > 0:
        prior = (1/(2*rlen**3) * r**2 * np.exp(-r/rlen))
    else:
        prior = 0
    return prior

def singlestar_distance(w, wsd, star):
    with pm.Model() as model:

        # Hyperprior on length scale (in pc)
        rlen = pm.Uniform('rlen', 0.1, 4000)

        # Distance prior
        dist = pm.DensityDist('dist', distprior3,
                observed={'r': (1/(w*1e-3)), 'rlen': rlen})

        # The data likelihood function
        loglikelihood = pm.Normal('dist', mu=1/r, sd=(wsd*1e-3))

        trace = pm.sample(1000, tune=1000)

The code does not run because the Python if/else in distprior3 cannot be evaluated on the tensor inputs. Can you advise further?

Try tt.switch; for more, see http://deeplearning.net/software/theano/tutorial/conditions.html
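
For example, a minimal sketch of how distprior3 could be rewritten with tt.switch (with the caveat that pm.DensityDist treats the function as a log-density, so this returns the log of the prior above rather than the density itself):

import numpy as np
import theano.tensor as tt

def distprior3(r, rlen):
    # log of (1/(2*rlen**3)) * r**2 * exp(-r/rlen), the prior used above
    logp = 2*tt.log(r) - r/rlen - 3*tt.log(rlen) - np.log(2.0)
    # tt.switch replaces the Python if/else, so the expression stays symbolic
    # and gradients with respect to rlen can reach the sampler
    return tt.switch(tt.gt(r, 0), logp, -np.inf)

With rlen passed in as the tensor (observed={'r': 1/(w*1e-3), 'rlen': rlen}), the derivative with respect to rlen should no longer vanish, which is what the mass-matrix error was complaining about.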