Sampling crashes with "The derivative of RV [whatever] is zero" when using a custom likelihood defined with pm.Potential

Hmmm, usually not having a gradient is not a deal breaker (e.g., see the comment in https://github.com/tensorflow/probability/pull/137#issuecomment-416427517), but in this case it certainly does not help with debugging.

[edit:] it is indeed a deal breaker here: the sampler needs a gradient for every RV.
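
In line with that TFP comment, a model without usable gradients can in principle still be sampled by assigning a gradient-free step method explicitly. A minimal sketch, assuming the model context the_model defined in the next edit:

with the_model:
    # Metropolis does not use gradient information, so it avoids
    # the zero-derivative check that NUTS trips over.
    step = pm.Metropolis()
    trace = pm.sample(2000, step=step)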

[edit 2]:
OK, so I figured out a version that works:

import pymc3 as pm
import theano.tensor as tt

# is_s, is_l, is_i, n_infected, dts, and changes are data arrays
# defined outside this snippet.
def logp(gamma, zeta, xi):
    # Rate for each observation, built from the three RVs.
    qxus = ((is_s * n_infected[:, None] * gamma) +
            (is_l * zeta) + (is_i * xi))
    # Keep the rate where a change was observed, and put 1. everywhere
    # else so that log(x) is zero for the unchanged entries.
    x = tt.zeros(qxus.shape)
    x = tt.inc_subtensor(x[changes], qxus[changes])
    x = tt.inc_subtensor(x[~changes], 1.)

    logls = pm.math.log(x) - (qxus * dts[:, None])
    return pm.math.sum(logls)

with pm.Model() as the_model:
    gamma = pm.HalfNormal("gamma", sd=1)
    zeta = pm.HalfNormal("zeta", sd=1)
    xi = pm.HalfNormal("xi", sd=1)

    # Passing the free RVs through observed keeps them in the graph of
    # the custom log-likelihood, so it can be differentiated with
    # respect to them.
    potential = pm.DensityDist("potential",
                               logp,
                               observed={"gamma": gamma,
                                         "zeta": zeta,
                                         "xi": xi})

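With the free RVs passed through observed, the gradient of the custom log-likelihood is available and the default NUTS sampler can be used. A minimal sketch (the draw and tuning counts are arbitrary):

with the_model:
    trace = pm.sample(draws=1000, tune=1000)
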
For the full debugging note, see:
