Sampling crashes with "The derivative of RV [whatever] is zero" when using a custom likelihood defined with pm.Potential

It seems that your logp implementation breaks the gradient — evaluating it at the MLE gives nan derivatives:

import theano.tensor as tt

# Gradients of the log-likelihood w.r.t. the three free RVs
dgamma, dzeta, dxi = tt.grad(logl, [pm_gamma, pm_zeta, pm_xi])

# Evaluate logp and its gradients at the MLE point
inputdict = {pm_gamma: mle['x'][0], pm_zeta: mle['x'][1], pm_xi: mle['x'][2]}
print(logl.eval(inputdict), dgamma.eval(inputdict), dzeta.eval(inputdict), dxi.eval(inputdict))
# ==> -418.7280097633918 nan nan nan

The nan gradients most likely come from this line:

logls = pm.math.log(qxus ** changes) - (qxus * dts[:,None])
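`pm.math.log(qxus ** changes)` is fragile: if `qxus ** changes` underflows to zero (small base, large exponent), the log returns -inf and the gradient through it is nan. The algebraically identical form `changes * pm.math.log(qxus)` avoids the underflow as long as `qxus > 0`. A quick numpy sketch of the numerical issue (the `qxus`/`changes` values below are made up for illustration, not taken from your model):

```python
import numpy as np

qxus = np.array([1e-3, 1e-2, 0.5])
changes = np.array([200.0, 3.0, 1.0])

# Direct form: 1e-3 ** 200 underflows to 0.0 (smallest double is ~1e-308),
# so log() returns -inf, and the gradient through it would be nan.
with np.errstate(divide='ignore'):
    direct = np.log(qxus ** changes)

# Algebraically identical form that stays finite for qxus > 0:
stable = changes * np.log(qxus)

print(direct)   # first entry is -inf
print(stable)   # all entries finite (first is 200 * log(1e-3) ≈ -1381.55)
```

So try replacing the line with `logls = changes * pm.math.log(qxus) - (qxus * dts[:, None])` and re-check the gradients at the MLE point as above.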