It seems there is no built-in way to count likelihood (and gradient) evaluations of a custom likelihood function.
@nouiz recommended a work-around wrapper in Theano:
```python
all_counters = []

def logpdf(x):
    s = theano.shared(0, name='function_calls')
    all_counters.append(s)
    s.default_update = s + 1
    ll = actual_custom_likelihood(x)
    return ll + s * 0

pm.DensityDist('x', logpdf, shape=shape_of_x)
trace = pm.sample()
```
With Metropolis this seems to work as expected: the shared variable `s` always holds 100 after 100 MCMC iterations. With NUTS there are more evaluations, but no compute-time penalty, which seems odd. The counter also does not distinguish between regular likelihood evaluations and gradient evaluations.
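For reference, outside of the compiled Theano graph one can count value and gradient calls separately with a plain-Python wrapper. This is only a sketch of the idea, not a pymc3 API: the class name `CountingLikelihood` and the toy Gaussian log-density are made up for illustration, and the wrapper would not plug directly into a Theano graph as-is.

```python
import numpy as np

class CountingLikelihood:
    """Hypothetical wrapper: keeps separate counters for log-likelihood
    evaluations and gradient evaluations. Not a pymc3/Theano API."""
    def __init__(self, logp, grad_logp):
        self.logp = logp
        self.grad_logp = grad_logp
        self.n_logp = 0   # number of value evaluations
        self.n_grad = 0   # number of gradient evaluations

    def value(self, x):
        self.n_logp += 1
        return self.logp(x)

    def grad(self, x):
        self.n_grad += 1
        return self.grad_logp(x)

# Toy example: unnormalized standard-normal log-density
ll = CountingLikelihood(
    logp=lambda x: -0.5 * np.sum(x ** 2),
    grad_logp=lambda x: -x,
)
x = np.zeros(3)
ll.value(x)
ll.grad(x)
ll.grad(x)
print(ll.n_logp, ll.n_grad)  # → 1 2
```

A gradient-based sampler would call `grad` more often than `value`, which is exactly the distinction the shared-counter trick above cannot make.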
Is there a more direct way to do this in pymc3? And is there a way to count likelihood evaluations and gradient evaluations separately?