Implementing rounding (by manual integration) more efficiently

For benchmarking/debugging, though, you need the “raw” function, which is saved in the .f attribute.

Thanks for pointing that out. It didn’t change the outcome, however: dlogp calls are still ~3500 times slower with my custom distribution.
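
For reference, this is roughly how I’m calling and timing the raw function (a simplified toy sketch; I’m assuming the wrapper just unpacks the point dict into keyword arguments for .f):

```python
import timeit
import pymc as pm

with pm.Model() as model:
    x = pm.Normal("x")  # toy model standing in for the real one

dlogp_fn = model.compile_dlogp()   # wrapper that takes a point dict
point = model.initial_point()

raw_fn = dlogp_fn.f                # underlying compiled pytensor function
raw_fn(**point)                    # same result as dlogp_fn(point)

n = 1000
print(timeit.timeit(lambda: raw_fn(**point), number=n) / n)  # seconds per call
```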

One other note about timing: if you are compiling to a non-standard backend (like jax or numba), make sure you time the jitted function.
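
If I understood correctly, that means something like the following (toy sketch again; I’m assuming mode is simply forwarded to pytensor.function, and that the first call triggers the JIT compilation and should be excluded from the timing):

```python
import timeit
import pymc as pm

with pm.Model() as model:
    x = pm.Normal("x")  # toy model standing in for the real one

# compile dlogp to the JAX backend (mode="NUMBA" would be analogous)
dlogp_jax = model.compile_dlogp(mode="JAX")
point = model.initial_point()

dlogp_jax(point)  # warm-up call: triggers the JIT compilation
n = 1000
print(timeit.timeit(lambda: dlogp_jax(point), number=n) / n)  # seconds per call, post-JIT
```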

I tried this, but I cannot compile my custom distribution’s dlogp. I get NotImplementedError: Dispatch not implemented for Scalar Op gammainc_grad_b. Sampling works fine, though, and I don’t get any errors. Does nuts_sampler="blackjax" use something else to calculate the gradients, or are there fallbacks for missing implementations?

Another useful thing to do is to enable the profiler, …, then look at f.profile.summary().

Is there a way to get more fine-grained info than in the image below? If I understood correctly (by comparing to the appended graph.txt), the profiling currently tells me that almost all of the time is spent calculating the likelihoods and not much else.


Attachment: graph.txt (8.8 KB)
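
For reference, this is roughly how I’m producing that profile (simplified toy sketch; I’m assuming profile=True is simply forwarded through to pytensor.function):

```python
import pymc as pm

with pm.Model() as model:
    x = pm.Normal("x")  # toy model standing in for the real one

dlogp_fn = model.compile_dlogp(profile=True)  # collect timing stats on every call
point = model.initial_point()

for _ in range(1000):   # accumulate statistics over many calls
    dlogp_fn(point)

dlogp_fn.f.profile.summary()  # per-Op / per-node breakdown of where the time goes
```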

Variable transformations are only for the logp graph, so I’m not sure what you mean.

Sorry for the confusion, I’m kinda confused myself. What I was trying to ask is this: I’m using the custom distribution in my likelihood, not in the priors, so I don’t quite understand what you meant by “manually specify a transformation to a pm.CustomDist”.

I’m using a mean-variance parametrization for my underlying gamma distribution, and I have defined normal and half-normal priors for mu and sigma, respectively. mu is unconstrained and does not get transformed; sigma is transformed to sigma_log__, as it should be. Are you talking about some other transformations besides these, or am I completely off track? Should there be other transformations in addition to the obvious posterior → log-posterior transformation?
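
For concreteness, here is a stripped-down sketch of the kind of model I mean (the rounded-gamma logp below is only an illustrative stand-in, not my exact implementation):

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt

# toy rounded observations standing in for the real data
data = np.round(np.random.default_rng(0).gamma(shape=4.0, scale=2.0, size=200))

def rounded_gamma_logp(value, mu, sigma):
    # mean/variance -> shape/rate parametrization of the gamma
    alpha = (mu / sigma) ** 2
    beta = mu / sigma**2
    # mass of the interval (value - 0.5, value + 0.5] via the regularized
    # lower incomplete gamma function (i.e. the gamma CDF)
    upper = pt.gammainc(alpha, beta * (value + 0.5))
    lower = pt.gammainc(alpha, beta * pt.maximum(value - 0.5, 0.0))
    return pt.log(upper - lower)

with pm.Model() as model:
    mu = pm.Normal("mu", 5.0, 2.0)       # unconstrained, so no automatic transform
    sigma = pm.HalfNormal("sigma", 2.0)  # automatically transformed to sigma_log__
    pm.CustomDist("y", mu, sigma, logp=rounded_gamma_logp, observed=data)
```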