Thank you for all the hard work you have put into providing us with a powerful and flexible Bayesian inference framework in Python! Looking forward to seeing what future developments PyMC has in store for us!

I am trying to model, as accurately as possible, the process behind the signal measured with a light detector that I need to characterize. The number “Ne-” of photoelectrons generated each time the process happens follows a Poisson law, and each photoelectron then experiences a *different* multiplication gain drawn from a Gamma law. Since the summed charge of Ne- independent Gamma gains is itself Gamma-distributed with a shape proportional to Ne-, the shape of the Gamma distribution is not fixed in advance but depends on Ne-. Pending the implementation of dynamic shaping, the trick presented here (Hierarchical changepoint detection - #5 by ckrapu), based on dynamic indexing, should address this first challenge.
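For concreteness, here is a small NumPy sketch of the generative process I have in mind (the parameter values `mu`, `alpha`, `theta` are made up purely for illustration); it relies on the fact that the sum of Ne- iid Gamma(alpha, theta) gains is itself a Gamma(Ne- * alpha, theta) variable:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, alpha, theta = 3.0, 10.0, 0.5   # assumed values, for illustration only

# N ~ Poisson(mu) photoelectrons per event; each one gets an independent
# Gamma(alpha, theta) gain, so the summed charge conditional on N > 0 is
# Gamma(N * alpha, theta) -- the Gamma shape depends on the latent count.
n_events = 100_000
n_pe = rng.poisson(mu, size=n_events)
draws = rng.gamma(np.maximum(n_pe, 1) * alpha, theta)  # guard shape=0 draws
signal = np.where(n_pe > 0, draws, 0.0)                # zero charge if no PE

# Sanity check: for this compound law, E[signal] = mu * alpha * theta (= 15 here)
print(signal.mean())
```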

Q1- Sorry if this is a redundant question, but could you please confirm that, at the moment, there is no other way to get around this dynamic shaping?

Then the analogue signal is digitized (~binned) by an ADC when the detector is read out. I was thinking of modelling this effect as follows:

1- Either by applying the “pymc.math.floor” function directly, provided that PyMC (via aePPL) can derive the logp of this operator, which seems to be the case according to (Transforms),

2- Or, in the same vein as (Estimating parameters of a distribution from awkwardly binned data — PyMC example gallery), by cutting the support into bins, computing each bin’s probability as a difference of the cdf (via the logcdf), and then matching the data to a multinomial distribution.
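To make sure I understand the two options, here is how I would prototype them outside PyMC with SciPy (the Gamma(20, scale=0.5) signal distribution, the unit ADC step, and the bin edges are all assumptions for illustration): option 1 reduces to P(floor(X) = k) = F(k+1) - F(k), and option 2 turns the same cdf differences into bin probabilities scored by a multinomial:

```python
import numpy as np
from scipy import stats

# Assumed analogue-signal distribution and a unit ADC step, illustration only.
dist = stats.gamma(a=20, scale=0.5)

# Option 1: likelihood of the floored (digitized) value.
# P(floor(X) = k) = F(k + 1) - F(k), a difference of cdf values -- the same
# quantity an automatically derived logp for floor would have to compute.
def floor_logp(k):
    return np.log(dist.cdf(k + 1) - dist.cdf(k))

# Option 2: choose bin edges, convert cdf differences into bin probabilities
# (with an overflow bin so they sum to 1), then score bin counts with a
# multinomial likelihood.
edges = np.arange(0, 31)                     # assumed bin edges
probs = np.diff(dist.cdf(edges))
probs = np.append(probs, 1.0 - probs.sum())  # overflow bin

rng = np.random.default_rng(1)
samples = dist.rvs(10_000, random_state=rng)
binned = np.minimum(np.floor(samples).astype(int), len(probs) - 1)
counts = np.bincount(binned, minlength=len(probs))
loglik = stats.multinomial.logpmf(counts, n=counts.sum(), p=probs)
```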

Q2- Would the first option, which I prefer for its simplicity, work?

Q3- I saw the use of the “rv_register” function in a video from Ricardo. Can this function be used in the same way as the “CustomDist” function to sample and constrain a model with observed data?

Q4- Is it possible to translate the PyTensor graph into JAX even if the sampler used is not written in JAX? I don’t think I’ve seen an example of this scenario.

Q5- Is it possible to sample on more than one CPU core when there is a JAX jitted and vectorised function wrapped/encapsulated in a PyTensor Op in the model graph? This example (How to wrap a JAX function for use in PyMC — PyMC example gallery) suggests that it is not possible.

Q6- Out of curiosity, is PyMC still connected to the aeMCMC project, or to a similar project, that promises to take PyMC to the next level with graph optimization of the combined/full {model logp + sampler} graph, automatic sampler selection, model reparameterization/marginalization, …?

(If I may say so, there is currently a great deal of development (PyTensor, JAX, DaCe, Jul**, etc.) - initially stimulated in part by the rise of neural networks - around graph representation, high- and low-level optimization, algebraic simplification, auto-differentiation, parallelization, JIT compilation, and so on. Of course, it is not PyMC’s role to provide an in-depth tutorial on these concepts. That said, it would probably be very useful, especially for someone unfamiliar with them, to explain where PyTensor and PyMC stand, both to help users make an informed choice between these different tools and to get the most out of PyMC. For example, unless I’m mistaken, the automatic derivation of the logp thanks to aePPL is a truly unique feature of PyMC that should perhaps be emphasized more strongly.)

Thank you for your help and your time.