Model profiling: how-to and configuration

Hi all,

I haven’t done model profiling in PyMC before and was at first stumped trying to follow the docs. The current notebook instructs us to use model.profile(model.logpt), but doing so (and likewise with logpt replaced by logp) fails with AttributeError: 'function' object has no attribute 'owner'.

Looking into the tests in v4, it looks like the API has changed somewhat: logp is now a method, and we should call logp() to pass the resulting TensorVariable to profile. My first question is: is it correct to run profiling as follows: model.profile(model.logp())?

Hopefully so :slight_smile:. This leads to my second question, which may be a bit more of an edge case. Does anyone have recommendations for profiling a model over many (possibly multidimensional) inputs? I ask because I’m working with a model whose logp() performance varies depending on where in parameter space the input point lies. It would be great to profile the model with each input point randomly sampled (e.g. drawn from the priors, or from some arbitrary function) in order to better measure the model’s average performance. Right now, though, it seems model.profile() only works with one fixed point.

Many thanks –

Hi @covertg, were you able to sort out your profiling issue? I’m trying to make sense of the aesara/PyMC profiling API over here, based on the aesara profiling tutorial, but to no avail. If you were able to get it to work, do you have a code example? Any help you can provide would be appreciated! Much obliged.

I have a notebook showing profiles for an aesara function as well as for PyMC logp and dlogp functions, perhaps it might be helpful?


Great, thank you @jessegrabowski! It’s helpful to have more examples like this. I’ll shoot you a message if I have any questions. :pray:
