I haven’t done model profiling in PyMC before and was at first stumped when trying to follow the docs. The current notebook instructs us to use
model.profile(model.logpt), but doing so (and likewise with logpt replaced by logp) fails with
AttributeError: 'function' object has no attribute 'owner'.
Looking into the tests in v4, it seems the API has changed somewhat, and we should call model.logp() to get the TensorVariable to pass to profile(). My first question is: is it correct to run profiling as follows:
Hopefully so. This leads to my second question, which may be a bit more of an edge case: does anyone have recommendations for profiling a model over many (possibly multidimensional) inputs? I ask because I’m working with a model whose logp() performance varies depending on where in parameter space the input point lies. It would be great to profile the model with each input point sampled randomly (e.g. from the priors, or from some arbitrary function) in order to better measure the model’s average performance. As far as I can tell, though, model.profile() only evaluates at one fixed point.
Many thanks –