Ahh, thank you very much for this clarification. I now see what the call signature of logp needs to look like, and I understand the rationale for separating the observed data from the model parameters in that signature. I confess I had come across the docstring but somehow missed the meaning of the “value” argument there.
To rephrase for anyone else who comes along with a similar question: in the original question, I had tried to embed the observed data in the Op itself. It seems the recommended approach is instead to provide the data via the observed kwarg of DensityDist. This also means updating the input TensorTypes of the logp Op, with the first input to the Op now representing the observed data. Concretely, something like the sketch below.
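Here is a minimal sketch of the pattern as I understand it, assuming a PyMC v4 / Aesara setup; the LogLike name, the mu/sigma parameters, and the Gaussian log-density are just illustrative stand-ins for my real likelihood:

```python
import numpy as np
import aesara.tensor as at
from aesara.graph.basic import Apply
from aesara.graph.op import Op
import pymc as pm


class LogLike(Op):
    """Black-box log-likelihood whose first input is the observed data."""

    def make_node(self, value, mu, sigma):
        # The first input's TensorType now describes the observed data;
        # the remaining inputs are the model parameters.
        inputs = [at.as_tensor_variable(v) for v in (value, mu, sigma)]
        return Apply(self, inputs, [at.dscalar()])

    def perform(self, node, inputs, outputs):
        value, mu, sigma = inputs
        # Illustrative Gaussian log-density; stands in for the real likelihood.
        logp = -0.5 * np.sum(
            np.log(2.0 * np.pi * sigma**2) + ((value - mu) / sigma) ** 2
        )
        outputs[0][0] = np.array(logp)


loglike_op = LogLike()


def logp(value, mu, sigma):
    # PyMC hands the observed data in as `value`, followed by the dist params.
    return loglike_op(value, mu, sigma)


data = np.random.default_rng(0).normal(loc=1.0, scale=2.0, size=50)

with pm.Model():
    mu = pm.Normal("mu", 0.0, 5.0)
    sigma = pm.HalfNormal("sigma", 5.0)
    # The data flows in through `observed` instead of being baked into the Op.
    pm.DensityDist("obs", mu, sigma, logp=logp, observed=data)
```

With this arrangement the same Op can be pointed at new data simply by changing what gets passed to observed. (Note that the Op above defines no grad(), so sampling would need a gradient-free step method such as Slice or Metropolis.)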
If it’s alright to ask a follow-up question here: is it a poor design choice, or poor statistical practice, to embed the data in the likelihood function? In my current project I’ve chosen to do so because calculating my logp requires converting the raw data (a numpy array) into a custom object, and this conversion is somewhat costly. If I can avoid repeating that construction every time the sampler calls logp, that would be quite nice. As far as I can tell, Aesara doesn’t support tensor variables of an “object” dtype, so I simply embedded the Python object in the logp Op itself (sketched below). Obviously this ties our hands when it comes to evaluating the model parameters against newly observed data. I’m still a relative novice at working with Bayesian models, so I don’t have a good sense of how significant that limitation would be for, e.g., model assessment.
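For concreteness, here is roughly the shape of my embedded-data version; MyCustomObject and its log_density method are hypothetical placeholders for my real conversion and likelihood code:

```python
import numpy as np
import aesara.tensor as at
from aesara.graph.op import Op


class MyCustomObject:
    """Hypothetical stand-in for the costly custom data structure."""

    def __init__(self, raw_data):
        # Pretend this preprocessing step is expensive.
        self.data = np.asarray(raw_data, dtype=float)

    def log_density(self, mu, sigma):
        # Stand-in Gaussian log-density over the preprocessed data.
        return -0.5 * np.sum(
            np.log(2.0 * np.pi * sigma**2) + ((self.data - mu) / sigma) ** 2
        )


class EmbeddedDataLogLike(Op):
    # Only the model parameters are Op inputs; the data is baked in below.
    itypes = [at.dscalar, at.dscalar]  # mu, sigma
    otypes = [at.dscalar]

    def __init__(self, raw_data):
        # The costly raw-array -> custom-object conversion happens exactly
        # once here, instead of on every logp evaluation during sampling.
        self._converted = MyCustomObject(raw_data)

    def perform(self, node, inputs, outputs):
        mu, sigma = inputs
        outputs[0][0] = np.array(self._converted.log_density(mu, sigma))


loglike_op = EmbeddedDataLogLike(np.random.default_rng(1).normal(size=100))
# With no `observed` data left to pass, this would get wired into a model via
# something like pm.Potential("loglike", loglike_op(mu, sigma)), at the cost
# of tying the Op to this one dataset.
```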