Hi all,
This is perhaps a non-standard question. I am building a model, and there is an unknown parameter Z. I have placed a prior over Z \sim N(0, 1). The model has several other parameters, \theta.
Usually, inference would compute p(Z, \theta | \mathcal{D}). However, even though statistically \mathcal{D} might provide evidence about the value of Z, I want to force the posterior to be the same as the prior, or alternatively, I want to compute p(\theta | \mathcal{D}, Z \sim N(0, 1)). The reason that I want to do this is that the model is mis-specified, and so I don’t think the evidence should be used to choose Z.
If I build my model as usual, the posterior on Z will not be equal to the prior, because \mathcal{D} does provide evidence. The most naive way of doing this would be to sample Z from p(Z), compute p(\theta | \mathcal{D}, Z) for each draw, and average the results. That would require many runs of NUTS, so I was wondering whether there is a way to do this in PyMC3?
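To make the averaging idea concrete, here is a toy sketch of what I mean. The model is a made-up conjugate stand-in (y_i \sim N(\theta + Z, 1), \theta \sim N(0, 10^2), Z \sim N(0, 1)), not my actual model — I chose it only because p(\theta | \mathcal{D}, Z) has a closed form, so the whole "sample Z from the prior, then sample \theta conditionally" loop can be done without NUTS:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical toy model (an illustrative stand-in, not the real model):
#   y_i ~ N(theta + Z, 1),  theta ~ N(0, 10^2),  Z ~ N(0, 1)
y = rng.normal(1.5, 1.0, size=50)
n = len(y)

# For fixed Z this model is conjugate, so p(theta | D, Z) is a known normal:
post_prec = n / 1.0 + 1.0 / 10.0**2   # likelihood precision + prior precision
post_sd = np.sqrt(1.0 / post_prec)

# "Cut" inference: draw Z from its prior (ignoring the data), then draw theta
# from the conditional posterior p(theta | D, Z), and pool the draws.
z_draws = rng.normal(0.0, 1.0, size=2000)
cond_means = (y.sum() - n * z_draws) / post_prec
theta_draws = rng.normal(cond_means, post_sd)

# The pooled theta_draws approximate the average of p(theta | D, Z) over the
# prior p(Z), while Z's "posterior" is its N(0, 1) prior by construction.
```

In the real model p(\theta | \mathcal{D}, Z) has no closed form, so each z_draw would need its own NUTS run — which is exactly what I'd like to avoid.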
I think this amounts to somehow removing Z from the computational graph, but I’m not sure how to do that.
Alternatively:
I want to set p(Z | \mathcal{D}) = p(Z), i.e., discard the evidence that \mathcal{D} provides about Z. Therefore,

p(\theta | \mathcal{D}) = \int p(\theta | \mathcal{D}, Z) \, p(Z) \, dZ,

which can be computed by sampling Z from the prior, performing NUTS sampling for that value, and then averaging this across a bunch of runs. I just want to do this by doing NUTS sampling once.
Thanks!