I don't quite understand your motivation, but if you want to treat random variables as observables, there is no direct way to do it. One way people seem to have done it is with potentials:
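The gist is that `pm.Potential` adds an arbitrary log-probability term to the model's joint density, so nothing stops that term from being a likelihood evaluated at another random variable instead of at fixed data. A minimal sketch, assuming a current PyMC version (`pm.logp` / `pm.Potential`); the Normal distributions and the names `a`, `b`, `rv` are placeholders I made up, not your model:

```python
import pymc as pm

with pm.Model() as model:
    # priors (placeholders)
    a = pm.Normal("a", mu=0.0, sigma=1.0)
    b = pm.Normal("b", mu=0.0, sigma=1.0)

    # rv is itself a random variable whose distribution depends on a
    rv = pm.Normal("rv", mu=a, sigma=1.0)

    # treat rv as if it were the "observed" value of the likelihood:
    # the Potential just adds this log-density term to the joint logp
    pm.Potential("p1", pm.logp(pm.Normal.dist(mu=b, sigma=0.5), rv))

    idata = pm.sample()
```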
I don't know whether there are any newer methods for this. As for what you are asking: are you imagining a situation in which you try to do all of this on top of the old model? Something like:
```python
with pm.Model() as model:
    # previous model's priors (b, a, w) and likelihoods (P1, P2)
    # RV = random "observables" sampled from some distribution that depends on a
    # P1(Ng=RV | a, b, w) as the likelihood
```
If this is what you want to do then, in my (very limited) Bayesian modelling experience, that is a very unusual shape for a model.
On the other hand, if you want to sample from a distribution first and then use that "static" data to train your model, I imagine that wouldn't be very different from your first model (modulo the details of constructing that distribution)? Is the question how to construct `Pmarginal`?
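If that is the goal, a rough sketch of the workflow, again with made-up placeholders (the Normal below stands in for however `Pmarginal` is actually constructed, and `pm.draw` is just one way to take forward samples):

```python
import pymc as pm

# Step 1: draw the "static" synthetic data once, outside the model;
# the Normal here is a stand-in for Pmarginal.
fake_Ng = pm.draw(pm.Normal.dist(mu=1.0, sigma=2.0), draws=100)

# Step 2: fit the original model, with those draws as ordinary observed data.
with pm.Model() as model:
    a = pm.Normal("a", mu=0.0, sigma=1.0)
    pm.Normal("Ng", mu=a, sigma=2.0, observed=fake_Ng)
    idata = pm.sample()
```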