I’m trying to achieve a similar thing in the context of a Gamma GLM.
I’ve followed this topic to create a potential which I can weight.
I’ve had an idea about how to sample from the posterior predictive, but I’m not convinced it’s correct: create a second model that replaces the potential with an observed Gamma likelihood, and use that model only to sample from the posterior predictive.
For example, if this is the model I’m using for sampling:
```python
# x = regressors
# y = target
# w = weights
with pm.Model() as m1:
    α = pm.Normal("α", 0, 1)
    β = pm.Normal("β", 0, 1)
    shape = pm.Uniform("shape", 0, 100)
    # Log-link function
    μ = at.exp(α + β * x)
    # Weighted Gamma log-likelihood (mean μ, shape parameterization)
    logp = w * pm.logp(pm.Gamma.dist(alpha=shape, beta=shape / μ), y)
    potential = pm.Potential("potential", logp)
    trace = pm.sample()
```
and then use this one for prediction:
```python
with pm.Model() as m2:
    α = pm.Normal("α", 0, 1)
    β = pm.Normal("β", 0, 1)
    shape = pm.Uniform("shape", 0, 100)
    # Log-link function
    μ = at.exp(α + β * x)
    pm.Gamma("y_obs", alpha=shape, beta=shape / μ, observed=y)

with m2:
    trace.extend(pm.sample_posterior_predictive(trace))
```
Would that be a correct approach? I’m aware that the weights aren’t taken into account in the posterior predictive, but that makes sense: I’m giving more weight to certain observations at the fitting stage, not at the prediction stage.
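For reference, the weighted log-likelihood that the potential contributes can be checked outside PyMC with SciPy. This is a minimal NumPy/SciPy sketch (toy data, not the model above) of the mean/shape parameterization, where Gamma(alpha=shape, beta=shape/μ) has mean μ, i.e. scale = μ/shape:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical toy data standing in for the regressors/target/weights
x = rng.normal(size=5)
alpha_, beta_, shape_ = 0.3, 0.7, 2.0
mu = np.exp(alpha_ + beta_ * x)           # log-link mean
y = rng.gamma(shape_, mu / shape_)        # Gamma(shape, scale=mu/shape) has mean mu
w = np.array([1.0, 2.0, 1.0, 0.5, 1.0])   # observation weights

# Per-observation Gamma log-likelihood; alpha=shape, beta=shape/mu <=> scale=mu/shape
logp = stats.gamma.logpdf(y, a=shape_, scale=mu / shape_)

# The potential adds the *weighted* sum of these terms to the model log-probability
weighted_logp = np.sum(w * logp)
print(weighted_logp)
```

A weight of 2 on an observation is equivalent to seeing that observation twice, which is why the weights belong in the fitting model but not in the predictive one.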