I have a model with a random variable that is sampled from a uniform prior, with an entropy potential applied to transform that prior. Please see the code below.
var_alpha = theano.shared(value=1.5, borrow=False)
phi_e = pm.Uniform('phi_e', lower=lb_phi_e, upper=ub_phi_e, shape=ub_phi_e.size)
S1_phi_e = (phi_e / phi_e.sum() * pm.math.log(phi_e / phi_e.sum())).sum()
pm.Potential('S1_phi_e_pot', var_alpha * S1_phi_e)
I'm interested in sampling from the prior distribution to verify certain things about the model. However, pm.sample_prior_predictive and model.phi_e.random() both seem to return uniformly distributed samples, whereas I would expect the potential to be applied. If it's not applied (I suspect this is the case, based on some other posts here), how can I apply it to the random variables?
sample_prior_predictive relies only on each distribution's random method. This means that it ignores potentials, like the one you defined, because potentials only affect the model's logp. To draw samples that are distributed correctly according to your model's potentials, you have to use a different algorithm, such as MCMC, rejection sampling, or SMC.
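For intuition, rejection sampling can even be done by hand here, because the entropy potential is always non-positive, so exp(potential) ≤ 1 and the uniform prior itself works as the envelope. A minimal NumPy sketch (the bounds, dimension, and alpha are made-up stand-ins for lb_phi_e, ub_phi_e, and var_alpha):

```python
import numpy as np

rng = np.random.default_rng(0)
lb, ub = 0.0, 1.0  # hypothetical bounds, standing in for lb_phi_e / ub_phi_e
alpha = 1.5        # stands in for var_alpha
n_dim = 3

def log_potential(phi):
    # Entropy potential from the question: alpha * sum(p * log(p)),
    # with p the normalized phi. This is always <= 0.
    p = phi / phi.sum()
    return alpha * np.sum(p * np.log(p))

def rejection_sample(n_samples):
    # Since exp(log_potential) <= 1, proposing from the uniform prior and
    # accepting with probability exp(log_potential) is valid rejection sampling.
    out = []
    while len(out) < n_samples:
        phi = rng.uniform(lb, ub, size=n_dim)
        if np.log(rng.uniform()) < log_potential(phi):
            out.append(phi)
    return np.array(out)

draws = rejection_sample(2000)  # samples from the potential-tilted prior
```

The accepted draws favor low-entropy (concentrated) phi vectors, which is exactly the tilt the potential encodes; the plain uniform draws you were seeing ignore it.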
Luckily, pymc3 can help you with that. What you have to do is:
- Write your model but don't assign observations to any variable.
- Call pm.sample to generate samples using MCMC (NUTS would be used by default in your example) or SMC (which also works as a form of rejection sampling).
As with any use of MCMC, you will have to evaluate the convergence of the chains. Check the effective sample size, the R-hat, the dispersion of the chains, the divergences, and the energy landscape (the last two only apply to HMC methods like NUTS), plus any other check that suits your needs.
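As a rough illustration of one of those diagnostics, here is a simplified split-R-hat in plain NumPy (arviz.rhat computes a rank-normalized refinement of this; values near 1 suggest the chains agree, values well above 1 suggest they have not mixed):

```python
import numpy as np

def split_rhat(chains):
    # chains: array of shape (n_chains, n_draws). Split each chain in half,
    # then compare within-half variance to between-half variance
    # (Gelman-Rubin style).
    n_chains, n_draws = chains.shape
    half = n_draws // 2
    splits = chains[:, :2 * half].reshape(2 * n_chains, half)
    w = splits.var(axis=1, ddof=1).mean()        # within-chain variance
    b = half * splits.mean(axis=1).var(ddof=1)   # between-chain variance
    var_hat = (half - 1) / half * w + b / half   # pooled variance estimate
    return float(np.sqrt(var_hat / w))

rng = np.random.default_rng(1)
mixed = rng.normal(size=(4, 1000))            # chains exploring the same target
stuck = mixed + 2.0 * np.arange(4)[:, None]   # chains stuck at different means
```

split_rhat(mixed) comes out close to 1, while split_rhat(stuck) is far above it, which is the signal that would tell you not to trust those draws from the potential-tilted prior.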
What would be really ideal is if you could rewrite your potentials into a custom distribution with its own random method, so that the generated samples reflect them too. I know that this can't be done for many relevant models, like conditional random fields, so don't worry much about it.