Prior boundaries that are not constant but random variables

I know one can use the Bound method to impose constant constraints on priors. What I need is to impose a constraint relating the prior random variables to each other. I am wondering whether this is possible?

I used to do it this way:

import pymc3 as pm
import theano.tensor as tt
import matplotlib.pyplot as plt

with pm.Model():
    bound = pm.Uniform('b', -1., 0.)
    mu = pm.Normal('mu', 0., 1.)
    # clip mu from below at the (random) bound
    b_mu = pm.Deterministic('b_mu', tt.switch(tt.le(mu, bound), bound, mu))
    trace = pm.sample()

plt.scatter(trace['b'], trace['b_mu'])  # the Uniform variable is named 'b' in the trace

where b_mu would be constrained from below by bound. However, I am not completely sure this is correct, since it seems to make the prior improper. Maybe an additional constraint via pm.Potential is needed.
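For reference, this is roughly the pm.Potential alternative I had in mind (a sketch only): instead of clipping mu at bound it rejects draws with mu below bound, i.e. it truncates mu, which defines a different prior, and the hard -inf potential can make NUTS sampling awkward.

import numpy as np
import pymc3 as pm
import theano.tensor as tt

with pm.Model():
    bound = pm.Uniform('b', -1., 0.)
    mu = pm.Normal('mu', 0., 1.)
    # add -inf to the joint log-probability whenever mu < bound,
    # which truncates mu at the random bound instead of clipping it
    pm.Potential('mu_above_bound', tt.switch(tt.ge(mu, bound), 0., -np.inf))
    trace = pm.sample()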

It is working!

I am not an expert in Bayesian statistics, but I have read a recent paper (https://arxiv.org/abs/1712.03549) showing that when the prior is improper, MCMC sampling can sometimes still behave as if the posterior were proper (when it should not). Is there a way to check for this using PyMC3? I think a posterior predictive check (PPC) could not detect it?

Thanks a lot!

What I am concerned about is subtle bias. However, I do not know precisely how it would affect the estimation etc. I guess it depends on how the bounded prior is implemented in the model. Maybe the mathematicians can weigh in on this @aseyboldt, @colcarroll, @AustinRochford, @fonnesbeck etc.

So that shouldn’t be an improper distribution. You can write down the integral if you’re patient, or you can point out that your example is equivalent to

with pm.Model():
    bound = pm.Uniform('b', -1., 0.)
    mu = pm.Normal('mu', 0., 1.)
    # the clipped variable is just the maximum of bound and mu
    b_mu = pm.Deterministic('b_mu', tt.max([bound, mu]))
    trace = pm.sample(10000)

and so the pdf of b_mu is bounded above by the sum of the pdfs of bound and mu, so ∫ p(b_mu) ≤ ∫ [p(bound) + p(mu)] = 2 < ∞.
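To spell out where that bound comes from: bound and mu are independent in the prior, so writing $F$ for CDFs and $p$ for densities,

$$F_{\max}(x) = F_b(x)\,F_\mu(x) \quad\Rightarrow\quad p_{\max}(x) = p_b(x)\,F_\mu(x) + F_b(x)\,p_\mu(x) \le p_b(x) + p_\mu(x),$$

where $p_{\max}$, $p_b$, $p_\mu$ are the densities of b_mu, bound and mu respectively.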

The same argument should hold for the min/max of any (proper) RVs!
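If you want a quick numerical sanity check outside the model (a sketch using scipy, with the same Uniform(-1, 0) and Normal(0, 1) pieces as above), the density of the maximum integrates to 1:

import numpy as np
from scipy import stats
from scipy.integrate import quad

b = stats.uniform(loc=-1, scale=1)  # bound ~ Uniform(-1, 0)
m = stats.norm(0., 1.)              # mu ~ Normal(0, 1)

# density of max(bound, mu) for independent bound and mu
def p_max(x):
    return b.pdf(x) * m.cdf(x) + b.cdf(x) * m.pdf(x)

# the density is zero below -1; split the integral at the kink at 0
total = quad(p_max, -1, 0)[0] + quad(p_max, 0, np.inf)[0]
print(total)  # ~1.0, i.e. the implied prior on b_mu is proper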

Also, that’s a cool pdf!


It is reassuring to know that what I have been doing is not wrong :grin: - I always thought of it as kind of a hack