Multiple priors on the same parameter?

I am usually a Stan user but am writing a bunch of jax functionality lately and am migrating over to using blackjax. I am still learning the ins and outs of pymc and had a basic question:

How can I replicate the following (contrived) Stan functionality:

theta ~ normal(0,10);
theta ~ normal(2,4);

that is, replicate this post for the prior on theta?

Similarly:

theta2 ~ normal(-1,5);
target += binomial_logit_lpmf(n1 | N1, theta);
target += binomial_logit_lpmf(n2 | N2, theta2);

that is, fit separate parameters to different data sources in the likelihood?

Thanks!

You can use pm.Potential to add arbitrary logp terms to the graph. We discourage that a bit because you lose the 1-to-1 representation between the random and the logp graphs and can’t guarantee correct results when using prior or posterior predictive sampling.

Anyway the Stan model API can be mimicked by something like:

theta = pm.Flat("theta")
pm.Potential("term1", pm.logp(pm.Normal.dist(2, 4), theta))
pm.Potential("term2", pm.logp(pm.Normal.dist(0, 10), theta))

For the likelihood, if I understand correctly you could specify two observed variables with the same data?
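For instance, putting the prior terms and the two data sources together, something roughly like this (a sketch; n1, N1, n2, N2 stand in for your data, and pm.math.invlogit plays the role of Stan's binomial_logit):

import pymc as pm

# placeholder data, just for illustration
n1, N1 = 7, 20
n2, N2 = 12, 30

with pm.Model() as model:
    # "multiple priors" on theta via Potential terms
    theta = pm.Flat("theta")
    pm.Potential("term1", pm.logp(pm.Normal.dist(2, 4), theta))
    pm.Potential("term2", pm.logp(pm.Normal.dist(0, 10), theta))
    # an ordinary prior on the second parameter
    theta2 = pm.Normal("theta2", mu=-1, sigma=5)
    # one observed variable per data source
    pm.Binomial("obs1", n=N1, p=pm.math.invlogit(theta), observed=n1)
    pm.Binomial("obs2", n=N2, p=pm.math.invlogit(theta2), observed=n2)
    idata = pm.sample()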

Okay, would this also work:

theta_prior = pm.Normal('theta_prior', mu=0, sigma=10)
theta = pm.Normal('theta', mu=theta_prior, sigma=4, observed=2)

for the priors?

For the likelihood, the use case would be (for example) if we had 3 time series: t1, t2 and t3.

Then we would have coefficients on each: theta1, theta2 and theta3. We want to fit each to its respective data, but we also know that theta1 and theta2 have theta3 as a prior.

So we would fit (in Stan):

target += t1 (fitting theta1)
target += t2 (fitting theta2)
target += t3 (fitting theta3)

and the prior has

theta1 ~ normal(theta3,1)
theta2 ~ normal(theta3,1)

or something similar.

So the likelihood is fitting multiple things simultaneously.

Maybe, but it feels rather opaque about the intention.

Sounds like you have 3 likelihoods, which is fine in a model. But perhaps I'm missing something.
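Something along these lines is what I have in mind (a rough sketch; the placeholder data and the normal likelihoods are just stand-ins for whatever your series actually look like):

import numpy as np
import pymc as pm

# placeholder series, just for illustration
t1 = np.random.normal(size=50)
t2 = np.random.normal(size=50)
t3 = np.random.normal(size=50)

with pm.Model() as model:
    theta3 = pm.Normal("theta3", mu=0, sigma=10)
    # theta1 and theta2 are centred on theta3, as in your Stan sketch
    theta1 = pm.Normal("theta1", mu=theta3, sigma=1)
    theta2 = pm.Normal("theta2", mu=theta3, sigma=1)
    sigma = pm.HalfNormal("sigma", 1)
    # three likelihoods, one per series
    pm.Normal("obs1", mu=theta1, sigma=sigma, observed=t1)
    pm.Normal("obs2", mu=theta2, sigma=sigma, observed=t2)
    pm.Normal("obs3", mu=theta3, sigma=sigma, observed=t3)
    idata = pm.sample()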

Okay, yeah, now I understand about the likelihood(s). I was overthinking it.

Okay the below works:

theta = pm.Flat("theta")
pm.Potential("term1", pm.logp(pm.Normal.dist(2, 4), theta))
pm.Potential("term2", pm.logp(pm.Normal.dist(0, 10), theta))

Now, what if we wanted to do a non-centered parameterization? pm.logp doesn't accept arithmetic on a distribution, so the following cannot be done:

theta = pm.Flat("theta")
pm.Potential("term1", pm.logp(2 + 4*pm.Normal.dist(0,1), theta))
pm.Potential("term2", pm.logp(10*pm.Normal.dist(0, 1), theta))

is there a way around this?

Okay, since pm.Normal.dist() is a draw from the normal distribution, do we not need to ever consider reparameterization in this case?

A non-centered parametrization would just be pm.logp(pm.Normal.dist(0, 1), theta). And then you separately define a new theta as new_theta = 2 + 4 * theta and use that wherever you want.

.dist() is not a draw (at least in this context). It’s just an object representing a specific distribution. As an input to logp, it specifies what density should be returned.
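So, concretely, the Potential version of the non-centered parameterization could look something like this (a sketch, with the same caveats about Potential as before):

import pymc as pm

with pm.Model() as model:
    theta = pm.Flat("theta")
    # standard-normal density on the raw parameter
    pm.Potential("term1", pm.logp(pm.Normal.dist(0, 1), theta))
    # shift and scale to recover the Normal(2, 4) version of the parameter
    new_theta = pm.Deterministic("new_theta", 2 + 4 * theta)
    # use new_theta downstream wherever theta was needed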

Having shown all this in the thread, I would still suggest trying to use PyMC as it's intended, that is, specifying a random generative graph and not thinking about the density side of it.

Thanks for the help. So as an applied example of this: I am building a generative model of retail sales and would like to express that “medium brown shirts sell like brown things AND medium shirts”.

Hence the prior sharing.

Are you suggesting that this type of model does not fit into the pymc usage as it is intended?

I’m not saying that. Instead I would ask whether you can define a generative random model of the sales. That would be your PyMC model
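For example (purely illustrative; the names and the Poisson likelihood are assumptions about your setup), additive colour and size effects on a log rate already encode “medium brown shirts sell like brown things AND medium shirts”:

import numpy as np
import pymc as pm

# placeholder: weekly unit sales of medium brown shirts
sales = np.array([12, 9, 15, 11, 8])

with pm.Model() as model:
    # shared effects, each informed by every item of that colour / size
    baseline = pm.Normal("baseline", mu=2, sigma=1)
    brown_effect = pm.Normal("brown_effect", mu=0, sigma=1)
    medium_effect = pm.Normal("medium_effect", mu=0, sigma=1)
    # the item-level rate combines both attributes
    log_rate = baseline + brown_effect + medium_effect
    pm.Poisson("obs", mu=pm.math.exp(log_rate), observed=sales)
    idata = pm.sample()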
