I am usually a Stan user, but I have been writing a lot of JAX functionality lately and am migrating over to BlackJAX. I am still learning the ins and outs of PyMC and had a basic question:
How can I replicate the following (contrived) Stan functionality:
```stan
theta ~ normal(0, 10);
theta ~ normal(2, 4);
```
that is, replicate these stacked statements for the prior on theta?
You can use pm.Potential to add arbitrary logp terms to the graph. We discourage that a bit because you lose the 1-to-1 representation between the random and the logp graphs and can’t guarantee correct results when using prior or posterior predictive sampling.
Anyway the Stan model API can be mimicked by something like:
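(A minimal sketch of the idea; using pm.Flat as the base variable and these names are my own choices, not necessarily the exact snippet from the original reply.)

```python
import pymc as pm

with pm.Model() as model:
    # A "flat" variable that only receives density through the Potentials below
    theta = pm.Flat("theta")

    # Each Potential adds one logp term, mirroring the two Stan sampling statements
    pm.Potential("theta_prior_1", pm.logp(pm.Normal.dist(0, 10), theta))
    pm.Potential("theta_prior_2", pm.logp(pm.Normal.dist(2, 4), theta))
```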
For the likelihood, the use case would be (for example) if we had 3 time series: t1, t2 and t3.
Then we would have coefficients on each: theta1, theta2 and theta3. We want to fit each to its respective data, but we also know that `theta1` and `theta2` have `theta3` as a prior.
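Something like this sketch, using the pm.Potential pattern from above (the series lengths, unit noise scales, and the simple Gaussian mean model are just placeholders for the real likelihood):

```python
import numpy as np
import pymc as pm

# placeholder data for the three series
t1, t2, t3 = np.random.randn(3, 50)

with pm.Model() as model:
    theta3 = pm.Normal("theta3", 0, 1)
    theta1 = pm.Flat("theta1")
    theta2 = pm.Flat("theta2")

    # theta1 and theta2 get theta3 as their prior via explicit logp terms
    pm.Potential("theta1_prior", pm.logp(pm.Normal.dist(theta3, 1), theta1))
    pm.Potential("theta2_prior", pm.logp(pm.Normal.dist(theta3, 1), theta2))

    # each coefficient is fit against its own series
    pm.Normal("t1_obs", mu=theta1, sigma=1, observed=t1)
    pm.Normal("t2_obs", mu=theta2, sigma=1, observed=t2)
    pm.Normal("t3_obs", mu=theta3, sigma=1, observed=t3)
```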
A non-centered parametrization would just be `pm.logp(pm.Normal.dist(0, 1), theta)`, and then you separately define a new theta as `new_theta = 2 + 4 * theta` (which gives the Normal(2, 4) prior) and use that wherever you want.
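(A rough sketch of that pattern, again with pm.Flat as an assumption and the Normal(2, 4) numbers from the question:)

```python
import pymc as pm

with pm.Model() as model:
    # standard-normal "raw" variable, with its logp supplied explicitly
    theta_raw = pm.Flat("theta_raw")
    pm.Potential("theta_raw_logp", pm.logp(pm.Normal.dist(0, 1), theta_raw))

    # shift and scale to obtain a Normal(2, 4) prior non-centrally
    new_theta = pm.Deterministic("new_theta", 2 + 4 * theta_raw)
```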
.dist() is not a draw (at least in this context). It’s just an object representing a specific distribution. As an input to logp, it specifies what density should be returned.
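For example, outside of any model (the numbers here are arbitrary):

```python
import pymc as pm

dist = pm.Normal.dist(0, 10)      # a distribution object, not a sample
print(pm.logp(dist, 1.5).eval())  # log-density of Normal(0, 10) evaluated at 1.5
print(pm.draw(dist))              # an actual draw requires an explicit call like pm.draw
```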
Having shown all this in the thread, I would still suggest trying to use PyMC as it is intended, that is, specifying a random generative graph and not thinking about the density side of it.
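For the time-series example above, that would mean writing the shared prior directly into the generative graph, roughly like this (same placeholder data and stand-in likelihood as before):

```python
import numpy as np
import pymc as pm

t1, t2, t3 = np.random.randn(3, 50)  # placeholder data

with pm.Model() as model:
    # theta3 enters directly as the prior mean of theta1 and theta2
    theta3 = pm.Normal("theta3", 0, 1)
    theta1 = pm.Normal("theta1", mu=theta3, sigma=1)
    theta2 = pm.Normal("theta2", mu=theta3, sigma=1)

    pm.Normal("t1_obs", mu=theta1, sigma=1, observed=t1)
    pm.Normal("t2_obs", mu=theta2, sigma=1, observed=t2)
    pm.Normal("t3_obs", mu=theta3, sigma=1, observed=t3)
```

Written this way, prior and posterior predictive sampling work without any extra care, which is the benefit of keeping the random and logp graphs in sync mentioned earlier.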
Thanks for the help. So as an applied example of this: I am building a generative model of retail sales and would like to encode that "medium brown shirts sell like brown things AND medium shirts".
Hence the prior sharing.
Are you suggesting that this type of model does not fit PyMC's intended usage?