Help refining priors

I’ve defined a model where my two priors are currently specified as follows:

```python
alpha = pm.Normal('alpha', mu=0.0, sigma=100)
beta = pm.Normal('beta', mu=0.0, sigma=100)
```

Now, there are two things I know for certain about the underlying data: alpha is positive and beta is negative.

Shouldn’t I then be able to improve my priors with this knowledge?

Hi Mattias,
You certainly can, and I’m guessing you can also tighten those standard deviations – right now, sigma = 100 says that roughly 95% of the prior probability mass sits between -200 and +200, which is probably much wider than you actually believe.
Depending on what’s most appropriate for your problem, an Exponential or a Gamma prior could be useful, since both put all of their mass on positive values.
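As a quick sanity check of those suggestions (plain NumPy, with illustrative rates and shapes rather than values from the thread): both distributions have strictly positive support, and negating the draws gives a strictly negative prior for the slope.

```python
import numpy as np

rng = np.random.default_rng(42)

# Exponential(lam=0.1) prior: strictly positive, mean 1/lam = 10
alpha_draws = rng.exponential(scale=10.0, size=10_000)

# Gamma(alpha=2, beta=0.5) prior: strictly positive, mean alpha/beta = 4;
# negate the draws to sketch a strictly negative prior for a slope
beta_draws = -rng.gamma(shape=2.0, scale=2.0, size=10_000)

print(alpha_draws.min() > 0, beta_draws.max() < 0)  # True True
```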

Chapter 5 of Richard McElreath’s Statistical Rethinking (2nd edition) has a very good section on the choice of priors – here is the port to PyMC3 :wink:
Hope this helps :vulcan_salute:

Hi Alex,

I had a look at the Gamma distribution and it does seem to work for my alpha (intercept), but I’m not sure it’s right for my beta (slope), since that should be negative.

You can turn a positive prior over x into a negative one by flipping the sign inside a new deterministic variable, e.g. `x_negative = pm.Deterministic('x_negative', -x)`

Oh, but wouldn’t that also flip y_est?

I don’t think so – you would just write `y_est = alpha + beta_negative * x`

Seems to be working quite well. Thanks a lot!

@ckrapu could you expand on why it’s defined as a deterministic var?

You wouldn’t have to use a Deterministic variable, but it would probably let you parse the results more quickly. If you are interested in the value of beta_negative, it’s nice to be able to extract it from the trace directly with `trace['beta_negative']` rather than having to remember to do `-1 * trace['beta']` every time you want to look at the posterior summaries of beta.

You’re welcome @mattiasthalen – welcome to Bayesian magical land :wink: :star_struck:

Thanks, love the podcast btw!

Ah thanks! Nice to meet listeners here :wink: