Multiple priors in regression model

Hi PyMC3 team and community! :slight_smile:

I am currently working on Bayesian parameter estimation using PyMC3. Unfortunately, I am stuck and confused about how to make use of different priors in a vectorized regression model.

In the following, I'll briefly describe my experience with PyMC3 and what I have managed to do so far, before describing where I am stuck.

As mentioned in other channels, I was able to estimate parameters in a vectorized, linear regression model in the following way:
I have a matrix A with N rows (equal to the number of observations) and M columns (equal to the number of parameters), and the observations themselves as a vector d with N rows and a single column.
So far so good: I was able to create my "vectorized priors" for beta using the shape argument shape=(M,1) and then simply use the dot operation for the regression:

d_est = pm.math.dot(A, beta)
l = pm.Normal('d', mu=d_est, sigma=e, observed=d)
trace = pm.sample(10000, step=step, start=start, progressbar=True, cores=1, chains=4)

This works like a charm and I am happy with the results, which are very close to those from Matlab.
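In NumPy terms, the shape bookkeeping of this vectorized setup can be sketched as follows (N, M, and the random values are just placeholders standing in for the priors described above):

```python
import numpy as np

N, M = 100, 3  # number of observations and number of parameters (example values)

rng = np.random.default_rng(0)
A = rng.normal(size=(N, M))     # design matrix: N observations x M parameters
beta = rng.normal(size=(M, 1))  # stands in for the vectorized prior with shape=(M, 1)

d_est = A @ beta                # same shape logic as pm.math.dot(A, beta)
print(d_est.shape)              # (100, 1), matching the observed column vector d
```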

Now here is where I am stuck:

So far, I have only been able to put the same prior on all parameters beta, as I am using the shape argument. In the future, I'd like to put different priors on different parameters. Here is what I came up with so far:

e = pm.InverseGamma('e', alpha=3, beta=1)
p[1] = pm.Normal('beta1', mu=1.0, sigma=1.0, shape=(1,1));
p[2] = pm.InverseGamma('beta2', alpha=3, beta=1, shape=(1,1));
p[3] = pm.Normal('beta3', mu=1.0, sigma=1.0, shape=(1,1));
beta = tt.concatenate§
d_est = pm.math.dot(A, beta)
l = pm.Normal('d', mu=d_est, sigma=e, observed=d)

The code runs, but the estimated parameters are all very close to 0, whereas the results from Matlab with simple normal priors look fine.

My guess is that I am messing something up with the concatenate function or the dot function there, but I have no clue how to investigate the issue any further, as I am not familiar with Theano at all.

I am grateful for any support on how I can "concatenate"/"stack" different priors on the parameters to make this work :slight_smile:

Thank you very much in advance!

Best from Germany,

Max


PS: PyMC3 is a great library, btw, and the tutorials make it easy to dive into probabilistic programming!

Small tip: you can wrap your code block in ``` to make it show nicely.

I am not sure what you are doing with the p[1]… p[3]. Also the tt.concatenate looks malformatted, so I cannot tell whether you are doing something wrong there. In theory it should be fine to have different priors, concatenate them, and then do the dot product, as you did.

For instance, you should get the same results if you do:

beta = pm.Normal('beta', mu=1, sigma=1, shape=3)
d_est = pm.math.dot(A, beta)  # you might need to transpose something here

or:

beta1 = pm.Normal('beta1', mu=1, sigma=1)
beta2 = pm.Normal('beta2', mu=1, sigma=1)
beta3 = pm.Normal('beta3', mu=1, sigma=1)
beta = tt.concatenate([beta1, beta2, beta3])  # or tt.stack (I never remember)
d_est = pm.math.dot(A, beta)  # again you might need to transpose something

If you don't, you are probably doing something wrong in the concatenation part.
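To illustrate the concatenate/stack distinction in plain NumPy (Theano's tt.concatenate and tt.stack broadly follow NumPy semantics here): concatenate joins arrays along an existing axis, so it works for (1,1)-shaped pieces but raises on scalars, while stack creates a new axis, so it is the one to use for scalar random variables:

```python
import numpy as np

# Three (1, 1)-shaped pieces, like priors created with shape=(1, 1):
b1, b2, b3 = np.array([[1.0]]), np.array([[2.0]]), np.array([[3.0]])
beta_col = np.concatenate([b1, b2, b3], axis=0)  # joins along existing axis 0
print(beta_col.shape)  # (3, 1) -> a column vector, suitable for dot(A, beta)

# Scalar pieces, like priors created without a shape argument:
s1, s2, s3 = 1.0, 2.0, 3.0
beta_vec = np.stack([s1, s2, s3])  # creates a new axis from the scalars
print(beta_vec.shape)  # (3,); np.concatenate would raise on these scalars
```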


Thank you very much for the super quick reply! :slight_smile:
Ok, then I am glad that I am on the right track. I'll try to locate my issue there.

Thank you!!