Why does a Deterministic with observed data change the dimensions of the variables?

Hi everyone,
I'm starting to model some basic examples in PyMC, but after adding the observed parameter to one of the distributions, the outcome changes (similar to the question here). My example is the following:

import arviz as az
import pymc as pm

with pm.Model() as model:
    u1 = pm.Uniform("U_1", 0, 4)  # prior 1
    u2 = pm.Uniform("U_2", 0, 4)  # prior 2
    det_1 = pm.Deterministic("delta", u1 - u2)  # difference of the two priors
    idata = pm.sample(1000, tune=1000, cores=1)
az.summary(idata)

This outputs:

        mean     sd  hdi_3%  hdi_97%  mcse_mean  mcse_sd  ess_bulk  ess_tail  r_hat
U_1    2.012  1.138   0.203    3.929      0.028    0.021    1617.0    1049.0    1.0
U_2    2.030  1.181   0.216    3.990      0.030    0.022    1573.0    1158.0    1.0
delta -0.018  1.638  -3.104    2.947      0.041    0.036    1632.0    1398.0    1.0

which makes sense: the difference, a "deterministic" variable, is computed from the samples of both uniforms. Now I want to add some observations to the model for one of the uniforms:

observed = [1, 2, 1, 1, 1, 1, 2, 2, 3, 1, 1, 2, 2, 3]
with pm.Model() as model:
    u1 = pm.Uniform("U_1", 0, 4)  # prior 1
    u2 = pm.Uniform("U_2", 0, 4, observed=observed)  # prior 2, now with observations
    det_1 = pm.Deterministic("delta", u1 - u2)
    idata = pm.sample(1000, tune=1000, cores=1)
az.summary(idata)

now it outputs:

            mean     sd  hdi_3%  hdi_97%  mcse_mean  mcse_sd  ess_bulk  ess_tail  r_hat
U_1        2.007  1.131   0.154    3.856      0.039    0.027     822.0     888.0    1.0
delta[0]   1.007  1.131  -0.846    2.856      0.039    0.027     822.0     888.0    1.0
delta[1]   0.007  1.131  -1.846    1.856      0.039    0.027     822.0     888.0    1.0
delta[2]   1.007  1.131  -0.846    2.856      0.039    0.027     822.0     888.0    1.0
delta[3]   1.007  1.131  -0.846    2.856      0.039    0.027     822.0     888.0    1.0
delta[4]   1.007  1.131  -0.846    2.856      0.039    0.027     822.0     888.0    1.0
delta[5]   1.007  1.131  -0.846    2.856      0.039    0.027     822.0     888.0    1.0
delta[6]   0.007  1.131  -1.846    1.856      0.039    0.027     822.0     888.0    1.0
delta[7]   0.007  1.131  -1.846    1.856      0.039    0.027     822.0     888.0    1.0
delta[8]  -0.993  1.131  -2.846    0.856      0.039    0.028     822.0     888.0    1.0
delta[9]   1.007  1.131  -0.846    2.856      0.039    0.027     822.0     888.0    1.0
delta[10]  1.007  1.131  -0.846    2.856      0.039    0.027     822.0     888.0    1.0
delta[11]  0.007  1.131  -1.846    1.856      0.039    0.027     822.0     888.0    1.0
delta[12]  0.007  1.131  -1.846    1.856      0.039    0.027     822.0     888.0    1.0
delta[13] -0.993  1.131  -2.846    0.856      0.039    0.028     822.0     888.0    1.0

which I think is some kind of "delta posterior for each sample" (?). I don't really understand why it gives n_observed results. Does each delta[i] represent an iteration after adding that observation to the posterior? How can I compact the result to understand how the delta indicator will be distributed given the observed data? Why does it give n_observed results?

Also, after adding the observed parameter to the U_2 distribution, why does it disappear from az.summary(idata)? How can I see what the posterior would look like for U_2? Doesn't it just add more info to the model?

thanks


When you pass observed, the shape of the data is used to inform the shape of the variable. So in your case it's as if you had also passed shape=len(observed); to simplify users' lives, PyMC does this automatically.
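A minimal sketch of that shape inference, just to confirm what ends up being built (the shape check at the end is only for illustration):

import pymc as pm

observed = [1, 2, 1, 1, 1, 1, 2, 2, 3, 1, 1, 2, 2, 3]
with pm.Model():
    # observed has 14 entries, so PyMC builds U_2 as a length-14 vector,
    # as if shape=len(observed) had been passed explicitly
    u2 = pm.Uniform("U_2", 0, 4, observed=observed)

print(u2.shape.eval())  # [14]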

The only way you could observe multiple values is if you had multiple variables, and that’s exactly what PyMC does.

Second, when you pass observed you are telling PyMC that the values of those variables are fixed, so there’s nothing new to show on the posterior for u2. Those are not sampled.
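If you want to see those fixed values, they are stored alongside the trace rather than in the posterior; a quick way to check, assuming you kept the idata from your second model:

# observed values end up in the observed_data group, not in idata.posterior
idata.observed_data["U_2"]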

Finally, the reason you see multiple values for the deterministic is due to broadcasting. It gives you u1 - observed, where observed is the constant vector of data and u1 is the variable being sampled at each iteration. u1 is broadcast to the shape of observed and then the subtraction is done, just like 5 - np.asarray(observed) (for example) would be.
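Concretely, with your numbers: each posterior draw of u1 is a scalar that gets broadcast against the 14 observed values. A plain NumPy sketch of one such (hypothetical) draw:

import numpy as np

observed = np.asarray([1, 2, 1, 1, 1, 1, 2, 2, 3, 1, 1, 2, 2, 3])
u1_draw = 2.0  # one hypothetical scalar draw of U_1
delta_draw = u1_draw - observed  # broadcast: scalar minus length-14 vector
print(delta_draw.shape)  # (14,) -- one delta per observation, like the summary above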


Hi @ricardoV94, thanks for your fast response! Some follow-up questions:

The only way you could observe multiple values is if you had multiple variables, and that’s exactly what PyMC does.

but what if I sample that "variable" (U_2) several times (for example in an iterative survey), so I have n_observed samples over time? How can I add that into PyMC so that shape=1? (I tried to force shape=1 but I got:

TypeError: Cannot convert Type TensorType(float64, (14,)) (of Variable U_2{[1. 2. 1. .. 2. 2. 3.]}) into Type TensorType(float64, (1,)). You can try to manually convert U_2{[1. 2. 1. .. 2. 2. 3.]} into a TensorType(float64, (1,)).

Second, when you pass observed you are telling PyMC that the values of those variables are fixed, so there’s nothing new to show on the posterior for u2 . Those are not sampled.

I see, but what if I receive some partial past results (from, for example, past experiments or surveys)? I still want to know what the posterior of U_2 would look like after having some observations. How can I do that?

Finally, the reason you see multiple values for the deterministic is due to broadcasting. It gives you u1 - observed, where observed is the constant vector of data and u1 is the variable being sampled at each iteration. u1 is broadcast to the shape of observed and then the subtraction is done, just like 5 - np.asarray(observed) (for example) would be.

Got it, that makes sense.

In Bayesian settings there is no such thing as observing a variable multiple times. You can observe multiple realizations of the same process, but each realization is its own variable.

You have to model how each observation relates to the underlying process you care about. The simplest case is when you have observations with Gaussian error around the mean of the process, perhaps with a known standard deviation, or one that must also be inferred.

In that case your model may look like:

observed = [1, 2, 1, 1, 1, 1, 2, 2, 3, 1, 1, 2, 2, 3]
with pm.Model() as model:
    u1 = pm.Uniform("U_1", 0, 4)  # prior 1
    u2 = pm.Uniform("U_2", 0, 4)  # prior 2, latent again

    error_std = pm.HalfNormal("error_std")  # measurement noise, also inferred
    obs = pm.Normal("obs", mu=u2, sigma=error_std, observed=observed)  # likelihood

    det_1 = pm.Deterministic("delta", u1 - u2)
    idata = pm.sample(1000, tune=1000, cores=1)
az.summary(idata)
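With that likelihood in place U_2 is back in the trace, so you can inspect its posterior directly; for example, with ArviZ's standard summary and plotting functions:

import arviz as az

# U_2 and delta are now proper posterior variables again
az.summary(idata, var_names=["U_2", "delta"])
az.plot_posterior(idata, var_names=["U_2", "delta"])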

Thanks again for the response :)

In Bayesian settings there is no such thing as observing a variable multiple times. You can observe multiple realizations of the same process, but each realization is its own variable.

Yes! That's what I mean: I have "multiple realizations" and therefore "multiple observations" of the same process. How can I add those observations into the process? I was following this example, and as you can see they add a second variable to get simulations from the posterior.

Edit note: I found out that what I was looking for was a posterior predictive distribution, mainly because I wanted to draw samples from the posterior in order to validate an observable variable (as in the Challenger example). I used the method described in chapter 2, but according to the documentation there seems to be a method pm.sample_posterior_predictive that does that. Is there a difference from adding a copy of the variable in the model? Not sure. This is the example based on the book:

observed = [1, 2, 1, 1, 1, 1, 2, 2, 3, 1, 1, 2, 2, 3]
with pm.Model() as model:
    u1 = pm.Uniform("U_1", 0, 4)  # prior 1
    u2 = pm.Uniform("U_2", 0, 4)

    error_std = pm.HalfNormal("error_std")
    obs = pm.Normal("obs", mu=u2, sigma=error_std, observed=observed)
    # unobserved copy of obs: gives posterior samples to infer how obs will look
    obs_sim = pm.Normal("obs_sim", mu=u2, sigma=error_std)
    det_1 = pm.Deterministic("delta", u1 - u2)
    idata = pm.sample(1000, tune=1000, cores=1)
az.summary(idata)

[In that code obs_sim will get samples from the posterior of obs (? not sure here), and we can use that posterior to infer HDIs and so on.]

and the code using the docs should be something like this:

observed = [1, 2, 1, 1, 1, 1, 2, 2, 3, 1, 1, 2, 2, 3]
with pm.Model() as model:
    u1 = pm.Uniform("U_1", 0, 4)  # prior 1
    u2 = pm.Uniform("U_2", 0, 4)

    error_std = pm.HalfNormal("error_std")
    obs = pm.Normal("obs", mu=u2, sigma=error_std, observed=observed)
    det_1 = pm.Deterministic("delta", u1 - u2)
    idata = pm.sample(1000, tune=1000, cores=1)
    obs_sim = pm.sample_posterior_predictive(idata, var_names=["U_2", "obs"])
az.summary(idata)
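To actually look at those predictive draws, one option (assuming a recent PyMC version where sample_posterior_predictive returns an InferenceData) would be something like:

# draws for "obs" live in the posterior_predictive group of the returned InferenceData
pp_obs = obs_sim.posterior_predictive["obs"]
print(az.hdi(pp_obs))      # HDI of the predictive distribution
az.plot_posterior(pp_obs)  # or plot it directly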

Yes, posterior predictive is what you want. It is more efficient and does not disturb your model posterior. Otherwise your obs_sim is adding an additional prior term to your model that you probably don't want.
