# Propagation of uncertainty

Hello PyMC community,

I have a question about the propagation of uncertainties.

I have estimated some parameters through a regression on data, and thanks to Bayes and PyMC I have their mean, standard deviation, and PDF.

Now, I would like to combine these parameters with others through a different function than the one used in the data regression to see how well they might follow an experimental data set.

So, in this case, I don’t need to optimize the parameters, just to see how well these parameters follow the expected trend.

My question is about the best way to associate an uncertainty bar with the mean trend.

Instead of propagating the uncertainties in the classical (first-order) form, what I think is more correct to do, since I have PDFs of the variables, is to randomly draw values from the various PDFs, push them through the function, and average the results.

Is this correct? Is there any example to guide me?

Can you provide a bit more details of your scenario? I am unclear what you mean by “associate an uncertainty bar with the mean trend”. Is it just a matter of combining estimated parameter values in a new way (the “different function” you mention) to generate predictions about another data set?

A simplified example to clarify:
I have a set of points (x, y) and a linear model whose parameters I can estimate with a fit y = ax + b in PyMC.

In this way I find the PDFs for a and b.

Next, I want to use these parameters in another function
y' = a + c·x' + e^(b·x')
to see the predictions with respect to a different dataset.
Is there an example that can guide me?

Just use samples from the posterior of the parameters!

So after you fit your model in PyMC, write a loop that takes one sample from the trace for `a` and `b`, drops those values into your next function for `y'`, and computes a single output. Repeat that process to build up samples from the pdf of `y'`.
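A minimal sketch of that idea, using NumPy arrays to stand in for the posterior draws of `a` and `b` (in a real run you would pull them from the trace, e.g. `trace.posterior["a"].values.ravel()`) and an assumed fixed value for `c`:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for posterior draws of a and b -- in practice these come
# from the PyMC trace, not from rng.normal
a_draws = rng.normal(0.4, 0.04, size=1000)
b_draws = rng.normal(0.9, 0.2, size=1000)
c = 0.03  # assumed fixed here, purely for illustration

x_prime = np.linspace(0.0, 1.0, 50)

# One y' curve per posterior draw: shape (1000, 50)
y_prime_samples = np.array(
    [a + c * x_prime + np.exp(b * x_prime) for a, b in zip(a_draws, b_draws)]
)

# Mean trend plus a 94% interval to use as the "uncertainty bar"
y_mean = y_prime_samples.mean(axis=0)
y_lo, y_hi = np.percentile(y_prime_samples, [3, 97], axis=0)
```

The spread of `y_prime_samples` at each `x'` is exactly the propagated uncertainty you asked about, with no linearization needed.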

You can get the same result by setting `y'` as a `Deterministic` in your model, something like

```
pm.Deterministic("y_prime", a + c * x_prime + pm.math.exp(b * x_prime))
```

and then you’ll find `y_prime` in your trace or `idata` object with the other parameters you fit.


Sorry if this is a dumb question, but I'm stuck.
This is my code (more or less).

I'm not able to sample from the trace for `a` and `b`:

```
with pm.Model() as a_model:
    a = pm.Normal('a', mu=0.4, sigma=0.04)
    b = pm.Normal('b', mu=0.9, sigma=0.2)
    c = pm.Normal('c', mu=0.03, sigma=0.01)
    e = pm.HalfCauchy('e', beta=0.1)
    u = func(x, a, b, c)

    y_pre = pm.Normal('y_pre', mu=u, sigma=e, observed=y_obs)
    trace_a = pm.sample(2000, tune=500, random_seed=42)

b_m = float(trace_a.posterior["b"].mean())

with pm.Model() as b_model:
    e2 = pm.HalfCauchy('e2', beta=0.1)
    f = pm.Normal('f', mu=0.3, sigma=0.1)
    g = pm.Normal('g', mu=0.6, sigma=0.02)

    u2 = func2(x2, b_m, f, g)

    y_pre2 = pm.Normal('y_pre2', mu=u2, sigma=e2, observed=y_obs2)
    trace_b = pm.sample(2000, tune=500, random_seed=42)

t_post_a = trace_a.posterior
t_post_b = trace_b.posterior

final = func3(x3, t_post_a["b"], t_post_a["a"], t_post_a["c"],
              t_post_b["f"], t_post_b["g"])
```

I think the easiest way is to use arviz's `extract()` function:

```
samples = az.extract(idata, group="posterior", num_samples=10)
```
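To show how those extracted draws feed into the final prediction, here is a sketch that combines draws from two posteriors elementwise, broadcasting each draw over the new `x3` grid. NumPy arrays stand in for the extracted draws, and `func3` below is a hypothetical placeholder (the real one isn't shown in the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # number of posterior draws to combine

# Stand-ins for draws extracted from the two traces
# (e.g. az.extract(trace_a, num_samples=n)["b"].values, etc.)
b = rng.normal(0.9, 0.2, n)
a = rng.normal(0.4, 0.04, n)
c = rng.normal(0.03, 0.01, n)
f = rng.normal(0.3, 0.1, n)
g = rng.normal(0.6, 0.02, n)

x3 = np.linspace(0.0, 2.0, 30)

# Hypothetical func3 purely for illustration -- substitute your own.
# The [:, None] indexing broadcasts each draw over the x3 grid.
def func3(x, b, a, c, f, g):
    return a[:, None] + c[:, None] * x + f[:, None] * np.exp(b[:, None] * x) * g[:, None]

final = func3(x3, b, a, c, f, g)            # shape (n, len(x3)): one curve per draw
final_mean = final.mean(axis=0)             # mean trend
final_band = np.percentile(final, [3, 97], axis=0)  # 94% uncertainty band
```

Note that drawing matched samples from the same trace keeps any posterior correlations between parameters intact, which taking independent means would throw away.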

Wouldn’t it make sense to run both of these models on the same MCMC chain, and pass the random variable `b` into `func2` directly? Just taking the mean discards the uncertainty about it.

Yeah, I wasn't sure what was going on with the multiple models and honestly didn't take a close look. But yes, taking means of (marginal) posteriors is a bad idea.