I have a question about the propagation of uncertainties.
Currently, I have estimated some parameters through a regression on data, and thanks to Bayes and PyMC I have their mean, sd, and pdf.
Now I would like to combine these parameters with others, through a function different from the one used in the regression, to see how well they reproduce an experimental data set.
So, in this case, I don’t need to optimize the parameters; I just need to see how well they follow the expected trend.
My question is about the best way to associate an uncertainty bar with the mean trend.
Instead of propagating the uncertainties in the classical analytical way, what I think is more correct, since I have the pdfs of the variables, is to randomly draw values from those pdfs, evaluate the new function for each draw, and average the results.
Is this correct? Is there any example to guide me?
Can you provide a bit more detail about your scenario? I am unclear what you mean by “associate an uncertainty bar with the mean trend”. Is it just a matter of combining the estimated parameter values in a new way (the “different function” you mention) to generate predictions about another data set?
A simplified example to clarify
I have a set of points (x, y) and a linear model whose parameters I can estimate with a fit y = a*x + b in PyMC.
In this way I find the pdfs for a and b.
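For concreteness, the fit I have in mind is something like this minimal sketch (the data and priors are only placeholders):

import numpy as np
import pymc as pm

# placeholder data for the regression step
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + np.random.normal(0.0, 0.1, size=x.size)

with pm.Model() as linear_model:
    a = pm.Normal("a", mu=0.0, sigma=10.0)      # slope
    b = pm.Normal("b", mu=0.0, sigma=10.0)      # intercept
    sigma = pm.HalfNormal("sigma", sigma=1.0)   # observation noise

    pm.Normal("y_obs", mu=a * x + b, sigma=sigma, observed=y)

    idata = pm.sample()  # posterior pdfs of a and b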
Next, I want to use these parameters in another function,
y' = a + c*x' + e^(b*x')
to see how its predictions compare with a different dataset.
Is there an example that can guide me?
Just use samples from the posterior of the parameters!
So after you fit your model in PyMC, write a loop that takes one sample from the trace for a and b, drops those values into your next function for y', and computes a single output. Repeat that process to build up samples from the pdf of y'.
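A rough sketch of that loop, assuming the fit produced an InferenceData object called idata and that c and the new inputs x_prime are known constants (both are placeholders here):

import numpy as np

# flatten chains and draws into flat arrays of posterior samples
a_samples = idata.posterior["a"].values.ravel()
b_samples = idata.posterior["b"].values.ravel()

x_prime = np.linspace(0.0, 1.0, 100)  # placeholder for the new dataset's inputs
c = 0.5                               # placeholder for the known constant c

# push every posterior draw through y' = a + c*x' + e^(b*x')
y_prime_samples = np.array(
    [a + c * x_prime + np.exp(b * x_prime) for a, b in zip(a_samples, b_samples)]
)

# mean trend plus a credible band to use as the uncertainty bar
y_prime_mean = y_prime_samples.mean(axis=0)
y_prime_low, y_prime_high = np.percentile(y_prime_samples, [3, 97], axis=0)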
You can get the same result by setting y' as a “deterministic” in your model, something like
pm.Deterministic("y_prime", a + c * x_prime + pm.math.exp(b * x_prime))
and then you’ll find y_prime in your trace or idata object with the other parameters you fit.
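In context that could look roughly like this (again assuming c and x_prime are known constants, and x, y are the original regression data):

import pymc as pm

with pm.Model() as model:
    a = pm.Normal("a", mu=0.0, sigma=10.0)
    b = pm.Normal("b", mu=0.0, sigma=10.0)
    sigma = pm.HalfNormal("sigma", sigma=1.0)

    # original regression likelihood, y = a*x + b
    pm.Normal("y_obs", mu=a * x + b, sigma=sigma, observed=y)

    # the new function, evaluated on every posterior draw
    pm.Deterministic("y_prime", a + c * x_prime + pm.math.exp(b * x_prime))

    idata = pm.sample()

# posterior samples of y', with shape (chain, draw, len(x_prime))
y_prime = idata.posterior["y_prime"]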
Wouldn’t it make sense to run both of these models on the same MCMC chain, and pass the random variable b into func2 directly? Just taking the mean discards the uncertainty about it.
Yeah, I wasn’t sure what was going on with the multiple models and honestly didn’t take a close look. But yes, taking means of (marginal) posteriors is a bad idea.