Sensitivity analysis after Bayesian calibration with PyMC3

Hi everyone,

I am a PhD student working on a simple Bayesian calibration workflow for the open-source global glacier model OGGM (using PyMC3). I am wondering how one can assess the influence of the prior assumptions and the observation uncertainties on the total uncertainty represented by the posterior distributions after the calibration.

To make clearer what I mean, let me explain the workflow in a bit more detail:
(*if you have a general answer to the question above and don’t want to go into the details, you can skip this :smile:*)

  • At the moment, I have two free parameters (a melt factor and a precipitation factor) that are calibrated for each glacier against two observations. The prior distributions of the free parameters shown in the Figure (panels a, c) come from a pre-calibration on a set of reference glaciers plus additional information. The observations for each glacier have their own uncertainties (Fig. b), and the calibration yields posterior distributions for the parameters like the one in Fig. d. Because the observation data are poor, we have an equifinality problem: if we did not use constrained priors and used just one observation, we would find infinitely many parameter combinations that fit the observation perfectly (the two parameters are correlated). However, even if two combinations reproduce the same mean value over the calibration period, they can still lead to different glacier evolutions in the future. So it might be important not to simply fix one of the two parameters to an arbitrary value.

  • After the calibration (with the NUTS sampler), I take 200 random draws of the two parameters (Fig. d) and use them to compute projections of glacier evolution for each glacier (I can’t use more draws because there are too many glaciers, >200,000). With that I hope to propagate the uncertainties from the calibration (both from the observations and from equifinality) into the projections. This works so far, at least in a test region.
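For concreteness, this is roughly how I take the 200 draws. The posterior array below is just a synthetic stand-in for the real PyMC3 trace (means, covariance and sample size are made up); the point is that I sample whole rows so the parameter correlation survives:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for the PyMC3 trace: joint posterior samples of
# (melt factor, precipitation factor). The negative covariance mimics the
# parameter correlation behind the equifinality problem.
posterior = rng.multivariate_normal(
    mean=[3.0, 1.5],
    cov=[[0.4, -0.25], [-0.25, 0.3]],
    size=4000,
)

# Draw 200 *joint* samples (whole rows) so the correlation between the two
# parameters is preserved in the projections; resampling each marginal
# separately would destroy exactly the structure I want to propagate.
idx = rng.choice(posterior.shape[0], size=200, replace=False)
draws = posterior[idx]
```

Each of the 200 rows is then one parameter combination fed into a projection run.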

  • What I would like to analyse now is the following: where do these uncertainties come from? What is the influence of the different free parameters and prior choices compared to the observation uncertainties? In other words, are the total uncertainties mainly the result of the observation uncertainties, or does the equifinality problem also play a role?
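To make the kind of quantitative estimate I am after concrete: one simple idea I had is a "main effect" index Var(E[Y | Xᵢ]) / Var(Y), estimated by binning the posterior samples of one parameter. Here is a numpy-only toy sketch — the joint posterior and the projection function are entirely made up, and note that with correlated parameters these indices need not sum to one:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic joint posterior of (melt factor, precipitation factor),
# negatively correlated as in the equifinality case.
post = rng.multivariate_normal([3.0, 1.5], [[0.4, -0.25], [-0.25, 0.3]],
                               size=20000)

# Hypothetical scalar projection output (e.g. future volume change);
# in reality this would be a full OGGM run per parameter draw.
y = -2.0 * post[:, 0] + 0.8 * post[:, 1] ** 2

def main_effect_index(x, y, n_bins=50):
    """Estimate Var(E[Y | X]) / Var(Y) by binning X into quantile bins
    and averaging Y within each bin. Works on given (possibly correlated)
    samples, but the indices then lose the usual 'sum <= 1' property."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    cond_means = np.array([y[bins == b].mean() for b in range(n_bins)])
    weights = np.bincount(bins, minlength=n_bins) / x.size
    return np.sum(weights * (cond_means - y.mean()) ** 2) / y.var()

s_melt = main_effect_index(post[:, 0], y)
s_prcp = main_effect_index(post[:, 1], y)
```

Whether this is a sound way to attribute posterior-projection variance to individual (correlated) parameters is exactly part of my question.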

I thought that a Global Sensitivity Analysis (GSA) after the Bayesian calibration (on the posterior distribution) could give me some quantitative estimates, but I could not find any details beyond a short mention in a book by Saltelli (2004, Ch. 6.5). Does anyone know of example code that uses PyMC3 and analyses the influence of different calibration assumptions and observation uncertainties on the posterior distributions? It does not need to be a GSA; maybe there is a much simpler way to do this in my case :wink:. One possibility would be to repeat the calibration several times, e.g. 1) ignoring the observation uncertainties, or 2) fixing one parameter beforehand, and then to compare the resulting uncertainties with each other. But I am sure there is a better way to do this!
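To illustrate what I mean by option 2): a crude numpy comparison of the projection spread under the full posterior vs. with one parameter held at its posterior mean. Everything here is synthetic (toy posterior, toy projection function), and with correlated parameters one would really have to use the conditional posterior or re-run the calibration rather than just freezing a marginal, so this is only a first approximation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic joint posterior of (melt factor, precipitation factor);
# negative correlation mimics the equifinality structure.
post = rng.multivariate_normal([3.0, 1.5], [[0.4, -0.25], [-0.25, 0.3]],
                               size=5000)

def project(melt_f, prcp_f):
    # Hypothetical projection metric; the real thing would be an OGGM run.
    return -2.0 * melt_f + 0.8 * prcp_f ** 2

# Spread of projections under the full joint posterior.
sd_full = project(post[:, 0], post[:, 1]).std()

# Variant: one parameter frozen at its posterior mean, the other still
# varying over its marginal (crude -- ignores the correlation).
sd_fix_melt = project(post[:, 0].mean(), post[:, 1]).std()
sd_fix_prcp = project(post[:, 0], post[:, 1].mean()).std()
```

Comparing `sd_fix_melt` and `sd_fix_prcp` against `sd_full` would then give a rough picture of how much each parameter contributes to the projection spread, but re-running the full calibration for each variant seems more defensible, hence my question.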

Thanks a lot and sorry for the long text,
