I am trying to parameterize a model in the following way:
I have data from a series of experiments. The experiments should all give a consistent result for one metric (i.e. one of the parameters should be unimodal across experiments). But some experiments fail, in which case their data carry no information about that result.
For each experiment, I want to average two models: a ‘passed experiment’ model and a ‘failed experiment’ model. Both are different geometric fits to the data, and the only thing they share is the predicted data. In the ‘failed experiment’ model, the parameter of interest from the ‘passed experiment’ model keeps its prior distribution, but the data don’t inform it at all.
Some of the experiments are ambiguous as to whether they passed or failed (either model could conceivably apply), so I want to perform model averaging for each experiment individually. I then want the result from each experiment to inform a top-level parameter, so that the experiments share a common mean. Effectively, I am weighting each experiment in a hierarchical model by how well it conforms to the ‘pass’ model vs. the ‘fail’ model.
Is there any way to parameterize a model like this (with the model comparison done at the individual-experiment level rather than the population level) in PyMC3? None of the examples I have seen do this.
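To make the per-experiment averaging concrete, here is a minimal pure-Python sketch (toy log-likelihood values, not a real PyMC3 model) of the marginalization I have in mind: each experiment’s likelihood is a two-component mixture of the ‘pass’ and ‘fail’ models, and the posterior ‘pass’ probability falls out of Bayes’ rule. The function names and the prior weight of 0.5 are my own illustrative choices.

```python
import math

def mixture_loglik(logp_pass, logp_fail, prior_pass=0.5):
    """Marginal log-likelihood of one experiment under a pass/fail
    mixture: log(w * L_pass + (1 - w) * L_fail)."""
    a = math.log(prior_pass) + logp_pass
    b = math.log(1.0 - prior_pass) + logp_fail
    m = max(a, b)  # log-sum-exp trick for numerical stability
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def posterior_pass_prob(logp_pass, logp_fail, prior_pass=0.5):
    """Posterior probability that the experiment passed,
    i.e. the per-experiment model weight."""
    return math.exp(math.log(prior_pass) + logp_pass
                    - mixture_loglik(logp_pass, logp_fail, prior_pass))

# Toy numbers: if the data fit the 'pass' model much better,
# that experiment's weight approaches 1; if the fits are equal,
# the weight stays at the prior.
print(posterior_pass_prob(-1.0, -5.0))  # ≈ 0.982
print(posterior_pass_prob(-3.0, -3.0))  # → 0.5 (ambiguous experiment)
```

My understanding is that something equivalent could be expressed in PyMC3 with `pm.Mixture` (or a custom `Potential` built from this kind of log-sum-exp), so the discrete pass/fail indicator is marginalized out rather than sampled, but I am not sure that is the intended way to do it at the individual-experiment level.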