So this is half a stats question, half a PyMC3 question. Let’s say I have data from 5 participants, and 3 models that I want to evaluate (although the actual numbers in my context are much higher).
Analysis 1
I might be quite interested in fitting each of the 3 models to each of the 5 participants entirely independently. We can do that, then use WAIC/LOO model comparison to come up with the best model for each participant separately. This is fine, and could look something like the sketch below.
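A minimal sketch of this analysis, where `build_model_1/2/3` are hypothetical helpers that each return a PyMC3 model for one participant’s data, and `datasets` is a list of per-participant data (both are stand-ins for the real setup):

```python
import arviz as az
import pymc3 as pm

# Hypothetical helpers: build_model_1/2/3 each return a PyMC3 model for one
# participant's data, and `datasets` is a list of per-participant data.
model_builders = {"m1": build_model_1, "m2": build_model_2, "m3": build_model_3}

idatas = {}  # (participant, model name) -> InferenceData
for p, data in enumerate(datasets):
    for name, build in model_builders.items():
        with build(data):
            idatas[(p, name)] = pm.sample(return_inferencedata=True)

# One WAIC/LOO model comparison per participant
for p in range(len(datasets)):
    print(az.compare({name: idatas[(p, name)] for name in model_builders}, ic="loo"))
```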
Analysis 2
Now let’s say I want to know which of my 3 models best accounts for all participants. What is the best way to go about this?
Solution A
The first solution would be to create a new set of PyMC3 models which iterate over participants, still treating them as statistically independent (see the sketch below). Then we could do just one model comparison, and that would give us the answer. However, this is perhaps not a great solution because: a) we have to write a new set of PyMC3 models; b) these models can become very large if we have many participants and many data samples per participant; c) the dimensionality of the posterior grows very large.
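To make Solution A concrete, here is a sketch of what one such all-participants model could look like. A toy normal-mean model stands in for a real candidate model, and the data and shapes are placeholders:

```python
import numpy as np
import pymc3 as pm

n_participants = 5
participant_idx = np.repeat(np.arange(n_participants), 20)  # 20 obs each
y = np.random.randn(len(participant_idx))                   # placeholder data

with pm.Model() as model_1_all_participants:
    # One set of parameters per participant -- independent, no pooling
    mu = pm.Normal("mu", mu=0.0, sigma=10.0, shape=n_participants)
    sigma = pm.HalfNormal("sigma", sigma=5.0, shape=n_participants)
    pm.Normal("y", mu=mu[participant_idx], sigma=sigma[participant_idx], observed=y)
    idata_1 = pm.sample(return_inferencedata=True)

# Repeating this for models 2 and 3 and calling az.compare() once would give
# a single answer, at the cost of one large posterior per candidate model.
```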
Solution B?
So… my thought was to combine the log-likelihood values obtained from each individual fit of each model × participant combination. That is, for a given model we pool the pointwise log-likelihood values across all participants, obtaining one larger set of log-likelihood values per model, and then do a single WAIC/LOO model comparison on that.
- Is this statistically kosher? (I think it is)
- How might we go about doing this? My feeling is that it would require changes to the `pm.compare` function.
It could be that this is a bit niche, but I personally think there would be a number of use cases for this, so it could potentially merit looking into.
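For what it’s worth, here is a rough sketch of what I imagine the pooling could look like, done outside `pm.compare` by concatenating the pointwise log-likelihood arrays from the independent fits. It assumes every fit used the same number of chains and draws, and that the observed variable is named `y` (both assumptions):

```python
import numpy as np
import arviz as az

def pooled_ic(idatas_for_model, obs_name="y", ic="waic"):
    """Pool pointwise log-likelihoods from independent per-participant fits
    of one model and compute a single WAIC/LOO from the pooled set."""
    # Each array has shape (chain, draw, n_observations_for_that_participant)
    log_lik = np.concatenate(
        [idata.log_likelihood[obs_name].values for idata in idatas_for_model],
        axis=-1,  # stack along the observation dimension
    )
    pooled = az.from_dict(log_likelihood={obs_name: log_lik})
    return az.waic(pooled) if ic == "waic" else az.loo(pooled)
```

Because WAIC/LOO are sums of pointwise terms, the pooled elpd should equal the sum of the per-participant elpds; only the standard error would differ, since it is computed from the pooled set of pointwise values.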
Solution C
Or is it possible to simply sum the WAIC/LOO deviances across participants for each model, after having calculated them for each participant/model combination as in Analysis 1?
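If that is kosher (elpd values are sums over independent observations, so adding them across independent participants seems defensible), it could be as simple as the sketch below. The `elpd_loo`/`elpd_waic`/`se` key names are what recent ArviZ versions return and may differ between versions:

```python
import numpy as np
import arviz as az

def summed_ic(idatas_for_model, ic="loo"):
    """Sum per-participant LOO/WAIC results for one model, treating
    participants as independent: elpds add, SEs combine in quadrature."""
    ic_fn = az.loo if ic == "loo" else az.waic
    results = [ic_fn(idata) for idata in idatas_for_model]
    elpd_total = sum(r[f"elpd_{ic}"] for r in results)
    se_total = np.sqrt(sum(r["se"] ** 2 for r in results))
    return elpd_total, se_total
```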