DIC, WAIC, WBIC on regression tasks

Hi Marco,

  1. Both DIC and WAIC are related to out-of-sample/generalization prediction. I think this is a generally good metric for evaluating models, even when you care more about the parameters than about the predictions. The general idea is that if your model and parameters are a good description of the underlying phenomenon or process you are studying, then they should be able to predict unobserved (but observable) future data.

  2. If you get a warning, you have a couple of options (besides ignoring it): use another method, like LOO instead of WAIC (or vice versa), use K-fold cross-validation, or change your model to one that is more robust. Of course, to compare your models you can also add posterior predictive checks to the mix (although that is an in-sample analysis) and background information.
    A little bit more about the warnings: they are based on empirical observations. It is my opinion that we need more work on this, but at this point it is the best thing we have. I have been thinking about adding tools to help diagnose, or at least visualize, the problematic points; thanks for reminding me about this! Notice that when using DIC/BPIC you always get a nice result without any warnings (even if the assumptions are not met), and that can lead to overconfidence!
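    To make this concrete, here is a small NumPy sketch with entirely made-up data (a toy Normal-mean model and simulated "posterior" draws, not output from any real fit). It computes WAIC from the pointwise log-likelihood matrix and flags observations whose pointwise variance term exceeds 0.4, which is the empirical heuristic behind the kind of warning you are seeing:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Made-up example: y_i ~ Normal(mu, 1), with simulated "posterior" draws of mu.
    y = rng.normal(0.5, 1.0, size=20)            # observed data (simulated)
    mu_draws = rng.normal(0.5, 0.2, size=1000)   # stand-in for MCMC samples of mu

    # Pointwise log-likelihood matrix: rows = posterior draws, cols = observations.
    log_lik = -0.5 * np.log(2 * np.pi) - 0.5 * (y[None, :] - mu_draws[:, None]) ** 2

    # lppd: log pointwise predictive density (average the *density* over draws).
    lppd = np.sum(np.log(np.mean(np.exp(log_lik), axis=0)))

    # Pointwise effective-parameter terms: posterior variance of the log-density.
    pointwise_p = np.var(log_lik, axis=0, ddof=1)
    p_waic = np.sum(pointwise_p)
    waic = -2 * (lppd - p_waic)   # deviance scale

    # Heuristic behind the warning: pointwise terms above ~0.4 mean WAIC may be
    # unreliable for those observations, so inspect them or switch method
    # (LOO, K-fold cross-validation).
    flagged = np.flatnonzero(pointwise_p > 0.4)
    print(f"p_waic={p_waic:.2f}  WAIC={waic:.2f}  problematic points: {flagged}")
    ```

    In practice you would get `log_lik` from your sampler's trace rather than simulating it, but the flagged indices are exactly the "problematic points" a diagnostic plot would highlight.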

  3. DIC assumes the posterior is Gaussian; the further you move away from this assumption, the more misleading the values of DIC will be. Someone correct me if I am wrong, but it is my understanding that hierarchical models tend to have non-Gaussian posteriors. Also, WAIC is more Bayesian, because you are averaging over the posterior distribution.
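To see the plug-in-vs-averaging difference concretely, here is a toy sketch (again a made-up Normal-mean model, not any particular real analysis): DIC evaluates the likelihood at a single point estimate, the posterior mean, while WAIC averages pointwise densities over all posterior draws. For a well-behaved unimodal posterior like this one, both effective-parameter counts come out near 1 and the two criteria roughly agree; DIC's plug-in step is exactly where a non-Gaussian (e.g. multimodal) posterior would bite, because the posterior mean may then sit in a low-density region.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up example: y_i ~ Normal(mu, 1), with simulated "posterior" draws of mu.
y = rng.normal(0.0, 1.0, size=30)
mu_draws = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), size=2000)

def total_log_lik(mu):
    # log p(y | mu), summed over observations, for a single mu value
    return np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * (y - mu) ** 2)

# DIC: plug the posterior *mean* into the likelihood (a point estimate),
# which is only well justified when the posterior is roughly Gaussian.
log_lik_at_mean = total_log_lik(mu_draws.mean())
mean_log_lik = np.mean([total_log_lik(m) for m in mu_draws])
p_dic = 2 * (log_lik_at_mean - mean_log_lik)   # effective number of parameters
dic = -2 * log_lik_at_mean + 2 * p_dic

# WAIC: average the pointwise *densities* over the whole posterior instead.
log_lik = -0.5 * np.log(2 * np.pi) - 0.5 * (y[None, :] - mu_draws[:, None]) ** 2
lppd = np.sum(np.log(np.mean(np.exp(log_lik), axis=0)))
p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
waic = -2 * (lppd - p_waic)

print(f"DIC={dic:.2f} (p_dic={p_dic:.2f})  WAIC={waic:.2f} (p_waic={p_waic:.2f})")
```

With a one-parameter model and a near-Gaussian posterior, both p_dic and p_waic land close to 1; the interesting cases are the hierarchical ones, where they diverge.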
