11.16 Rethinking Code

Hi Thad,
Thanks for getting in touch; it’s nice to see this material is useful!

As Junpeng perfectly summed it up, here we're simulating redoing the experiment with the same 7 chimpanzees, across the same 4 treatments – only this time, each chimp gets each treatment only once, not multiple times as in the original data that the model was fit to.

In essence, we're simulating the data you see in Code 11.15: if we tested the 7 chimps on the 4 treatments again, what proportions of pulled_left would we expect to see, given our model and assumptions? We then compare those predicted proportions to the empirical proportions from Code 11.15 to get a sense of the model's performance – that's what's meant by posterior predictive checks.
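
If it helps, here's a minimal sketch of that comparison in plain NumPy/pandas. All names here are just for illustration: I'm assuming `a_samples` (shape n_draws × 7) and `b_samples` (shape n_draws × 4) hold your posterior draws for the actor and treatment effects of the logit model, and `d` is the chimpanzees DataFrame with 0-indexed `actor` and `treatment` columns:

```python
import numpy as np

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

# Predicted probability of pulling left for every (actor, treatment) cell,
# one value per posterior draw -> shape (n_draws, 7, 4).
# a_samples and b_samples are assumed posterior draws, as noted above.
p_pred = inv_logit(a_samples[:, :, None] + b_samples[:, None, :])

# Posterior mean and 89% interval per cell: the retrodicted proportions.
p_mean = p_pred.mean(axis=0)
p_low, p_high = np.percentile(p_pred, [5.5, 94.5], axis=0)

# Empirical proportions of pulled_left per (actor, treatment) cell,
# to compare against the retrodictions.
p_emp = (
    d.groupby(["actor", "treatment"])["pulled_left"]
     .mean()
     .unstack()  # rows: actors, columns: treatments
)
```

Plotting `p_emp` next to `p_mean` with its intervals gives you the kind of check we're talking about: where the empirical proportions fall outside the predicted intervals, the model is struggling to retrodict its own training data.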
Actually, I even prefer the term posterior retrodictive checks, since we're checking how well the model can reproduce the very data it was fed, with uncertainty estimates around those generated data because we're Bayesians :wink:

Hope this helps, and enjoy the notebooks :vulcan_salute:
