I was mostly trying to say things that might be useful, since “you did great” seemed boring!
You make a good point about interpretability, even in the presence of large data.
I was thinking about how, as your dataset grows, your posterior densities will concentrate pretty tightly around the MLE (I couldn’t find a good reference for this, but would be interested if anyone has one!). On the other hand, adding more parameters will (usually) admit more uncertainty over the parameters.
This is a pretty hand-wavy way of saying that bigger datasets can support more complex, expressive models.
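To make the first point a bit more concrete, here’s a minimal sketch using a conjugate Beta–Bernoulli model, where the posterior is available in closed form (the seed, sample sizes, and “true” parameter 0.3 are just made-up values for illustration): as n grows, the posterior mean lines up with the MLE and the posterior standard deviation keeps shrinking.

```python
# Illustration: posterior concentration around the MLE as the dataset grows.
# Conjugate Beta-Bernoulli model, so the posterior has a closed form.
# (All numbers here are hypothetical, just for illustration.)
import numpy as np

rng = np.random.default_rng(0)
theta_true = 0.3  # assumed "true" success probability

for n in [10, 100, 1_000, 10_000]:
    x = rng.binomial(1, theta_true, size=n)
    k = x.sum()

    # Flat Beta(1, 1) prior -> posterior is Beta(1 + k, 1 + n - k)
    a, b = 1 + k, 1 + n - k
    post_mean = a / (a + b)
    post_sd = np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    mle = k / n

    print(f"n={n:>6}  MLE={mle:.3f}  post mean={post_mean:.3f}  post sd={post_sd:.4f}")
```

Running this, the posterior standard deviation drops roughly like 1/sqrt(n), which is the “collapse onto the MLE” behaviour I had in mind.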