Aha well, that workflow doesn’t sound unreasonable. Have you read this recent treatise from Gelman and co? http://www.stat.columbia.edu/~gelman/research/unpublished/Bayesian_Workflow_article.pdf Contains reams of guidance!
To state the obvious, I’ve found prior predictive checks can really help you quickly ‘debug’ the model architecture as you add more cornices / finials / gargoyles.
Tiny changes to priors, and/or introducing new features with accidentally large ranges (e.g. a covariate you didn’t standardize beforehand), can play havoc with the prior predictive marginals.
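Here’s a minimal sketch of what I mean (hypothetical linear-regression model, made-up feature scale, plain numpy rather than a PPL): draw parameters from the priors, simulate outcomes, and compare the spread you get with a raw vs. standardized covariate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: y ~ Normal(alpha + beta * x, sigma),
# with weakly-informative priors alpha, beta ~ Normal(0, 1), sigma ~ HalfNormal(1).
x_raw = rng.uniform(0, 5000, size=200)          # accidentally huge range
x_std = (x_raw - x_raw.mean()) / x_raw.std()    # standardized version

def prior_predictive(x, n_draws=1000):
    """Simulate outcomes purely from the priors (no data involved)."""
    alpha = rng.normal(0, 1, size=(n_draws, 1))
    beta = rng.normal(0, 1, size=(n_draws, 1))
    sigma = np.abs(rng.normal(0, 1, size=(n_draws, 1)))
    return rng.normal(alpha + beta * x, sigma)   # shape (n_draws, len(x))

y_raw = prior_predictive(x_raw)
y_std = prior_predictive(x_std)

# The raw feature pushes simulated outcomes to absurd scales,
# while the standardized one keeps them in a plausible range.
print(np.std(y_raw), np.std(y_std))
```

Same priors, same model, wildly different implied outcomes, which is exactly the kind of thing the prior predictive check catches before you ever touch the sampler.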