This shouldn’t be the case by default: all default distributions come with transformations, so whatever the sampler proposes gets mapped to a valid value. The problem in your example was not the sampler failing to respect the prior, but the prior itself being wrong for the relevant parameter.
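For instance, here is a minimal sketch (assuming PyMC and a half-normal scale parameter, which is not from your example) of how a bounded parameter ends up being sampled on an unconstrained space:

```python
# Sketch: a default transform maps an unconstrained sampler proposal back to a
# valid value of the constrained parameter. Names here are illustrative only.
import pymc as pm

with pm.Model() as model:
    sigma = pm.HalfNormal("sigma", sigma=1)  # constrained to sigma > 0

# The sampler proposes values for the transformed (log-space) variable,
# so every proposal maps back to a valid positive sigma.
print(model.value_vars)  # typically something like [sigma_log__]
```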
The “go line by line” strategy can also be applied at the model level (a code sketch of the first few stages follows the list):
- Model with only the likelihood and fixed parameters (you can’t run MCMC on it, but you can do prior predictive sampling)
- Model with likelihood and pooled mean prior
- Model with likelihood and pooled mean / sigma prior
- Mean varying with coefficients * covariates
- Coefficients varying by subject (no pooling)
- Coefficients varying by subject hierarchically (partial pooling)
…
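Here is a hedged sketch of what the first few stages could look like in PyMC, using made-up placeholder data (`y_obs`, `x`) and arbitrary priors rather than your actual model:

```python
import numpy as np
import pymc as pm

# Placeholder data, purely for illustration.
rng = np.random.default_rng(0)
y_obs = rng.normal(loc=2.0, scale=1.0, size=100)
x = rng.normal(size=100)

# Stage 1: likelihood only, all parameters fixed.
# No free parameters, so MCMC is pointless, but prior predictive sampling
# still works and checks that the likelihood is wired up correctly.
with pm.Model() as stage1:
    pm.Normal("y", mu=0.0, sigma=1.0, observed=y_obs)
    prior_pred = pm.sample_prior_predictive()

# Stage 2: add a pooled prior on the mean.
with pm.Model() as stage2:
    mu = pm.Normal("mu", 0.0, 10.0)
    pm.Normal("y", mu=mu, sigma=1.0, observed=y_obs)

# Stage 3: also put a prior on sigma.
with pm.Model() as stage3:
    mu = pm.Normal("mu", 0.0, 10.0)
    sigma = pm.HalfNormal("sigma", 5.0)
    pm.Normal("y", mu=mu, sigma=sigma, observed=y_obs)

# Stage 4: mean varies with coefficient * covariate.
with pm.Model() as stage4:
    beta = pm.Normal("beta", 0.0, 1.0)
    sigma = pm.HalfNormal("sigma", 5.0)
    pm.Normal("y", mu=beta * x, sigma=sigma, observed=y_obs)
```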
At each step you have a fully specified and consistent model that is a simpler version of what comes next. This is useful not only for debugging but for model building itself. Once you’re familiar with this you can obviously jump levels, but when you find a problem, being able to go back the slow way can be really helpful.
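And a sketch of the last two stages in the same spirit, again with a hypothetical `subject_idx` array and placeholder data rather than anything from your model:

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_subjects = 8
subject_idx = rng.integers(0, n_subjects, size=100)
x = rng.normal(size=100)
y_obs = rng.normal(size=100)

# Stage 5, no pooling: one independent coefficient per subject.
with pm.Model() as no_pooling:
    beta = pm.Normal("beta", 0.0, 1.0, shape=n_subjects)
    sigma = pm.HalfNormal("sigma", 5.0)
    pm.Normal("y", mu=beta[subject_idx] * x, sigma=sigma, observed=y_obs)

# Stage 6, partial pooling: subject coefficients share a hierarchical prior.
with pm.Model() as partial_pooling:
    mu_beta = pm.Normal("mu_beta", 0.0, 1.0)
    sigma_beta = pm.HalfNormal("sigma_beta", 1.0)
    beta = pm.Normal("beta", mu=mu_beta, sigma=sigma_beta, shape=n_subjects)
    sigma = pm.HalfNormal("sigma", 5.0)
    pm.Normal("y", mu=beta[subject_idx] * x, sigma=sigma, observed=y_obs)
    idata = pm.sample()  # MCMC is now meaningful
```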