Thanks for the suggestion! That makes a lot of sense. I updated the coefficient priors to use sigmas of 10 and replaced the HalfCauchy with an Exponential (lambda=0.5). The model actually still exhibited divergences with these parameters, but they disappeared when I pushed target_accept even further, to 0.999. Interestingly, the result is essentially the same.
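For reference, here is roughly what the updated model looks like. This is just a sketch with placeholder data and variable names (my real X and y come from elsewhere), but the prior and sampler changes are the ones I described:

```python
import numpy as np
import pymc as pm

# Placeholder data just to make the sketch runnable; the real X/y are my own dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = rng.normal(size=100)

with pm.Model() as model:
    # Coefficient priors widened to sigma=10, as suggested
    intercept = pm.Normal("intercept", mu=0, sigma=10)
    beta = pm.Normal("beta", mu=0, sigma=10, shape=X.shape[1])

    # Exponential(lam=0.5) in place of the old HalfCauchy on the noise scale
    sigma = pm.Exponential("sigma", lam=0.5)

    mu = intercept + pm.math.dot(X, beta)
    pm.Normal("y", mu=mu, sigma=sigma, observed=y)

    # Divergences only went away after pushing target_accept up to 0.999
    idata = pm.sample(target_accept=0.999)
```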
This makes me think that the issue isn't so much with the parameterization as with something in the data itself. Additional suggestions are still very welcome…