I understand your approach now. I was a bit wary of a double transformation, but mathematically your point makes sense: standardising after taking the log should be equivalent to a lognormal likelihood, since standardisation is just an affine transform of the log-RTs. However, I'm still puzzled by the difference in results. When I run the model with your suggested standardised log-transformed RTs, the results look far more sensible than with the lognormal likelihood (maybe there's some nuance I'm failing to spot). Here's what I obtained, including the residual plots you suggested:
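For what it's worth, here's a minimal NumPy sketch of the double transform we're discussing (the lognormal parameters and array names are purely illustrative, not your data). The point is that standardisation is affine, so a Normal likelihood on the standardised log-RTs corresponds to a lognormal on the raw RTs with rescaled location and scale:

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative RTs drawn from a lognormal (arbitrary parameters)
rt = rng.lognormal(mean=-0.5, sigma=0.4, size=1000)

log_rt = np.log(rt)
# Standardised log-RTs: an affine map of log(rt), so a Normal
# likelihood here implies a lognormal likelihood on rt, with mu and
# sigma rescaled by log_rt.mean() and log_rt.std()
z = (log_rt - log_rt.mean()) / log_rt.std()

print(z.mean(), z.std())  # ~0 and ~1 by construction
```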
Posterior predictive checks (PPCs):
Regression (one condition):
Residuals:
This seems like the way to go to me. The residuals show a one-sided long tail, but a StudentT likelihood may account for that properly, as the PPCs suggest. The regression fits much better as well.
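As a quick sanity check on the heavy-tail point, one can compare maximum-likelihood Student-t and Normal fits on a right-skewed sample with scipy.stats (the `resid` array below is simulated for illustration, not the actual residuals):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical residuals with a one-sided long tail (not the real data)
resid = np.concatenate([rng.normal(0, 1, 950),
                        rng.exponential(3, 50) + 2])

# Maximum-likelihood fits for both candidate likelihoods
t_params = stats.t.fit(resid)      # (df, loc, scale)
n_params = stats.norm.fit(resid)   # (loc, scale)

ll_t = stats.t.logpdf(resid, *t_params).sum()
ll_n = stats.norm.logpdf(resid, *n_params).sum()
print(f"Student-t log-lik: {ll_t:.1f}, Normal log-lik: {ll_n:.1f}")
# The heavier-tailed Student-t should score higher on data like this
```

Since the Student-t nests the Normal as df grows, it can only do better on tailed data like this; a formal comparison on the real model would of course go through LOO/WAIC rather than raw likelihoods.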
Thank you very much (=


