Yes! You are correct about the various ingredients being included (or not). So I guess having the likelihood of the observed data stored will allow some computational savings because you don’t have to (re)calculate them. The other way to get some (or lots of) computational savings would be to vectorize the loops you are currently using. You can collapse over the chains by flattening the samples into giant 1D vectors (e.g. using arviz.extract()). Then you should be able to do this line (not exactly this line, but something very similar) once:
```python
ll_x[c, d, s] = stats.norm(xs[c, d, s], x_sigma[c, d, s]).logpdf(x_subj[s]).sum()
```
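For concreteness, here is a rough sketch of what the fully vectorized version could look like. Names like `idata`, `xs`, `x_sigma`, and `x_subj` are placeholders for whatever your objects are actually called, and you may need to adjust the axis handling to the shapes in your model:

```python
import arviz as az
import numpy as np
from scipy import stats

# Flatten (chain, draw) into a single "sample" dimension.
post = az.extract(idata)  # idata: your InferenceData object

# After extract(), a (chain, draw, subject) variable typically ends up
# with shape (subject, sample) -- double-check this for your model.
xs_flat = post["xs"].values
x_sigma_flat = post["x_sigma"].values

# One vectorized call instead of a triple loop: broadcast the observed
# per-subject data against every posterior sample, then sum over subjects.
ll_x = stats.norm(xs_flat, x_sigma_flat).logpdf(x_subj[:, None]).sum(axis=0)
# ll_x now holds one log-likelihood value per flattened posterior sample.
```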
Or if you don’t want to deal with the indexing over subjects, you can loop over subjects and perform a version of the above line once per subject. That may be fast enough for you.
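That per-subject variant might look roughly like this (again just a sketch, reusing the placeholder arrays from above):

```python
# One vectorized logpdf call per subject; looping over subjects only is
# much cheaper than also looping over every chain and draw.
n_subj, n_samples = xs_flat.shape
ll_x = np.zeros(n_samples)
for s in range(n_subj):
    ll_x += stats.norm(xs_flat[s], x_sigma_flat[s]).logpdf(x_subj[s])
```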