Partial pooling for crossed random effect factors: overspecified model?

I’m modelling data from a (simulated) experiment that has two fully crossed random effects factors: subject and item. In other words, I have N subjects, each of which was tested on M items. Items belong to one of two conditions (although that is not relevant here, I think).

If I model the data as follows (pseudocode):

response ~ grand_mean + grand_slope * condition + subject_means[s] + subject_slopes[s] * condition

and then sample using NUTS (default parameters and initialization), I get very strongly negatively correlated posterior samples for grand_mean and subject_means, and for grand_slope and subject_slopes. This makes sense to me, since this model is of course overspecified. The Bayesian solution to this is partial pooling, where I have

subject_means ~ Normal(mu_a, sd_a)
subject_slopes ~ Normal(mu_b, sd_b)
response ~ subject_means[s] + subject_slopes[s] * condition

This works fine: it removes the very high correlations (except of course between subject_means and mu_a, but those are expected and healthy), gives a much cleaner posterior, and samples much faster. So far so good.
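
For concreteness, here is roughly what this subject-only model looks like as a PyMC sketch (the data-generating lines and variable names are just illustrative placeholders, not my actual setup; the same structure applies in any NUTS-based PPL):

```python
import numpy as np
import pymc as pm

# Tiny illustrative dataset (placeholders, not the real experiment)
rng = np.random.default_rng(1)
n_subjects, n_items = 20, 10
subject_idx = np.repeat(np.arange(n_subjects), n_items)  # subject s for each observation
condition = np.tile(np.arange(n_items) % 2, n_subjects)  # 0/1 condition for each observation
y = rng.normal(0.5 + 0.3 * condition, 1.0)               # fake responses

with pm.Model() as subject_only_model:
    # Population-level mean and slope take over the role of grand_mean / grand_slope
    mu_a = pm.Normal("mu_a", 0.0, 5.0)
    mu_b = pm.Normal("mu_b", 0.0, 5.0)
    sd_a = pm.HalfNormal("sd_a", 1.0)
    sd_b = pm.HalfNormal("sd_b", 1.0)

    # Partially pooled subject-level intercepts and slopes
    subject_means = pm.Normal("subject_means", mu_a, sd_a, shape=n_subjects)
    subject_slopes = pm.Normal("subject_slopes", mu_b, sd_b, shape=n_subjects)

    sigma = pm.HalfNormal("sigma", 1.0)
    mu = subject_means[subject_idx] + subject_slopes[subject_idx] * condition
    pm.Normal("response", mu, sigma, observed=y)

    idata = pm.sample()  # NUTS with default settings
```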

However, remember that I have two random factors here, the other being item. If I add the item effect (slope effect only) back in:

subject_means ~ Normal(mu_a, sd_a)
subject_slopes ~ Normal(mu_b, sd_b)
response ~ subject_means[s] + subject_slopes[s] * condition + item_slopes[i] * condition

then I end up with the same problem: poor and slow sampling due to extremely high (negative) correlations between item_slopes and subject_slopes. Now I’m at a loss as to how to incorporate the same partial pooling trick for the item factor. I tried the following:

subject_means ~ Normal(mu_a, sd_a)
subject_slopes ~ Normal(mu_b, sd_b)
item_slopes ~ Normal(mu_c, sd_c)
response ~ subject_means[s] + subject_slopes[s] * condition + item_slopes[i] * condition

but that still leaves the model overspecified. Note that the design here is crossed, not nested: the same items are presented to all subjects. Therefore it doesn't (I think) make sense to have something like item_slopes ~ Normal(mu=subject_slopes) or similar, especially since each item is presented exactly once to each subject.
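
To spell out the remaining overspecification: write $b_s$ for subject_slopes[s] and $c_i$ for item_slopes[i]. For any constant $\delta$,

$$(b_s + \delta) + (c_i - \delta) = b_s + c_i,$$

so the likelihood only constrains the sum of the two slope effects for each observation; how that sum is split between them (and hence between mu_b and mu_c) is informed only by the priors, which is exactly the flat direction the sampler struggles with.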

Does anyone have any ideas on how to proceed here? I don't necessarily need or want partial pooling across the items; the only reason I put it in is to avoid the sampling issues (which worked very well when I added it across the subjects). I guess there might be a way to keep the overspecified model and still get good samples?

OK, the solution here seems to be to simply specify the item-level random effects as zero-mean. In other words, I can't estimate a separate item-level mean slope (the item slopes become deviations around the overall condition effect), but I'm still controlling for the item-level variance. A pretty obvious solution, actually, now that I think about it.

In pseudocode terms, simply changing the third line of the last model did the trick:

item_slopes ~ Normal(0, sd_c)
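
Continuing the illustrative PyMC sketch from above (same placeholder data, plus an item_idx array to index items), the full crossed model with zero-mean item slopes would look something like:

```python
item_idx = np.tile(np.arange(n_items), n_subjects)  # item i for each observation

with pm.Model() as crossed_model:
    mu_a = pm.Normal("mu_a", 0.0, 5.0)
    mu_b = pm.Normal("mu_b", 0.0, 5.0)  # overall condition effect; absorbs the item-level mean
    sd_a = pm.HalfNormal("sd_a", 1.0)
    sd_b = pm.HalfNormal("sd_b", 1.0)
    sd_c = pm.HalfNormal("sd_c", 1.0)

    subject_means = pm.Normal("subject_means", mu_a, sd_a, shape=n_subjects)
    subject_slopes = pm.Normal("subject_slopes", mu_b, sd_b, shape=n_subjects)
    # Zero-mean item effects: only the item-level variance (sd_c) is estimated,
    # which removes the trade-off between the subject- and item-level slope means.
    item_slopes = pm.Normal("item_slopes", 0.0, sd_c, shape=n_items)

    sigma = pm.HalfNormal("sigma", 1.0)
    mu = (subject_means[subject_idx]
          + (subject_slopes[subject_idx] + item_slopes[item_idx]) * condition)
    pm.Normal("response", mu, sigma, observed=y)

    idata = pm.sample()
```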