Models with different pooling give very different results

Both models sometimes emit warnings like `The acceptance probability in chain 4 does not match the target. It is 0.903201352786, but should be close to 0.8.` for some chains, with actual acceptance probabilities around 0.85–0.95. From reading similar questions here, my understanding is that this isn't considered a big deal.
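Since the actual acceptance probabilities sit above the 0.8 target rather than below it, raising `target_accept` should make the warning go away. A minimal sketch of how I'd do that and check the per-chain acceptance rates (assuming PyMC3, given `sample_ppc`; `model` is a placeholder for either model):

```python
import pymc3 as pm

with model:  # placeholder for either of the two models
    # A higher target acceptance rate forces smaller step sizes,
    # which usually silences the warning (at some cost in speed).
    step = pm.NUTS(target_accept=0.9)
    trace = pm.sample(2000, tune=2000, step=step)

# Per-chain mean acceptance probabilities, to see how far off target each chain is
accept = trace.get_sampler_stats('mean_tree_accept', combine=False)
print([a.mean() for a in accept])
```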

I plotted `y_true` vs the observed `y` (using `pm.Deterministic` for `y_true` and reading its values directly from the trace rather than via `sample_ppc`), and the plots look like a straight line plus noise. Plots of `y_true` vs the residuals `y - y_true` also show little visible structure.
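For concreteness, the plots were produced roughly like this (a sketch: `trace` and `y_obs` are placeholder names for the fitted trace and the observed data):

```python
import matplotlib.pyplot as plt

# `y_true` was recorded with pm.Deterministic, so its samples live in the trace;
# averaging over draws gives one point estimate per observation.
y_true_hat = trace['y_true'].mean(axis=0)
resid = y_obs - y_true_hat  # y_obs holds the observed y values

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].scatter(y_true_hat, y_obs, s=5, alpha=0.5)
axes[0].set_xlabel('y_true (posterior mean)')
axes[0].set_ylabel('observed y')
axes[1].scatter(y_true_hat, resid, s=5, alpha=0.5)
axes[1].set_xlabel('y_true (posterior mean)')
axes[1].set_ylabel('residual y - y_true')
plt.tight_layout()
plt.show()
```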

[plot: `y_true` vs `y`, 1st model]
[plot: `y_true` vs `y`, 2nd model]
[plot: `y_true`, 1st model vs 2nd model]

The means (and medians) of `a` and `b` are reasonably close for the two models, certainly much closer than those of `p`. Means for the 1st model (complete pooling) vs the 2nd, extracted roughly as shown below:

- `a`: -0.03 vs 0.03
- `b`: [0.025, 0.4] vs [0.005, 0.41]
- `p`: [0.28, 0.23] vs [0.52, 0.35]
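A sketch of how those summaries could be pulled side by side (`trace1`/`trace2` are placeholder names for the two fits; depending on the PyMC3 version the keyword may be `var_names` instead of `varnames`):

```python
import pymc3 as pm

# Compare posterior summaries of the shared parameters across the two fits
summary1 = pm.summary(trace1, varnames=['a', 'b', 'p'])
summary2 = pm.summary(trace2, varnames=['a', 'b', 'p'])
print(summary1[['mean', 'sd']])
print(summary2[['mean', 'sd']])
```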

I still have no idea what the issue might be or how to debug it further :frowning: