My impression is that the underlying computation won’t change much between the two cases: either way, the model’s log posterior density depends on `mu` and `y_true` only through their difference, which is computed the same way under the hood.
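To make that concrete, here is a minimal sketch (plain Python, no PyMC3 needed) of why the two formulations coincide: the Normal log density depends on the location and the evaluation point only through their squared difference, so swapping the roles of “observed value” and “location parameter” gives exactly the same number.

```python
import math

def normal_logpdf(x, mu, sigma):
    # Log density of Normal(mu, sigma) evaluated at x.
    # Note it involves mu and x only through (x - mu)**2.
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

# y observed with mu as a parameter...
a = normal_logpdf(1.7, 0.4, 0.5)
# ...versus the roles swapped: identical log-density contribution.
b = normal_logpdf(0.4, 1.7, 0.5)
assert abs(a - b) < 1e-12
```

So whether you write “noisy data around a latent truth” or “a prior on the truth centered at the measurement”, the same term enters the log posterior.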
By having uncertain measurements, you’re really saying you have a prior belief about what the true values are, so including that prior distribution in the model is the same thing as saying you have noisily observed data. The model “sees” all the information you have: it’s been supplied both `y` and `y_std`. I wrote it the way I did largely out of personal preference. I find it easier to follow the flow of the code if all the information about the noisy measurements is contained in the single line `y_true = pm.Normal('y_true', y, y_std, shape=n)`, and then I can forget about it when designing the rest of the model.
I will note that this shows how PyMC3 is useful not just for sampling from complicated posterior distributions given observed data (the usual case) but also for sampling from prior distributions with complicated structure. In the latter case there’s no prior-to-posterior update, yet the calculations can still be done and we can draw samples.