Sorry for my misunderstanding and many thanks for the clarification. However, I still don't quite see how sampling from reduced versions of the data would help me understand the connection between the expectation/predicted mean and the likelihood distribution/noise function. Personally, I prefer to define a model as soon as I design an experimental task, since the model is tightly linked to the question I'm interested in answering. If that model fails, a different model can be tried, on the basis that it can still answer the same question.

For instance, if I sample from one participant and one condition, wouldn't that imply an inherently different data structure compared to the data of fifty participants and two conditions? I've put a small sketch of what I mean below. If the structure is different, then it is not the structure required to answer the question at hand. But I may be wrong about this; a rebuttal on this point would be great to help me think more deeply about the topic. Thanks again.
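
To make the contrast concrete, here is a minimal simulation sketch (Python/NumPy; the model forms, parameter names, and values are my own illustration, not anything from this thread). The point is that the likelihood/noise function can stay the same (Normal with scale `sigma`) while the expectation changes with the data structure: a single mean suffices for one participant and one condition, whereas fifty participants and two conditions call for participant-level offsets and a condition effect.

```python
import numpy as np

rng = np.random.default_rng(1)

# Case 1: one participant, one condition.
# The expectation is a single constant; the noise function is Normal(., sigma):
#   y ~ Normal(mu, sigma)
mu, sigma = 0.5, 0.2
y_single = rng.normal(mu, sigma, size=100)

# Case 2: fifty participants, two conditions.
# Same Normal noise function, but the expectation now varies per observation,
# which implies a hierarchical structure (illustrative values):
#   y_ij ~ Normal(mu + b_i + beta * condition_j, sigma)
n_participants, beta, tau = 50, 0.3, 0.15
b = rng.normal(0.0, tau, size=n_participants)           # participant offsets
participant = np.repeat(np.arange(n_participants), 2)   # each seen twice
condition = np.tile([0, 1], n_participants)             # two conditions each
y_full = rng.normal(mu + b[participant] + beta * condition, sigma)

# Both datasets have 100 observations, but only the second carries the
# grouping structure the two-condition question actually depends on.
print(y_single.shape, y_full.shape)
```

So my worry is that fitting the reduced data would only exercise the first expectation, not the structure my question needs, even though the noise function is identical in both cases.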