Model Comparison: How to constrain a model to positive mu-differences?

Ah I see, the observed data is a 2-D array (n_subject * n_trials). In that case, my suggestion would be to flatten the observed data and use indexing to construct the linear predictors; it handles things like missing trials better - something like:

with pm.Model() as projection_model:
    projection_model.name = "Main Effect Projection"  # label used later for model comparison
    # better to also model mu (per-subject means); the priors here are only
    # placeholders - pick values that make sense for your data
    mu_ortho = pm.Normal("mu_ortho", mu=0.0, sigma=10.0, shape=n_users)
    mu_parallel = pm.Normal("mu_parallel", mu=0.0, sigma=10.0, shape=n_users)
    # per-subject noise in each condition
    sigma_ortho = pm.Gamma("sigma_ortho", mu=3.34, sigma=1.44, shape=n_users)
    sigma_parallel = pm.Gamma("sigma_parallel", mu=7.0, sigma=3.17, shape=n_users)

    # parallel_idx / orthogonal_idx map each flattened trial to its subject
    obs_parallel = pm.Normal("obs_parallel",
                             mu=mu_parallel[parallel_idx],
                             sigma=sigma_parallel[parallel_idx],
                             observed=parallel_data_flatten)
    obs_orthogonal = pm.Normal("obs_orthogonal",
                               mu=mu_ortho[orthogonal_idx],
                               sigma=sigma_ortho[orthogonal_idx],
                               observed=orthogonal_data_flatten)
    # posterior contrast of the two conditions' noise parameters
    sigma_diff = pm.Deterministic('sigma_diff', sigma_parallel - sigma_ortho)
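For reference, a minimal sketch of how the flattened data and subject-index arrays could be built from the (n_subject * n_trials) array - the array names and the use of NaN to mark missing trials are my assumptions:

import numpy as np

# parallel_data: (n_users, n_trials) array, with NaN marking missing trials (assumption)
subject_of_trial = np.repeat(np.arange(n_users), parallel_data.shape[1])
flat = parallel_data.flatten()
keep = ~np.isnan(flat)                     # drop missing trials
parallel_data_flatten = flat[keep]
parallel_idx = subject_of_trial[keep]      # subject index of each remaining trial
# same recipe for orthogonal_data_flatten / orthogonal_idx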

Is this reaction time data? Maybe try a log-normal likelihood? I am not sure what the recommended best practice is nowadays either.
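As a sketch, only the likelihood lines inside the model above would change (note that mu and sigma are then on the log scale, so the priors would need rethinking):

    # inside the same model block - log-normal likelihood instead of Normal
    obs_parallel = pm.Lognormal("obs_parallel",
                                mu=mu_parallel[parallel_idx],
                                sigma=sigma_parallel[parallel_idx],
                                observed=parallel_data_flatten)
    # ... and likewise for obs_orthogonal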

Well, the language of hypothesis testing has always been problematic to me, as it usually doesn't answer what we really want to know - is this conclusion going to be useful (for making a decision, for advancing our understanding, etc.)? Since a conclusion is only useful when you have a non-tiny effect, looking at things like effect size is much more natural. In a linear model (like an ANOVA), all the hypotheses you want to test are ultimately comparisons of parameters/coefficients, so instead of fitting multiple models and comparing marginal likelihoods or Bayes factors, I usually recommend fitting the most comprehensive model with all the information (in this case that would mean fitting both conditions and coding for subject, trial and block), and looking at the distribution of the posterior contrasts (an example: https://docs.pymc.io/notebooks/BEST.html)
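With the model above, inspecting a posterior contrast could look something like this (sampler settings are arbitrary; if the model name gets prefixed onto the variable names, use the full name in var_names):

import arviz as az

with projection_model:
    trace = pm.sample(2000, tune=1000)
    # posterior of the contrast - effect size and its uncertainty in one picture
    az.plot_posterior(trace, var_names=["sigma_diff"], ref_val=0)

# posterior probability that the parallel condition is noisier, per subject
print((trace["sigma_diff"] > 0).mean(axis=0))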

You need to code the block in your model as well - for example, you can have an index of [0, 1, 2, …, 5] to indicate parallel_block_1, parallel_block_2, …, orthogonal_block_3, and index into a shape=6 vector of parameters, as in the sketch below.
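A rough sketch of that coding (the block_idx_* arrays are assumed to hold the block number 0-2 of each flattened trial, the priors are placeholders, and the per-subject structure from the earlier model is left out for brevity):

cell_idx_parallel = block_idx_parallel           # parallel blocks   -> cells 0..2
cell_idx_orthogonal = 3 + block_idx_orthogonal   # orthogonal blocks -> cells 3..5

with pm.Model() as block_model:
    # one mean per condition-by-block cell
    mu_cell = pm.Normal("mu_cell", mu=0.0, sigma=10.0, shape=6)
    sigma = pm.HalfNormal("sigma", sigma=5.0)
    pm.Normal("obs_parallel", mu=mu_cell[cell_idx_parallel], sigma=sigma,
              observed=parallel_data_flatten)
    pm.Normal("obs_orthogonal", mu=mu_cell[cell_idx_orthogonal], sigma=sigma,
              observed=orthogonal_data_flatten)
    # e.g. the block-averaged condition contrast
    pm.Deterministic("mu_cond_diff", mu_cell[:3].mean() - mu_cell[3:].mean())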
