Model Comparison: How to constrain a model to positive mu-differences?

I tried to resolve the broadcasting issues by repeating the intercept_user and slope_user parameters.
block_id_idx values 0, 1, 2 belong to user 0, values 3, 4, 5 to user 1, and so on, so intercept_user should be repeated along the pattern 0, 0, 0, 1, 1, 1, 2, 2, 2, etc.
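To make that index pattern concrete, here is a minimal NumPy sketch (user_per_block is a hypothetical name; I assume the blocks are ordered user-by-user as described):

import numpy as np

n_users, n_blocks = 30, 3
# Each user contributes n_blocks consecutive blocks, so repeating the
# user ids yields 0, 0, 0, 1, 1, 1, 2, 2, 2, ...
user_per_block = np.repeat(np.arange(n_users), n_blocks)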

import pymc3 as pm
import theano.tensor as tt

with pm.Model(coords=coords) as by_block_model:
    user_idx = pm.Data('user_idx', user_indices, dims='obs_id')
    block_id_idx = pm.Data('block_id_idx', block_id_indices, dims='obs_id')
    block_idx = pm.Data('block_idx', block_indices, dims='obs_id')
    direction_idx = pm.Data('direction_idx', direction_indices, dims='obs_id')
    experience = pm.Data('experience', gaming_exp, dims='User')
    rating = pm.Data('rating', ratings, dims='Block_ID')  # Block only has indices 0, 1, 2, but ratings has length n_users * n_blocks. Try Block_ID?
    
    # Hyperpriors:
    global_intercept = pm.Gamma('global_intercept', mu=4.0, sigma=2.0)
    global_intercept_sd = pm.Exponential('global_intercept_sd', 1.0)
    global_slope = pm.Normal('global_slope', mu=1.5, sigma=1.0)  # We expect the slope to be positive most of the time.
    global_slope_sd = pm.Exponential('global_slope_sd', 0.5)

    # Users.
    # Varying intercepts per user, repeated so that there is one entry
    # per Block_ID (user 0's value three times, then user 1's, ...):
    intercept_user = tt.repeat(
        pm.Gamma('intercept_user', mu=global_intercept, sigma=global_intercept_sd, dims='User'),
        n_blocks)
    intercept_user_sd = tt.repeat(
        pm.Exponential('intercept_user_sd', 1.0, dims='User'), n_blocks)
    # Varying slopes per user, repeated the same way:
    slope_user = tt.repeat(
        pm.Normal('slope_user', mu=global_slope, sigma=global_slope_sd, dims='User'),
        n_blocks)
    slope_user_sd = tt.repeat(
        pm.Exponential('slope_user_sd', 1.0, dims='User'), n_blocks)

    # Blocks.
    # Varying intercepts per block:
    intercept_block = pm.Gamma('intercept_block', mu=intercept_user, sigma=intercept_user_sd, dims='Block_ID')
    # Varying slopes per block:
    slope_block = pm.Normal('slope_block', mu=slope_user, sigma=slope_user_sd, dims='Block_ID')

    # Expected standard deviation per block (note: slope_block is Normal,
    # so theta is not guaranteed to stay positive):
    theta = intercept_block[block_idx] + slope_block[block_idx] * direction_idx

    projection = pm.Normal('projection', 0.0, sigma=theta, observed=df['projection'], dims='obs_id')


As you can see, the model gets created now. There are a few Data containers floating around. I'm using Block_ID (0…89) to create a slope and an intercept for every single block in the data (30 users * block 1, 30 * block 2, 30 * block 3). But since I want to estimate the intercept and slope across users for each block (0, 1, 2), I used block_idx (0, 1, or 2 for each data point) in the linear regression for the expected standard deviation of the data: theta = intercept_block[block_idx] + slope_block[block_idx] * direction_idx.

When using block_idx (0, 1, or 2) instead of block_id_idx (0…89), sampling time increases roughly tenfold. And regardless of which one I choose, I get every sampling problem you can imagine: thousands of divergences, r-hat values above 1.05, acceptance-probability warnings, and fewer than 200 effective samples.
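For reference, I invoke the sampler roughly like this (the exact settings of my last run may have differed, so treat these as approximate):

with by_block_model:
    trace = pm.sample(2000, tune=2000, target_accept=0.9)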
If I use block_id_idx in the linear expression, is there a way to sample from the posterior using block_idx instead? That is, can I group every first, second, and third block and get their group estimates?
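To illustrate what I mean, here is a hypothetical post-processing sketch. It does nothing model-side, just reshapes the trace, and it assumes the user-major ordering of Block_ID from above:

# slope_block has one entry per Block_ID, i.e. shape (n_draws, 90).
slope_samples = trace['slope_block']
# Reshape to (n_draws, n_users, n_blocks) and average over users to get
# one group estimate per block (0, 1, 2) and draw.
slope_by_block = slope_samples.reshape(-1, n_users, n_blocks)
group_means = slope_by_block.mean(axis=1)  # shape (n_draws, 3)

But that feels like a workaround after the fact rather than building the grouping into the model itself.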

How can I reparameterize the model? The between-subjects factor condition isn't even in there yet.
I just want estimates of the differences between the standard deviations for blocks 1, 2, and 3, which every user completes (within-subjects), while users are assigned to different conditions (between-subjects). As a bonus, it would be nice to know whether experience (user-level) or the individual block ratings have an influence. That doesn't sound too hard.
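Structurally, this is roughly the predictor I have in mind, written as a sketch (condition_idx, block_effect, condition_effect, beta_exp, and beta_rating are hypothetical names; the log link is just one way to keep the standard deviation positive):

# Hypothetical: condition_idx holds one condition id per user.
log_theta = (block_effect[block_idx]                      # within: blocks 0, 1, 2
             + condition_effect[condition_idx[user_idx]]  # between: condition per user
             + beta_exp * experience[user_idx]            # user-level covariate
             + beta_rating * rating[block_id_idx])        # block-level covariate
theta = tt.exp(log_theta)  # exp keeps theta positive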

A common answer I often read in such topics is that everything should be straightforward to do in PyMC3. Unfortunately I don't have a PhD in Bayesian statistics and 15 years of computer science under my belt, so I appreciate any help! To me it's anything but straightforward. I'm on a tight schedule, desperate, and on the verge of going back to frequentist statistics. :slightly_frowning_face: The devil you know…