Low effective sample size for likelihood's sigma

Hi all,

I have a model of an experiment in which participants carry out hand movements. They see their hand move on a screen, which allows us to add a delay to the visual feedback on some trials. The participant’s task is to infer whether there was a delay or not.
The idea behind the model is that on each trial participants generate motor commands that allow them to predict the onset of their visual (expected_v) and proprioceptive (expected_p) consequences. A GLM then decides, trial by trial, whether there was a delay or not.
This means that I have three observed variables (response, sensory_v, sensory_p). The sigmas I have problems with are those of the likelihoods for sensory_v and sensory_p.
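To make that generative story concrete, here is a quick forward simulation in NumPy. The Beta parameters are roughly the means of the priors below (57/16.97 ≈ 3.36, 472.89/21.75 ≈ 21.7); the noise sd of .05 and the GLM weights are made up for illustration, not fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100

def gamma_ms(mu, sigma, size=None):
    # Draw from a Gamma given mean and sd: shape = (mu/sigma)^2, scale = sigma^2/mu
    return rng.gamma(shape=(mu / sigma) ** 2, scale=sigma**2 / mu, size=size)

# Motor command: Beta-distributed initiation time, scaled to the 4 s response window
M = 4 * rng.beta(3.36, 21.7, size=n_trials)

# Proprioceptive onset: motor command plus a small proprioceptive delay, observed with noise
eta_p = 0.15
sensory_p = gamma_ms(M + eta_p, 0.05)

# Visual onset: additionally shifted by the machine delay delta_t and an extra visual lag
delta_t = rng.uniform(0, 1.5, size=n_trials)
eta_v = eta_p + 0.2
sensory_v = gamma_ms(M + delta_t + eta_v, 0.05)

# Decision stage: a logistic GLM on the delay drives the yes/no response
delta_vp, gamma_vp = 4.0, -2.0  # illustrative weights
p_yes = 1 / (1 + np.exp(-(delta_t * delta_vp + gamma_vp)))
response = rng.binomial(1, p_yes)
```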

In code it looks like this:

coords = {"trial_id": np.arange(sensory_v.size)}

with pm.Model(coords=coords) as simplified_model:
    
    """The parameters that build the Motor command distribution are drawn from gamma distributions"""
    prior_alpha = pm.Bound(pm.Gamma, upper=10)("prior_alpha", 57, 16.97)  # bounded for better sampling
    prior_beta = pm.Gamma("prior_beta", 472.89, 21.75)
    M_1 = pm.Beta("M_1", prior_alpha, prior_beta, dims="trial_id") 
    # the timeframe in which they can initiate a movement is limited to 4 seconds, so I multiply M by 4
    M = pm.Deterministic("M", M_1 * 4, dims="trial_id")
    
    """Predicted proprioceptive consequences are a function of Motor command, expected sensory delay and noise"""
    eta_p = pm.Gamma("eta_p", mu=.15, sigma=.001) 
    expected_p = pm.Deterministic("expected_p", M + eta_p)
    sigma_p = pm.Gamma("sigma_p", mu=.4, sigma=.2)
    observed_p = pm.Gamma("observed_p", mu=expected_p, sigma=sigma_p, observed=sensory_p)

    """Same for visual consequences, just expected machine delay is added"""
    #eta_v = pm.Gamma("eta_v", mu= 0.18, sigma= 0.001)
    eta_add = pm.Gamma("eta_add", mu=.2, sigma=.05)
    eta_v = pm.Deterministic("eta_v", eta_p + eta_add)
    delta_t = pm.Uniform("delta_t", lower=0, upper=1.5, dims="trial_id")  # prior for the delay added by the machine
    expected_v = pm.Deterministic("expected_v", M + delta_t + eta_v)
    sigma_v = pm.Gamma("sigma_v", mu=.4, sigma=.2)
    observed_v = pm.Gamma("observed_v", mu=expected_v, sigma=sigma_v, observed=sensory_v)
    
    # These vp parameters will be used for calculating the logits and then the probability for the response
    delta_vp = pm.Normal("delta_vp",0,5)
    gamma_vp = pm.Normal("gamma_vp", mu=0, sigma=2) 
    logit = delta_t * delta_vp + gamma_vp 
    observed_response = pm.Bernoulli("response", logit_p=logit, observed=response)

    trace = pm.sample(tune=2000, draws=2000, target_accept=.9, random_seed=SEED)
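As a side note on the priors above: PyMC parameterizes the Gamma by (mu, sigma) with alpha = mu²/sigma² and beta = mu/sigma², so Gamma(mu=.4, sigma=.2) is a Gamma(alpha=4, beta=10). A quick SciPy sanity check:

```python
from scipy import stats

mu, sigma = 0.4, 0.2        # the prior used for sigma_p and sigma_v
alpha = mu**2 / sigma**2    # shape, = 4.0
beta = mu / sigma**2        # rate, = 10.0

prior = stats.gamma(a=alpha, scale=1 / beta)  # SciPy takes scale = 1/rate
print(alpha, beta, prior.mean(), prior.std())
```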

Observed data is in the form:
sensory_v: array([1.034, 1.524, 0.549, 1.359, 1.42 , 0.776, 1.144, 1.038, 1.292,
                  0.884, 0.839…])
sensory_p: array([0.784, 1.19 , 0.549, 0.942, 1.17 , 0.693, 0.727, 0.871, 0.958,
                  0.717, 0.505…])
response:  array([1., 1., 0., 1., 1., 0., 1., 1., 1., 0…])
Here you can see that in the third trial sensory_v was the same as sensory_p and the participant correctly answered that there was no delay.
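To spell that out, here are the visual−proprioceptive differences over the ten fully shown trials (values copied from the arrays above):

```python
import numpy as np

sensory_v = np.array([1.034, 1.524, 0.549, 1.359, 1.42, 0.776, 1.144, 1.038, 1.292, 0.884])
sensory_p = np.array([0.784, 1.19, 0.549, 0.942, 1.17, 0.693, 0.727, 0.871, 0.958, 0.717])
response = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0])

diff = np.round(sensory_v - sensory_p, 3)
print(diff)  # the third entry is 0.0, and response[2] is indeed 0
```

Note also that the .167 s difference on trial 8 got a "delay" response while the same difference on trial 10 did not; that noisy mapping is what the logistic decision stage has to capture.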

The inferred values for sigma_v and sigma_p are very low (sigma_v: mu = .045, sd = .021; sigma_p: mu = .03, sd = .012). However, when I sample this model I get very low (<200) effective sample sizes for both sigmas. What I’ve tried:

  • increasing target_accept to .9 and setting tune=5000, draws=5000
    → this helps; effective sample sizes rise to around 500, but that is still not great

  • defining alpha_v = pm.Gamma("alpha_v", mu=40, sd=20) and setting sigma_v = alpha_v/100
    → this brought no improvement

  • I thought I might be able to reparameterize and tried this (I don’t really know if it makes sense, though):

    expected_v = pm.Deterministic("expected_v", M + delta_t + eta_v)
    sigma_v = pm.Gamma("sigma_v", mu=.4, sigma=.2)
    z = pm.Normal("z", 0, 1)
    z_v = pm.Deterministic("z_v", expected_v + z * sigma_v)  # Deterministic takes the expression directly, not a mu kwarg
    observed_v = pm.Gamma("observed_v", mu=z_v, sigma=.001, observed=sensory_v)

But this attempt killed my kernel…
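On a side note about the second attempt (sigma_v = alpha_v/100): since the Gamma family is closed under scaling, alpha_v ~ Gamma(mu=40, sd=20) divided by 100 is exactly the same distribution as the original Gamma(mu=.4, sd=.2) prior, which is presumably why it changed nothing. A quick Monte Carlo check in NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# alpha_v ~ Gamma(mu=40, sd=20): shape = (40/20)^2 = 4, scale = 20^2/40 = 10
sigma_rescaled = rng.gamma(shape=4.0, scale=10.0, size=n) / 100

# sigma_v ~ Gamma(mu=.4, sd=.2): shape = 4, scale = .2^2/.4 = 0.1
sigma_direct = rng.gamma(shape=4.0, scale=0.1, size=n)

print(sigma_rescaled.mean(), sigma_rescaled.std())  # both lines print roughly 0.4, 0.2
print(sigma_direct.mean(), sigma_direct.std())
```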
Does anyone know how I could get better sigma_v and sigma_p estimates?
Any help would be greatly appreciated!