Hi Community,

The graph below displays mean reaction time human data as a function of age in a simple two-choice task with two experimental conditions (compatible and incompatible).

I would like to model these data with a Normal likelihood, N(μ, σ), where either

μ = α * exp(-β * Age) + γ [exponential model]

or

μ = α * Age**-β + γ [power model]
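For intuition, the two mean functions can be compared directly with plain NumPy. The parameter values below are made up purely for illustration, not fitted values:

```python
import numpy as np

# Illustrative (made-up) parameters: scale α, rate/exponent β, asymptote γ
α, β, γ = 1.5, 0.8, 0.4
age = np.array([5.0, 10.0, 20.0, 40.0])

mu_exp = α * np.exp(-β * age) + γ   # exponential decay toward the asymptote γ
mu_pow = α * age**-β + γ            # power-law decay toward the asymptote γ

# Both curves decrease with age and level off at γ,
# but the power model decays much more slowly at older ages.
print(mu_exp)
print(mu_pow)
```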

For each model variant (exponential vs. power), I would further like to constrain β to be identical across the compatible and incompatible conditions.

For the power model, I wrote:

```
with pm.Model() as model_pow:
    # Scale parameters per condition; exponent β shared across conditions
    α_c = pm.HalfCauchy('α_c', 10)
    α_i = pm.HalfCauchy('α_i', 10)
    β = pm.Normal('β', 1, 0.5)
    γ_c = pm.Normal('γ_c', Mean_RT_comp_Indiv.mean(), .15)
    γ_i = pm.Normal('γ_i', Mean_RT_incomp_Indiv.mean(), .15)
    σ = pm.HalfNormal('σ', .1)
    μ_c = α_c * Years_indiv**-β + γ_c
    μ_i = α_i * Years_indiv**-β + γ_i
    y_obs_comp = pm.Normal('y_obs_comp', μ_c, σ, observed=Mean_RT_comp_Indiv)
    y_obs_incomp = pm.Normal('y_obs_incomp', μ_i, σ, observed=Mean_RT_incomp_Indiv)

if __name__ == "__main__":
    with model_pow:
        trace_power = pm.sample(4000, chains=4, cores=4, tune=2000, target_accept=.95)
    az.plot_trace(trace_power)
```

For the exponential model, I wrote:

```
with pm.Model() as model_exp:
    α_c = pm.HalfCauchy('α_c', 10)
    α_i = pm.HalfCauchy('α_i', 10)
    β = pm.Normal('β', 1, 0.5)
    γ_c = pm.Normal('γ_c', Mean_RT_comp_Indiv.mean(), .15)
    γ_i = pm.Normal('γ_i', Mean_RT_incomp_Indiv.mean(), .15)
    σ = pm.HalfNormal('σ', .1)
    μ_c = α_c * pm.math.exp(-β * Years_indiv) + γ_c
    μ_i = α_i * pm.math.exp(-β * Years_indiv) + γ_i
    y_obs_comp_exp = pm.Normal('y_obs_comp_exp', μ_c, σ, observed=Mean_RT_comp_Indiv)
    y_obs_incomp_exp = pm.Normal('y_obs_incomp_exp', μ_i, σ, observed=Mean_RT_incomp_Indiv)

if __name__ == "__main__":
    with model_exp:
        trace_exp = pm.sample(4000, chains=4, cores=4, tune=2000, target_accept=.95)
    az.plot_trace(trace_exp)
```

Unfortunately, because I am new on this forum, I can't attach additional files showing the trace plots. I do see sampling issues, and I get warnings such as:

```
There were 13 divergences after tuning. Increase `target_accept` or reparameterize.
There were 3 divergences after tuning. Increase `target_accept` or reparameterize.
The acceptance probability does not match the target. It is 0.879040376552122, but should be close to 0.8. Try to increase the number of tuning steps.
There were 18 divergences after tuning. Increase `target_accept` or reparameterize.
The number of effective samples is smaller than 25% for some parameters.
```

How can I solve or attenuate these sampling issues?
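One thing I wondered about (I am not sure it is the right fix) is whether rescaling the predictor would help: with Age in raw years, exp(-β * Age) spans many orders of magnitude across the sample, which I understand can produce the awkward posterior geometries behind divergences. A minimal sketch of the rescaling I have in mind, with made-up values standing in for `Years_indiv`:

```python
import numpy as np

# Made-up ages standing in for Years_indiv
years_indiv = np.array([6.0, 9.0, 14.0, 25.0, 60.0])

# Divide by the mean so the predictor is O(1) and β acts on a unit scale.
# Dividing (rather than z-scoring) keeps values positive, so Age**-β
# in the power model remains well defined.
age_scaled = years_indiv / years_indiv.mean()

print(age_scaled)
```

(α and γ would then need to be interpreted on the rescaled axis.)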

In addition, I would like to compare the performance of the power vs. the exponential model using WAIC. I thus ran:

```
WAIC = az.compare({'pow': trace_power, 'exp': trace_exp})
```

which returned the following error:

```
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
----> 1 WAIC = az.compare({'pow': trace_power, 'exp': trace_exp})
      2 WAIC

~\Anaconda3\envs\bayes\lib\site-packages\arviz\stats\stats.py in compare(dataset_dict, ic, method, b_samples, alpha, seed, scale)
    153     for name, dataset in dataset_dict.items():
    154         names.append(name)
--> 155         ics = ics.append([ic_func(dataset, pointwise=True, scale=scale)])
    156     ics.index = names
    157     ics.sort_values(by=ic, inplace=True, ascending=ascending)

~\Anaconda3\envs\bayes\lib\site-packages\arviz\stats\stats.py in waic(data, pointwise, scale)
    987         )
    988     if "log_likelihood" not in inference_data.sample_stats:
--> 989         raise TypeError("Data must include log_likelihood in sample_stats")
    990     log_likelihood = inference_data.sample_stats.log_likelihood
    991

TypeError: Data must include log_likelihood in sample_stats
```

It looks like I need to compute the log likelihood, but I don't know how to proceed. Any ideas?

Many thanks for your help