# Model convergence/estimation problem for a simple prospect theory model

Dear pymc experts,

I am using PyMC to fit a simple prospect theory model to risky choice data. Here is my model code:

```python
coords = {'subj': list(df['sub'].astype('str').unique())}

with pm.Model(coords=coords) as cpt_m1:

    ## group means
    alpha_mu = pm.Normal('alpha_m', mu=0, sigma=3)  ## risk aversion parameter
    eta_mu = pm.Normal('eta_m', mu=0, sigma=3)      ## probability distortion parameter
    lam_mu = pm.Normal('lam_m', mu=0, sigma=3)      ## loss aversion parameter
    theta_mu = pm.Normal('theta_m', mu=0, sigma=3)  ## temperature parameter

    ## group sigmas
    alpha_s = pm.HalfNormal('alpha_s', sigma=3)
    eta_s = pm.HalfNormal('eta_s', sigma=3)
    lam_s = pm.HalfNormal('lam_s', sigma=3)
    theta_s = pm.HalfNormal('theta_s', sigma=3)

    ## individual raw parameters
    alpha_raw = pm.Normal('alpha_r', mu=0, sigma=1, dims='subj')
    eta_raw = pm.Normal('eta_r', mu=0, sigma=1, dims='subj')
    lam_raw = pm.Normal('lam_r', mu=0, sigma=1, dims='subj')
    theta_raw = pm.Normal('theta_r', mu=0, sigma=1, dims='subj')

    ## non-centered ("Matt trick") parameterization, exponentiated to keep parameters positive
    alpha = pm.math.exp(alpha_mu + alpha_s * alpha_raw)
    eta = pm.math.exp(eta_mu + eta_s * eta_raw)
    lam = pm.math.exp(lam_mu + lam_s * lam_raw)
    theta = pm.math.exp(theta_mu + theta_s * theta_raw)

    ## Prelec probability weighting function
    ## rep_count maps each trial to its subject index
    sp_gain = pm.math.exp(-((-pm.math.log(p_gain_)) ** eta[rep_count]))
    sp_loss = pm.math.exp(-((-pm.math.log(p_loss_)) ** eta[rep_count]))

    ## utility of taking the risky option
    u_take = sp_gain * (v_gain_ ** alpha[rep_count]) - lam[rep_count] * sp_loss * (v_loss_ ** alpha[rep_count])

    ## link utility to the choice data via a logistic choice rule
    p_take = pm.math.invlogit(u_take * theta[rep_count])
    y = pm.Bernoulli('y', p=p_take, observed=choice_)

    cpt_m1_r = pm.sample(draws=4000, tune=5000, chains=4, cores=4)
```
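In math, the model above computes Prelec-weighted utilities and a logistic choice rule:

$$w(p) = \exp\!\left(-(-\ln p)^{\eta}\right)$$

$$U_{\text{take}} = w(p_{\text{gain}})\, v_{\text{gain}}^{\alpha} \;-\; \lambda\, w(p_{\text{loss}})\, v_{\text{loss}}^{\alpha}$$

$$P(\text{take}) = \operatorname{logit}^{-1}\!\left(\theta\, U_{\text{take}}\right)$$

where $\alpha$, $\eta$, $\lambda$, and $\theta$ are the per-subject parameters.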

In this simple model, v_gain_ is the gain outcome of the risky option and v_loss_ is the loss outcome; p_gain_ and p_loss_ are the probabilities of the gain and loss outcomes, respectively.
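For reference, here is a minimal synthetic-data sketch showing the per-trial arrays the model expects; the sizes and value ranges are my own assumptions, not the real data:

```python
import numpy as np

rng = np.random.default_rng(0)

n_subj = 5     # hypothetical number of subjects
n_trials = 20  # hypothetical trials per subject
n = n_subj * n_trials

# per-trial subject index, used to pick each subject's parameters
rep_count = np.repeat(np.arange(n_subj), n_trials)

v_gain_ = rng.uniform(1, 100, size=n)      # gain outcome of the risky option
v_loss_ = rng.uniform(1, 100, size=n)      # loss magnitude (coded as positive)
p_gain_ = rng.uniform(0.05, 0.95, size=n)  # probability of the gain outcome
p_loss_ = 1 - p_gain_                      # probability of the loss outcome
choice_ = rng.integers(0, 2, size=n)       # 1 = took the risky option

print(rep_count.shape, v_gain_.shape, choice_.shape)
```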
When I estimate the parameters, sampling starts out very fast until around 8000 iterations and then becomes extremely slow. The results are also strange: essentially none of the parameters appear to be sampled from the target distribution.

The data seem fine: if I model the data with only a single intercept, the problem disappears, and the same model in Stan works fine. I am pretty puzzled by this. It would be great if you could help me with it. Thanks in advance!

Best,

---

Do you have a sample of the observed data?