I am trying to cross-validate a model to test its fit and make inferences about the posterior.

The model is based on the idea of how people might think when allocating money between two people with uneven payoffs: they may give each person the same amount, or simply maximize the total payoff.

The payoff proportions are my x variables, and the y variable is the percentage of money given to person one (as opposed to person two).
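To make the idea concrete, the mean the model assumes can be written as a plain NumPy function (the function name is mine, for illustration only): `alpha = 0` is a purely "fair" agent who always splits 50/50, while `alpha = 1` is a pure maximizer whose allocation follows a sigmoid of the payoff proportion.

```python
import numpy as np

def allocation_mean(x, alpha, beta):
    """Expected fraction given to person one.

    alpha = 0: always 0.5 (fair, even split)
    alpha = 1: pure sigmoid of the payoff proportion (maximizer)
    """
    return alpha * 1 / (1 + np.exp(beta * x)) + (1 - alpha) * 0.5

# A fair agent ignores the payoff proportion entirely:
allocation_mean(0.78, alpha=0.0, beta=-0.5)  # 0.5
# A maximizer with a steep slope pushes toward the extremes:
allocation_mean(2.0, alpha=1.0, beta=-3.0)   # close to 1
```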

I have read the tutorials and searched all over for why this model produces so many divergences.

```
import numpy as np
import pymc3 as pm
import theano.tensor as tt

train_x = np.array([ 0.7837751 , 0.19241506, -0.2124012 , -0.82463659, 0.78311322, 0.39562878, -1.13989023, -0.86822927, -0.60025648, 0.3742629 ])
train_y = np.array([5.63498758e-01, 2.88627774e-01, 2.92535202e-01, 3.35866828e-01, 6.67700455e-01, 9.99999856e-01, 1.45195098e-07, 1.34094597e-07, 1.02634229e-07, 9.99999885e-01])
hold_out_x = np.array([-1.47333049, 0.19393238, -0.21990887, 0.74694174, -0.34769498, 0.26646929, 0.72834725, -0.47401384, -1.11955062, 0.46028331])
hold_out_y = np.array([0.21395213, 0.41452217, 0.47663821, 0.79562313, 0.08839646, 0.62057976, 0.765773 , 0.28458744, 0.1346239 , 0.69640536])
shape = len(train_x)

with pm.Model() as model_weighted:
    beta = pm.Normal('sigmoid_beta_param', mu=-0.5, sigma=1, shape=shape)
    slopes = pm.Data("slopes", train_x)
    alpha = pm.Uniform('fair_maximizer_param', lower=0, upper=1, shape=shape)
    # sigmoid keeps the value between zero and one; alpha is the tendency
    # to be fair (0.5 split) versus a maximizer (pure sigmoid)
    mean_param = pm.Deterministic("sigmoid", alpha * 1 / (1 + tt.exp(beta * slopes)) + (1 - alpha) * 0.5)
    # since the output value is between zero and one, use a truncated normal likelihood
    x = pm.TruncatedNormal('obs', mu=mean_param, sigma=0.1, lower=0.0, upper=1.0, observed=train_y)
    trace_new = pm.sample(5000, tune=5000)
```

Result: `[40000/40000 06:33<00:00 Sampling 4 chains, 19,988 divergences]`

Changing the number of chains did not help, nor did changing the values passed to `sample`. The same thing happens when sampling from the posterior.