I’m trying to run an MRP model like the following:

```
import pymc as pm
import pymc.sampling.jax as pmjax

with pm.Model() as linear_model:
    beta_1 = pm.Normal("beta_1", 0.0, 1.0, shape=(1, vote_share.shape[1]))
    beta_2 = pm.Normal("beta_2", 0.0, 1.0, shape=(1, vote_share.shape[1]))
    μ_state = beta_1 * prop_var_1[:, None] + beta_2 * prop_var_2[:, None]
    alpha_state = hierarchical_normal("state",
                                      n_region,
                                      μ=μ_state,
                                      num_parties=vote_share.shape[1])
    β0 = pm.Normal("β0", 0.0, 3.0, shape=(1, vote_share.shape[1]))
    α_age = hierarchical_normal("age", n_age, num_parties=vote_share.shape[1])
    α_gender = hierarchical_normal("gender", n_gender, num_parties=vote_share.shape[1])
    η = (
        β0
        + alpha_state[region_idx]
        + α_age[age_idx]
        + α_gender[gender_idx]
    )
    p = pm.math.softmax(η, axis=1)
    obs = pm.Multinomial("O", n, p,
                         shape=(len(vote_share), vote_share.shape[1]),
                         observed=vote_share)
    trace_linear = pmjax.sample_numpyro_nuts(
        draws=draws,
        tune=tune,
        target_accept=0.95,
        idata_kwargs=dict(log_likelihood=False),
        chains=chains,
    )
```

`vote_share` holds the estimated vote share proportions for three parties (each row sums to 1). Each of the parameters, e.g. `beta_1`, `beta_2`, `α_age`, has shape `(1, 3)`, which should correspond to the per-party adjustment.
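To double-check my shapes, here is a minimal NumPy sketch (with made-up sizes, not my real data) of how I understand the broadcasting in `beta_1 * prop_var_1[:, None]`:

```python
import numpy as np

n_obs, n_parties = 5, 3
beta_1 = np.ones((1, n_parties))          # per-party coefficient, shape (1, 3)
prop_var_1 = np.arange(n_obs, dtype=float)  # one covariate value per observation

# (1, 3) * (5, 1) broadcasts to (5, 3): one adjustment per observation per party
mu_state = beta_1 * prop_var_1[:, None]
print(mu_state.shape)  # (5, 3)
```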

When I run the sampling, the posteriors for `beta_1` and `beta_2` are almost identical, but I'd have expected that if one is positive the other should be negative (since the vote shares are zero-sum). Is there a mistake in the model definition?
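One thing I suspect might be related (my own sanity check, independent of the model code): softmax is invariant to adding the same constant to every party's linear predictor, so per-party coefficients are only identified up to a shift. This toy check illustrates the invariance:

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

eta = np.array([[0.5, -0.2, 1.0]])
shifted = eta + 2.7  # add the same constant to every party

# The predicted shares are unchanged by the shift
print(np.allclose(softmax(eta), softmax(shifted)))  # True
```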