My model is defined like this:
```python
with pm.Model() as model:
    eps = pm.Uniform('eps', lower=2.0, upper=12.0, shape=3)
    simulated_data = simulator_op(eps, true_losstangent, true_thicknesses)
    pm.Normal('obs', mu=simulated_data, sigma=0.5, observed=synthetic_data)
    trace = pm.sample(500000, tune=5000, progressbar=True, return_inferencedata=True)
```
Here eps is the parameter to be estimated, a vector of 3 values. I didn't define my own likelihood function; I just wrapped my simulation function as the black-box Op simulator_op, which takes 3 parameters as input. Only eps is estimated; the other two parameters (true_losstangent and true_thicknesses) are passed in as fixed vectors of their true values. However, I got an r_hat of 2.1 for all three components of eps.
```
Multiprocess sampling (4 chains in 4 jobs)
Metropolis: [eps]

         mean     sd  hdi_3%  hdi_97%  mcse_mean  mcse_sd  ess_bulk  ess_tail  r_hat
eps[0]  8.331  1.327   6.045    9.246      0.663    0.508       5.0      24.0    2.1
eps[1]  5.553  0.842   4.994    7.014      0.421    0.323       5.0      24.0    2.1
eps[2]  8.390  1.222   7.576   10.540      0.611    0.468       5.0      24.0    2.1

true eps:      [8.8  5.2  7.8]
estimated eps: [8.331 5.553 8.39 ]
error:         [5.32954545 6.78846154 7.56410256] %
```
The estimated eps seems good, with acceptable error. Should I trust the inference given such high r_hat values? I also wonder whether the sigma value of the likelihood has an influence on r_hat.
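My understanding is that r_hat compares between-chain and within-chain variance, so it can be large even when the pooled mean looks close to the truth. A minimal numpy sketch of (non-rank-normalized) split-R-hat, assuming chains stored as a (n_chains, n_draws) array; ArviZ's actual implementation additionally rank-normalizes:

```python
import numpy as np

def split_rhat(chains):
    """Simplified split Gelman-Rubin R-hat for an array of shape
    (n_chains, n_draws). ArviZ rank-normalizes first; this sketch does not."""
    n_chains, n_draws = chains.shape
    half = n_draws // 2
    # split each chain in half so within-chain drift also inflates R-hat
    split = chains[:, : 2 * half].reshape(2 * n_chains, half)
    w = split.var(axis=1, ddof=1).mean()        # mean within-chain variance
    b = half * split.mean(axis=1).var(ddof=1)   # between-chain variance
    var_hat = (half - 1) / half * w + b / half  # pooled variance estimate
    return np.sqrt(var_hat / w)

rng = np.random.default_rng(0)
# four chains stuck in different places: R-hat well above 1,
# even though the pooled mean (~8.4) looks plausible
stuck = np.stack([rng.normal(mu, 0.1, 1000) for mu in (7.8, 8.2, 8.6, 9.0)])
# four well-mixed chains sampling the same distribution: R-hat near 1
mixed = rng.normal(8.4, 0.5, size=(4, 1000))
print(split_rhat(stuck))  # well above 1
print(split_rhat(mixed))  # close to 1.0
```

So an r_hat of 2.1 means the four chains are not exploring the same distribution, regardless of how good the point estimates happen to look.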