Dear experts,

I am performing some tests with a fairly basic model, where I define the likelihood of a 1-bin distribution and I use a uniform prior. In addition, I introduce a Gaussian nuisance parameter in the likelihood.

This is implemented as follows:

import pymc3 as pm
import numpy as np

model = pm.Model()

with model:
    truth = pm.Uniform('truth', lower=0., upper=300.)
    gaus = pm.Normal('gaus_syst1', mu=0., sigma=1.0)
    pois = pm.Poisson('poisson', mu=truth*(1 + 0.1*gaus), observed=100)
    trace = pm.sample(10000, tune=1000, nuts_kwargs={'target_accept': 0.95})

print('NP mean = {}'.format(np.mean(trace['gaus_syst1'])))
print('NP rms = {}'.format(np.std(trace['gaus_syst1'])))
print('truth mean = {}'.format(np.mean(trace['truth'])))

I get the following results:

NP mean = -0.10184846101085772

NP rms = 1.0066925641532645

truth mean = 103.01576766052715

My expectation is that the maximum of the posterior probability should be at (truth, gaus) = (100, 0), since this should also correspond to the maximum likelihood.
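As a quick sanity check of this expectation (a sketch, not part of the original script; it uses scipy rather than PyMC3), one can maximize the joint log-posterior numerically. With the flat prior on truth, the joint posterior is proportional to Poisson(100 | truth*(1+0.1*gaus)) * Normal(gaus | 0, 1), and its mode does sit at (truth, gaus) = (100, 0):

```python
# Sanity check (sketch): locate the joint posterior mode numerically.
# With a flat prior on truth, the joint log-posterior is, up to a constant,
# log Poisson(100 | truth*(1+0.1*gaus)) + log Normal(gaus | 0, 1).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson, norm

n_obs = 100

def neg_log_post(params):
    truth, gaus = params
    mu = truth * (1 + 0.1 * gaus)
    if mu <= 0:
        return np.inf  # outside the support of the Poisson mean
    return -(poisson.logpmf(n_obs, mu) + norm.logpdf(gaus, 0.0, 1.0))

res = minimize(neg_log_post, x0=[90.0, 0.5], method='Nelder-Mead')
print(res.x)  # close to [100., 0.]
```

For any fixed gaus = g, the Poisson factor is maximized by choosing truth = 100/(1+0.1*g), and the remaining Normal factor then peaks at g = 0, so the joint mode is indeed at (100, 0). The shifted value -0.1 reported above is a *marginal posterior mean*, which need not coincide with the joint mode.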

So I can't understand why the posterior distribution of the Gaussian nuisance parameter (mean = -0.1) is shifted with respect to its prior (mean = 0).

Am I missing something obvious?