Adding a known amount of random error to a model

Thanks for your response. I think I'm comfortable with the process; I'm just looking for a way to reflect the added uncertainty in the system, and what that means for the range of values prem could take given the observed data.

Say I have 50 data points: 50 p(base) values measured with error, and 50 observed outcomes. I would like to add 50 error terms, drawn from a normal distribution, to my p(base) values, and then draw a fresh set of 50 error terms on every proposal loop that forms the chain.
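To be concrete about what I mean by "a fresh sample on every proposal loop", here is a rough sketch of the from-scratch version in plain NumPy. The data, the prior (flat on prem), and the proposal width are all made up purely to show the loop structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: 50 base probabilities (measured with error) and 50 outcomes.
n = 50
base = rng.uniform(0.2, 0.8, size=n)
true_prem = 0.5

def logit(p):
    return np.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + np.exp(-x))

outcomes = rng.binomial(1, inv_logit(logit(base) + true_prem))

def log_lik(prem, err):
    """Bernoulli log-likelihood given a draw of log-odds error terms."""
    p = inv_logit(logit(base) + prem + err)
    return np.sum(outcomes * np.log(p) + (1 - outcomes) * np.log(1 - p))

# Simple Metropolis loop: a fresh set of 50 error terms is drawn from the
# known N(0, 0.2) distribution on every proposal, so the measurement error
# is averaged over rather than inferred.
prem, chain = 0.0, []
cur = log_lik(prem, rng.normal(0, 0.2, size=n))
for _ in range(2000):
    prop = prem + rng.normal(0, 0.1)       # propose a new prem
    err = rng.normal(0, 0.2, size=n)       # fresh draw of the known error
    new = log_lik(prop, err)
    if np.log(rng.uniform()) < new - cur:  # accept/reject (flat prior on prem)
        prem, cur = prop, new
    chain.append(prem)

print(np.mean(chain[500:]))
```

This is only meant to pin down the behaviour I'm after, not a serious sampler.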

In an MCMC algorithm that I write from scratch I can just put this inside the code section that gets run on every proposal, so I'm wondering how to implement it in pymc3. Is something like the following correct?

base_LO = np.log(base/(1-base)) # convert array of base probabilities into log odds

errLO = pm.Normal('errLO', mu=0, sd=0.2, shape=len(base)) # one error term per data point
pS_LO = base_LO + prem + errLO # add the error term on the log-odds scale
pS = pm.math.sigmoid(pS_LO) # back to a probability to be used in the Bernoulli likelihood
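As a sanity check on the transform itself (outside the model, on plain NumPy arrays with made-up numbers), the log-odds round trip behaves as I expect:

```python
import numpy as np

base = np.array([0.1, 0.5, 0.9])            # made-up base probabilities
base_LO = np.log(base / (1 - base))         # to log odds
err = np.random.default_rng(1).normal(0, 0.2, size=base.shape)
pS_LO = base_LO + 0.3 + err                 # illustrative prem = 0.3, plus error
pS = np.exp(pS_LO) / (1 + np.exp(pS_LO))    # back to probabilities

# With no prem and no error, the inverse transform recovers base exactly.
recovered = np.exp(base_LO) / (1 + np.exp(base_LO))
print(pS, recovered)
```

So the shifted probabilities always stay in (0, 1), which is what I need for the Bernoulli.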

I seem to be getting sensible results; I just want to make sure pymc3 is doing what I think it is doing. An output chain for errLO is given alongside prem, as if I were trying to infer errLO, but of course I'm not: it is a known, fixed distribution no matter what outcomes are observed. So I don't want the algorithm to propose values for errLO on every loop; I just want it to sample from this known distribution. If the above is incorrect, how would I go about implementing this?