Uncertainty Propagation according to GUM

Thanks for your reply.
The reason why I do not use the Skellam distribution is that my full evaluation model is much more complicated:
Y = (X1 / (X2*X3)) * (X4/X5 - X6*X7 - X8/X9)

In the full model mentioned above, only X4 and X8 are from Poisson distributions. All other variables are continuous with distributions such as rectangular, triangular, and normal. To simplify things at the beginning, I started with the model Y = X1-X2.
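
For reference, a rough sketch of how I imagine encoding the full model in PyMC; all priors, bounds, and observed counts below are placeholders rather than my real values:

    import pymc as pm

    with pm.Model() as full_model:
        # Continuous inputs (placeholder rectangular, triangular, and normal priors)
        x1 = pm.Uniform("x1", lower=0.9, upper=1.1)
        x2 = pm.Triangular("x2", lower=0.8, c=1.0, upper=1.2)
        x3 = pm.Normal("x3", mu=1.0, sigma=0.05)
        x5 = pm.Normal("x5", mu=1.0, sigma=0.05)
        x6 = pm.Uniform("x6", lower=0.0, upper=0.1)
        x7 = pm.Normal("x7", mu=1.0, sigma=0.05)
        x9 = pm.Normal("x9", mu=1.0, sigma=0.05)

        # Count-type inputs: broad rectangular priors on the Poisson means
        x4 = pm.Uniform("x4", lower=0, upper=100)
        x8 = pm.Uniform("x8", lower=0, upper=100)

        # Deterministic propagation through the measurement model
        y = pm.Deterministic("y", (x1 / (x2 * x3)) * (x4 / x5 - x6 * x7 - x8 / x9))

        # Poisson likelihoods for the two observed counts (placeholder data)
        pm.Poisson("x4_obs", mu=x4, observed=[2, 1, 3])
        pm.Poisson("x8_obs", mu=x8, observed=[1, 0, 2])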

At first, I wanted to investigate whether the posterior probability density function, obtained by updating a sufficiently broad rectangular prior with a Poisson probability mass function as the likelihood, remains a Poisson distribution.
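
If my reasoning is correct, this can even be checked analytically: with a sufficiently broad flat prior and n observed counts x_i, the posterior of mu is proportional to mu^(sum x_i) * exp(-n*mu), i.e. a Gamma distribution rather than a Poisson one. A quick numerical check of that reasoning (the numbers are just illustrative):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(82345)
    x = rng.poisson(2, 1000)   # simulated counts, as in the example below
    n, s = len(x), x.sum()

    # Flat prior * Poisson likelihood => Gamma(s + 1, rate = n) posterior for mu
    posterior_mu = stats.gamma(a=s + 1, scale=1.0 / n)
    print(posterior_mu.mean(), posterior_mu.std())  # approx. 2.0 and 0.045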

However, as far as I understand, PyMC provides the posterior for the parameter mu of the Poisson pmf. In my example:

    # Likelihood (sampling distribution) of observations
    x1_obs = pm.Poisson("x1_obs", mu=x1, observed=x1_data)

(See the full example at the end of the post.)

In this scenario, the standard deviation (sd) of the posterior turns out to be very small, presumably due to the central limit theorem (?):


Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [x1_prior]
Sampling 4 chains, 0 divergences ━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 / 0:00:18
Sampling 4 chains for 1_000 tune and 1_000 draw iterations (4_000 + 4_000 draws total) took 19 seconds.
           mean     sd  hdi_3%  hdi_97%  ...  mcse_sd  ess_bulk  ess_tail  r_hat
x1_prior  2.086  0.046   1.999     2.17  ...    0.001    1546.0    2893.0    1.0
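
(If the Gamma-posterior reasoning above is right, this sd is actually what one would expect: sqrt(sum(x_i) + 1)/n ≈ sqrt(mu/n) = sqrt(2/1000) ≈ 0.045, which matches the 0.046 reported by PyMC. So the narrowness itself seems consistent, assuming I have not mixed anything up.)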

Consequently, the obtained posterior does not correspond to the distribution I had assumed. Denoting the expectation value of the Poisson pmf by N, I expected the posterior to have a standard deviation of sqrt(N), but the PyMC output is much smaller. So I conclude that the (pseudo-)deterministic part of my PyMC model is wrong:

    # Deterministic propagation
    y = x1

What aspect am I misunderstanding?

Just to clarify my thought process: ideally, I would obtain the desired posterior density directly from the Markov formula (the propagation-of-distributions integral of GUM Supplement 1). But PyMC seems to work on a different principle, doesn't it? (I was hoping that I could formulate my model evaluation as a kind of regression task in PyMC.)
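
By "directly", I mean something like the following brute-force Monte Carlo propagation in the spirit of GUM Supplement 1; the Poisson means here are made up for illustration:

    import numpy as np

    rng = np.random.default_rng(82345)
    M = 100_000  # number of Monte Carlo trials

    # Placeholder input distributions for X1 and X2
    x1 = rng.poisson(2, M)
    x2 = rng.poisson(3, M)

    # Propagate through the measurement model Y = X1 - X2
    y = x1 - x2

    # Empirical summary of the output distribution
    print(y.mean(), y.std())  # approx. -1.0 and sqrt(2 + 3) = 2.24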

As the next step in my PyMC tests, I attempted to implement the model Y = X1 - X2 as described in the initial post.
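
Schematically, that attempt has the following structure (simplified here, with placeholder means and data sizes; the single-input example I actually ran follows below):

    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(82345)
    x1_data = rng.poisson(2, 1000)  # placeholder counts for X1
    x2_data = rng.poisson(3, 1000)  # placeholder counts for X2

    with pm.Model() as difference_model:
        # Broad rectangular priors on the two Poisson means
        mu1 = pm.Uniform("mu1", lower=0, upper=100)
        mu2 = pm.Uniform("mu2", lower=0, upper=100)

        # Deterministic propagation through Y = X1 - X2
        y = pm.Deterministic("y", mu1 - mu2)

        # Poisson likelihoods for the observed counts
        pm.Poisson("x1_obs", mu=mu1, observed=x1_data)
        pm.Poisson("x2_obs", mu=mu2, observed=x2_data)

        idata = pm.sample(1000, chains=4, cores=4)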

####################################################

import arviz as az
import numpy as np
import pymc as pm

# Initialize random number generator
RANDOM_SEED = 82345
rng = np.random.default_rng(RANDOM_SEED)
az.style.use("arviz-darkgrid")

# True parameter value (mean of the Poisson distribution)
x1_true = 2

# Size of dataset
trials = 1000

# Simulated observed counts (use the seeded generator)
x1_data = rng.poisson(x1_true, trials)

with pm.Model() as input_quantity1:
    # Priors for unknown model parameters    
    x1 = pm.Uniform('x1_prior', lower=0, upper=100)
    
    # Deterministic propagation
    y = x1
    
    # Likelihood (sampling distribution) of observations
    x1_obs = pm.Poisson("x1_obs", mu=y, observed=x1_data)
    
    # Posterior sampling    
    x1_sample = pm.sample(1000, chains=4, cores=4)
    
az.plot_trace(x1_sample)
print(az.summary(x1_sample))
az.plot_posterior(x1_sample)
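
PS: What I still plan to try is drawing samples of the observable X1 itself rather than of mu. If I understand the API correctly, posterior predictive sampling on the model above should yield draws whose sd is close to sqrt(N):

    with input_quantity1:
        # Draw simulated observations x1 from the fitted model
        ppc = pm.sample_posterior_predictive(x1_sample, random_seed=RANDOM_SEED)

    print(az.summary(ppc, group="posterior_predictive"))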