# How to force estimated variables into another distribution variable

So, recently I was wondering whether it is possible to probabilistically represent a whole distribution built out of the parameters that a PyMC3 model fits.

For example, using the following code, I can easily estimate the mean, \mu, and the standard deviation, \sigma, of a dataset generated from a normal distribution:

```python
import numpy as np
import pymc3 as pm

# Data generated from a normal distribution, with mean 5 and std 2
normal_data = np.random.normal(5, 2, 100)

with pm.Model() as model:
    # Priors (a bit narrow only for the sampler's convenience)
    mu_sigma = pm.HalfNormal('mu_sigma', sigma=10)
    mu = pm.Normal('mu', mu=np.mean(normal_data), sigma=mu_sigma)

    sigma = pm.Uniform('sigma', lower=0.001, upper=10)

    # Likelihood
    likelihood = pm.Normal('likelihood', mu=mu, sigma=sigma, observed=normal_data)

    # Distribution that I would like to recover
    recovered_normal = pm.Normal('recovered_normal', mu, sigma)  # I am not sure what this accomplishes

    trace = pm.sample(cores=4, draws=2000, tune=2000, target_accept=0.95)
```


Using pm.summary(), I would like to see that recovered_normal has the same values as the two fitted priors that compose it.

However, the estimated \mu and \sigma do not quite align (though they’re close) with recovered_normal's summary values, which is what I was aiming for.

In the end, how could I create a variable object that draws exactly on the variables fitted by a model? The reason I am asking is that it would allow for convolution with fitted distributions, since math operations already work on regular pm.Normal()'s (or any other priors) when wrapped in a pm.Deterministic().

Any advice would be dearly appreciated.

I don't think they will align, as recovered_normal is in effect a marginalized distribution: it is marginalized over the posterior of mu and sigma.
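To see what that marginalization does to the summary, here is a minimal NumPy sketch. The posterior draws below are simulated stand-ins for trace['mu'] and trace['sigma'] (their locations and spreads are assumptions for illustration, not output from the model above):

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws = 20_000

# Hypothetical posterior draws, standing in for trace['mu'] and trace['sigma'];
# in a real run these would come from pm.sample().
mu_draws = rng.normal(5.0, 0.2, n_draws)      # posterior of mu, centred near 5
sigma_draws = rng.normal(2.0, 0.15, n_draws)  # posterior of sigma, centred near 2

# Each draw of recovered_normal is one sample from Normal(mu_i, sigma_i),
# so the collection of draws is marginalized over the posterior of mu and sigma.
recovered_draws = rng.normal(mu_draws, sigma_draws)

# The mean of recovered_normal matches mu, but its spread is roughly the data's
# standard deviation (~2), not the much smaller posterior uncertainty of mu (~0.2).
print(recovered_draws.mean(), mu_draws.std(), recovered_draws.std())
```

That is why pm.summary() shows recovered_normal close to, but not matching, the two parameters: its sd column reflects the marginal spread, not the posterior sd of mu.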

I am not sure I understand your motivation:

I am inclined to say that if you are trying to do convolution with some posterior distribution as the kernel, what you are doing sounds fine to me.

So, to clarify, under PyMC3 it is already possible to do elementary math on the prior variables if they are put into pm.Deterministic(), which seems to “lock in” those priors:

```python
import pymc3 as pm

with pm.Model() as model:
    # Distributions
    a = pm.Normal('a', 2, 2)
    b = pm.Gamma('b', 4, 0.5)

    c = pm.Deterministic('c', a + b)

    trace = pm.sample(cores=4, draws=2000, tune=2000)
```


c is a successful convolution of priors a and b.
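A quick sanity check of that convolution can be done with plain Monte Carlo draws from the same two priors (NumPy only, no model needed). Note that NumPy's gamma sampler takes a shape and a *scale*, while pm.Gamma('b', 4, 0.5) takes alpha and a rate beta, so scale = 1 / beta:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Monte Carlo draws from the two priors above:
# a ~ Normal(mu=2, sigma=2), b ~ Gamma(alpha=4, beta=0.5)
a = rng.normal(2.0, 2.0, n)
b = rng.gamma(4.0, 1.0 / 0.5, n)  # NumPy uses scale = 1 / rate

# Draw-by-draw sum: the analogue of pm.Deterministic('c', a + b)
c = a + b

# For independent a and b: E[c] = 2 + 4/0.5 = 10, Var[c] = 4 + 4/0.5**2 = 20
print(c.mean(), c.var())
```

The sampled prior of c in the trace should have roughly this mean and variance.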

What I am asking is, essentially: if one makes a distribution from a and b called AB (like recovered_normal earlier), and there are also two other priors, c and d, making up another distribution CD, could one also add these two distributions together, AB + CD?

Would appreciate any thoughts on this.

I see. In this case you can just add them together; if your chains are in equilibrium this should be valid. However, you might have issues running inference, as your model is likely unidentifiable.
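Concretely, "adding them together" can be done draw-by-draw across the trace. A sketch with simulated arrays standing in for trace['AB'] and trace['CD'] (the shapes of these stand-in distributions are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n_draws = 10_000

# Stand-ins for trace['AB'] and trace['CD']; in a real model these would be
# the per-draw samples of the two recovered distributions.
ab_draws = rng.normal(5.0, 2.0, n_draws)
cd_draws = rng.normal(-1.0, 1.0, n_draws)

# Elementwise sum: each draw of AB is paired with the draw of CD from the same
# MCMC iteration, which in a real trace preserves any posterior correlation
# between the two (the stand-ins here are independent, so that detail is lost).
sum_draws = ab_draws + cd_draws

print(sum_draws.mean())
```

The resulting array is a set of draws from the AB + CD distribution and can be summarized like any other trace variable.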

Ok, so if that can be done, then my ultimate question is: can that also be done with priors that have been conditioned on data in a model?

This is what I am ultimately after, and I hope you can provide some insight on that, junpenglao.

That’s my concern as well: if your model is unidentifiable because of the convolution, you would likely get pretty bad inference.

Looking over resources on identifiability, I think I understand what you mean: because convolution produces a non-standard distribution, it can lead to problems?