Posterior distribution: always normal for a simple model like the one described?

I have a question about one of the simplest forms of inference model: observed values are drawn from a distribution, and a pymc3 model with a likelihood distribution matching that of the draws is set up with uniform priors on the parameters of the likelihood function. Something like this:

import pymc3 as pm
import scipy.stats as st

mean = 5
sd = 0.1
number_obs = 100

# Synthetic data drawn from the "true" distribution
observed = st.norm.rvs(loc=mean, scale=sd, size=number_obs)

with pm.Model() as m:
    mean_prior = pm.Uniform("mean_p", lower=0, upper=15)
    sd_prior = pm.Uniform("sd_p", lower=0, upper=1)
    # Likelihood matching the generating distribution
    likelihood = pm.Normal("obs", mu=mean_prior, sigma=sd_prior, observed=observed)
    trace = pm.sample()

If one draws 100 random observations from, say, a normal distribution, and then sets up a pymc3 model with a normally distributed likelihood function and uniform priors on mu and sd, will the posteriors on the parameters mu and sd always converge toward being normally distributed?

What I mean is: if the number of observed values is sufficiently high, won't a normally distributed posterior be the result when sampling from any similarly simple model with a “correct” likelihood function? Are normally distributed posteriors on the parameters an indication that you have chosen the correct likelihood function?

As another example, the Weibull distribution has two parameters, alpha and beta. If we draw from a Weibull distribution and then estimate the “unknown” parameters by choosing a Weibull likelihood function, won't our posteriors on alpha and beta always be normally distributed around the true parameter values?


Yes and no (but mostly no). You can work through this by looking at models with conjugate priors, where the posteriors are known exactly. Inspecting this table, for example, shows that many models have beta and gamma posteriors, among many other sorts of distributions (including normal). One could argue that many of these cases converge on something “normal-like” (that is, they look normal) in the limit of infinite (diagnostic) data (e.g., a spike), but that doesn't necessarily mean the posterior is normally distributed.
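To make this concrete, here is a minimal sketch (my own illustrative example, not from the table) of a conjugate case where the exact posterior is not normal: Bernoulli data with a uniform Beta(1, 1) prior on the success probability. With k successes in n trials the posterior is Beta(1 + k, 1 + n - k), which is visibly skewed when k is near 0 or n, even though the likelihood is "correct":

```python
import scipy.stats as st

# 2 successes in 20 trials, uniform Beta(1, 1) prior
n, k = 20, 2
posterior = st.beta(1 + k, 1 + n - k)  # exact conjugate posterior: Beta(3, 19)

# A normal distribution has zero skewness; this posterior is clearly skewed.
skewness = float(posterior.stats(moments="s"))
print(skewness)  # positive, so the exact posterior is not normal
```

As n grows (with k/n fixed) the skewness shrinks toward zero, which is the "normal-like in the limit" behavior described above; but for finite data the exact posterior here is a skewed beta, not a normal.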