I have a question about one of the simplest forms of inference model: observed values are drawn from a distribution, and a pymc3 model whose likelihood distribution matches that of the draws is set up with uniform priors on the parameters of the likelihood. Something like this:
```python
import scipy.stats as st
import pymc3 as pm

number_obs = 100
mean = 5
sd = 0.1
observed = st.norm.rvs(loc=mean, scale=sd, size=number_obs)

with pm.Model() as m:
    mean_prior = pm.Uniform("mean_p", lower=0, upper=15)
    sd_prior = pm.Uniform("sd_p", lower=0, upper=1)
    pm.Normal("L", mu=mean_prior, sd=sd_prior, observed=observed)
```
If one draws 100 random observed numbers from, let's say, a normal distribution, and then sets up a pymc3 model with a normally distributed likelihood function and uniform priors on mu and sd, will the posteriors on the parameters mu and sd always converge towards being normally distributed?
I mean, if the number of observed values is sufficiently high, won't a normally distributed posterior be the outcome when sampling from any similarly simple model with a "correct" likelihood function? Would normally distributed posteriors on the parameters be an indication that you have chosen a correct likelihood function?
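To make that first question concrete, here is a small check I tried without pymc3 at all: compute the posterior over mu on a grid (sd fixed at its true value so the posterior is one-dimensional) and compare it to the normal curve I expect. The grid ranges and seed are just my choices for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sd = 0.1
observed = rng.normal(5.0, sd, size=100)

# Uniform(0, 15) prior on mu, evaluated on a grid; with a flat prior the
# posterior is proportional to the likelihood
mu_grid = np.linspace(0, 15, 20001)
log_lik = stats.norm.logpdf(observed[:, None], loc=mu_grid, scale=sd).sum(axis=0)
post = np.exp(log_lik - log_lik.max())
post /= post.sum() * (mu_grid[1] - mu_grid[0])  # normalize to a density

# The normal shape I expect: N(sample mean, sd / sqrt(n))
approx = stats.norm.pdf(mu_grid, loc=observed.mean(),
                        scale=sd / np.sqrt(len(observed)))
max_abs_err = np.abs(post - approx).max()
print(max_abs_err)
```

In this conjugate-like setting the grid posterior matches the normal curve essentially exactly, which is what prompted my question about whether this always happens for large samples.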
As another example, the Weibull distribution has two parameters, alpha and beta.
If we draw from a Weibull distribution and then estimate the "unknown" parameters using a Weibull likelihood function, won't our posteriors on alpha and beta always be normally distributed around the true parameter values?
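Here is the kind of Weibull check I have in mind, again as a grid sketch with scipy rather than a pymc3 sampler (the true values, grid ranges, and sample size are just my choices): compute the posterior on an (alpha, beta) grid under flat priors, then look at whether the marginal for alpha is roughly symmetric around the true value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha_true, beta_true = 2.0, 1.5                      # shape, scale
data = beta_true * rng.weibull(alpha_true, size=500)  # Weibull draws

# Flat priors on a grid; posterior proportional to the likelihood
alpha = np.linspace(1.0, 3.0, 201)
beta = np.linspace(1.0, 2.0, 201)
A, B = np.meshgrid(alpha, beta, indexing="ij")
log_lik = np.zeros_like(A)
for x in data:
    log_lik += stats.weibull_min.logpdf(x, A, scale=B)
post = np.exp(log_lik - log_lik.max())
post /= post.sum()

# Marginal posterior of alpha; skewness near zero means roughly normal
marg_alpha = post.sum(axis=1)
mean_a = (alpha * marg_alpha).sum()
var_a = ((alpha - mean_a) ** 2 * marg_alpha).sum()
skew_a = ((alpha - mean_a) ** 3 * marg_alpha).sum() / var_a ** 1.5
print(mean_a, skew_a)
```

With 500 observations the marginal I get is centered close to the true alpha and looks nearly symmetric, but I don't know whether that is guaranteed in general or only an asymptotic effect, hence the question.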