Hi,

I am really hoping someone can help me figure out what is probably a simple issue:

I have a data list (called obs) of length x, with a normal prior on the mean (parameters mu and sd) and a half-normal prior on the standard deviation of the data (parameter obs_sd). I believe my data may be normally distributed, hence I use a normal likelihood.

I then specify the following model (before conditioning on the data):

```
mu = 10
sd = 2
obs_sd = 1

with pm.Model() as model:
    prior_m = pm.Normal('p_m', mu=mu, sd=sd)
    prior_sd = pm.HalfNormal('p_sd', sd=obs_sd)
    L = pm.Normal('L', mu=prior_m, sd=prior_sd, observed=obs)
```

Here is my issue:

If I have only one observed value in obs, I get divergences with this model; with two or more observations it seems to sample fine. Also, if I fix the standard deviation of the likelihood to a known value, it is fine even with a single observation.
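To illustrate what I suspect might be related (this is outside of PyMC, just evaluating the normal log-density with scipy.stats.norm; the values 10.0, 9.0 and 11.0 are made up for the illustration): with a single observation sitting at the mean, the log-likelihood keeps growing as the standard deviation shrinks, so nothing in the likelihood penalises ever-smaller scales, whereas with two distinct observations a tiny scale is heavily penalised:

```python
import numpy as np
from scipy.stats import norm

# One observation exactly at the mean vs. two spread around it
obs_one = np.array([10.0])
obs_two = np.array([9.0, 11.0])

scales = [1.0, 0.1, 0.01]
ll_one = [norm.logpdf(obs_one, loc=10.0, scale=s).sum() for s in scales]
ll_two = [norm.logpdf(obs_two, loc=10.0, scale=s).sum() for s in scales]

print(ll_one)  # increases as the scale shrinks
print(ll_two)  # decreases sharply for tiny scales
```

So with one observation the posterior over the scale parameter seems to get squeezed toward zero, which is the kind of geometry I understand can cause divergences.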

So my question is: does the lack of multiple observed values somehow affect the posterior of the likelihood's standard deviation? If so, why/how?