Hi,

Recently, I ran into something very strange. I am working through some toy examples for code verification, and I believed the two models below should return the same results. However, only the second model gives me correct results (the first model gives me posterior distributions centered around 0). The only difference between the two models is whether I pass `shape` as an int or as a 2-D tuple. I would really like to understand why this happens. Could someone please explain what is going on?

```python
import arviz as az
import pymc as pm

with pm.Model() as model1:
    n_model_1 = df_x.shape[1]
    # shape given as an int -> 1-D variables of shape (n_model_1,)
    sigma_ard = pm.Gamma("sigma_ard", alpha=2, beta=0.01, shape=n_model_1)
    beta = pm.Normal("beta", mu=0, sigma=sigma_ard, shape=n_model_1)
    sigma_ard0 = pm.Gamma("sigma_ard0", alpha=2, beta=0.01)
    beta0 = pm.Normal("beta0", mu=0, sigma=sigma_ard0)
    sigma = pm.Gamma("sigma", alpha=2, beta=0.01)
    y = pm.Normal("y", mu=pm.math.dot(df_x, beta) + beta0, sigma=sigma, observed=df_y)
    sample = pm.sample()
    az.plot_posterior(sample, var_names="beta");
```

```python
with pm.Model() as model2:
    n_model_1 = df_x.shape[1]
    # shape given as a tuple -> 2-D column variables of shape (n_model_1, 1)
    sigma_ard = pm.Gamma("sigma_ard", alpha=2, beta=0.01, shape=(n_model_1, 1))
    beta = pm.Normal("beta", mu=0, sigma=sigma_ard, shape=(n_model_1, 1))
    sigma_ard0 = pm.Gamma("sigma_ard0", alpha=2, beta=0.01)
    beta0 = pm.Normal("beta0", mu=0, sigma=sigma_ard0)
    sigma = pm.Gamma("sigma", alpha=2, beta=0.01)
    y = pm.Normal("y", mu=pm.math.dot(df_x, beta) + beta0, sigma=sigma, observed=df_y)
    sample = pm.sample()
    az.plot_posterior(sample, var_names="beta");
```
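My only guess so far involves NumPy-style broadcasting: if `df_y` keeps a two-dimensional `(n_obs, 1)` shape (e.g. a one-column DataFrame), then the 1-D `mu` from model1 might broadcast against it into an `(n_obs, n_obs)` array instead of pairing each prediction with its observation. A minimal check with plain NumPy arrays standing in for the model variables (the names here are just placeholders, not the actual model):

```python
import numpy as np

n_obs = 5
mu_flat = np.zeros(n_obs)        # shape (5,), like dot(df_x, beta) when beta has shape (n,)
mu_col = np.zeros((n_obs, 1))    # shape (5, 1), like when beta has shape (n, 1)
y_col = np.zeros((n_obs, 1))     # stand-in for a one-column observed df_y

# 1-D mu broadcasts against a column vector into a full matrix
print((mu_flat - y_col).shape)   # (5, 5): every mu paired with every observation
# 2-D column mu lines up elementwise, as intended
print((mu_col - y_col).shape)    # (5, 1)
```

If that is what happens inside the likelihood, it would explain why model1's posterior collapses toward 0, but I am not sure whether PyMC resolves shapes this way internally.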