If that is the case, then you need to decide how to model your data. If you think your data is distributed as an asymmetric Laplacian, then you need to decide what values the associated parameters (kappa, mu, b) might take before seeing any data. In the example I presented above, I treated each as uncertain and modeled them as following their own (prior) distribution.
I don’t. But kappa and b must take on strictly positive values, and using a half-normal prior ensures that only positive values are considered. If you believe an alternative is more appropriate, then you should use that.
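To see why a half-normal works here, you can draw from one directly: every sample is non-negative by construction. This sketch uses scipy.stats.halfnorm (an assumption on my part; PyMC's HalfNormal behaves the same way) with the same scale of 10 that I give kappa below:

```python
from scipy.stats import halfnorm

# Draw many samples from a half-normal with scale 10
samples = halfnorm(scale=10).rvs(size=10_000, random_state=42)

# No draw is ever negative, so a parameter with this prior
# (like kappa or b) can never take a negative value
print(samples.min() >= 0)  # True
```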
Below is a quick way to do what (I think) you are asking for. It’s not clear to me why there are 5 columns or how they differ, so here I’ll just use the first one.
import numpy as np
import pandas as pd
import pymc as pm

df_for_prior = pd.DataFrame(
    np.random.uniform(low=1700000, high=1900000, size=(22, 5)),
    index=np.arange(1998, 2020),
)
obs = np.array([1.726567e+06, 1.589836e+06, 1.643981e+06, 1.584314e+06])

with pm.Model() as model1:
    kappa = pm.HalfNormal("kappa", 10)
    mu = pm.Normal("mu", mu=np.mean(obs), sigma=np.std(obs))
    # Based on distr of raw data
    b = pm.HalfNormal("b", 3 * np.std(obs))
    # Likelihood for the older data (note the two likelihoods need distinct names)
    likelihood_prior = pm.AsymmetricLaplace("likelihood_prior",
                                            kappa=kappa,
                                            mu=mu,
                                            b=b,
                                            observed=df_for_prior.iloc[:, 0])
    # Likelihood for the new observations
    likelihood_obs = pm.AsymmetricLaplace("likelihood_obs",
                                          kappa=kappa,
                                          mu=mu,
                                          b=b,
                                          observed=obs)
    idata = pm.sample()
Here I just set the priors, updated based on df_for_prior, then updated further based on obs. If you really do have lots of old data, then I would suggest reading through this notebook.
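If you want a sanity check on the parameterization itself, the asymmetric Laplace density in PyMC's documented (kappa, mu, b) form is easy to write out in NumPy. This is just an illustrative sketch (the function name asym_laplace_pdf is mine), confirming the density normalizes to 1:

```python
import numpy as np

def asym_laplace_pdf(x, kappa, mu, b):
    # PyMC's documented parameterization:
    # f(x) = b / (kappa + 1/kappa) * exp(-(x - mu) * b * s * kappa**s),
    # where s = sign(x - mu)
    s = np.sign(x - mu)
    return b / (kappa + 1 / kappa) * np.exp(-(x - mu) * b * s * kappa**s)

# The density should integrate to ~1 over a wide enough grid
x = np.linspace(-50, 50, 200_001)
pdf = asym_laplace_pdf(x, kappa=1.5, mu=0.0, b=1.0)
area = pdf.sum() * (x[1] - x[0])
print(area)  # should be very close to 1.0
```

Plotting this density against a histogram of your raw data is also a quick way to judge whether the asymmetric Laplace is a reasonable model choice in the first place.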