Foreword: I’m very new to Bayesian statistics and coding.
The values displayed by pm.plot_posterior() when the point estimate is set to "mode" are vastly different from those returned by .find_MAP() for my model.
My model is likely rubbish, but for the time being I want to be able to reference the values displayed by pm.plot_posterior(). How can I obtain these values programmatically?
For some additional context I am trying to produce an L1 regularised Bayesian linear model for comparison with a classical L1 regularised linear model.
My data have been centred and min-max scaled and the model does not contain an intercept.
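In case it helps to see why the model below is an L1 (lasso) prior: it uses the standard scale-mixture representation of the Laplace distribution (as in Park & Casella's Bayesian lasso). Writing `lam_sq` as $\lambda^2$ and `sigma_sq` as $\sigma^2$, the hierarchy

$$
\tau_j^2 \mid \lambda^2 \sim \mathrm{Exp}\!\left(\tfrac{\lambda^2}{2}\right),
\qquad
\beta_j \mid \sigma^2, \tau_j^2 \sim \mathcal{N}\!\left(0,\; \sigma^2 \tau_j^2\right)
$$

marginalises (integrating out $\tau_j^2$) to a Laplace prior on each coefficient,

$$
\beta_j \mid \sigma^2 \sim \mathrm{Laplace}\!\left(0,\; \tfrac{\sigma}{\lambda}\right),
$$

which is the Bayesian analogue of the L1 penalty.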
I've actually gone about extracting the relevant values from L1_blm_trace (see below) and then used sklearn's KernelDensity (kernel="gaussian", which isn't a perfect approximating distribution since the lam_sq values are strictly positive) to find the value of lam_sq that maximises the estimated log density. The value returned by sklearn's KDE is much closer to the pm.plot_posterior() mode than the one returned by .find_MAP(), but still not close enough.
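For concreteness, this is roughly what I did, as a sketch with synthetic gamma draws standing in for the lam_sq samples (the bandwidth and grid size here are arbitrary choices, not values from my actual run):

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def kde_mode(samples, bandwidth=0.05, grid_size=512):
    # Fit a Gaussian KDE to posterior samples and return the grid point
    # with the highest estimated (log) density, i.e. the KDE "mode".
    samples = np.asarray(samples).reshape(-1, 1)
    kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(samples)
    grid = np.linspace(samples.min(), samples.max(), grid_size).reshape(-1, 1)
    log_dens = kde.score_samples(grid)  # log-density at each grid point
    return float(grid[np.argmax(log_dens), 0])

# Synthetic stand-in for the strictly positive lam_sq draws
rng = np.random.default_rng(0)
draws = rng.gamma(shape=2.0, scale=1.0, size=5000)
print(kde_mode(draws))
```

The result depends noticeably on the bandwidth, which may be part of why my number doesn't match the plotted one.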
As I said, my model is probably rubbish, but it would be great to know how pm.plot_posterior() computes its value for the mode.
```python
import pymc3 as pm
import theano.tensor as tt

with pm.Model() as L1_blm:
    # Hyperpriors
    lam_sq = pm.Gamma("lam_sq", alpha=0.001, beta=0.001)
    sigma_sq = pm.InverseGamma("sigma_sq", alpha=0.001, beta=0.001)

    # Laplace (L1) prior on the coefficients via a scale mixture of normals
    tau_sq = pm.Exponential("tau_sq", lam=(0.5 * lam_sq), shape=(X.shape[1], 1))
    betas = pm.Normal("betas", mu=0, tau=(1 / (sigma_sq * tau_sq)), shape=(X.shape[1], 1))

    # Linear predictor (no intercept) and likelihood
    mu = tt.dot(X_train, betas)
    likelihood = pm.Normal("likelihood", mu=mu, tau=(1 / sigma_sq), observed=y_train)

    L1_blm_step = pm.NUTS()
    L1_blm_trace = pm.sample(10750, step=L1_blm_step, chains=2)
```
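For reference, my current understanding (an assumption on my part, not verified against the ArviZ/PyMC internals) is that the plotted mode comes from evaluating a KDE of the marginal samples on a grid and taking the argmax, something like:

```python
import numpy as np
from scipy.stats import gaussian_kde

def grid_mode(samples, grid_size=512):
    # Evaluate a Gaussian KDE on a regular grid spanning the samples and
    # take the argmax -- a sketch of a plot_posterior-style "mode", not
    # the library's exact implementation.
    samples = np.asarray(samples).ravel()
    kde = gaussian_kde(samples)  # bandwidth via Scott's rule by default
    grid = np.linspace(samples.min(), samples.max(), grid_size)
    return float(grid[np.argmax(kde(grid))])

# Synthetic stand-in; with a real trace this would be something like
# grid_mode(L1_blm_trace["lam_sq"])
rng = np.random.default_rng(1)
draws = rng.gamma(shape=2.0, scale=1.0, size=4000)
print(grid_mode(draws))
```

If the plotting function uses a different bandwidth rule or a boundary-corrected KDE for strictly positive variables, that alone could explain the gap between my sklearn estimate and the plotted mode.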