The horseshoe prior is also a continuous form of the spike-and-slab. The only problem is that a continuous prior gives you a continuous posterior, so there's no way to get non-zero probability mass at exactly zero. Oddly, the paper you linked doesn't seem to mention this. If all you care about is predictive performance, shrinking to nearly zero is good enough (after you take the scale of the covariates into account). But if you have a bajillion covariates and want to trim them for run-time speed, you need a post-processing step to decide which coefficients to zero out, something like the sketch below.
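Here's a minimal sketch of what that can look like, assuming PyMC (v4+ API): fit a logistic regression with a horseshoe prior, then keep only the covariates whose 90% posterior interval excludes zero. The simulated data and the interval rule are illustrative; other selection rules exist.

```python
# Minimal sketch: horseshoe-prior logistic regression plus a simple
# post-processing step to trim covariates. Assumes PyMC v4+.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
N, D = 200, 50
X = rng.normal(size=(N, D))
true_beta = np.zeros(D)
true_beta[:3] = [2.0, -1.5, 1.0]  # only 3 covariates actually matter
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ true_beta))))

with pm.Model():
    # Horseshoe: one global scale tau, one local scale lambda_j per covariate
    tau = pm.HalfCauchy("tau", beta=1.0)
    lam = pm.HalfCauchy("lam", beta=1.0, shape=D)
    beta = pm.Normal("beta", mu=0.0, sigma=tau * lam, shape=D)
    pm.Bernoulli("y", logit_p=pm.math.dot(X, beta), observed=y)
    idata = pm.sample(1000, tune=1000, target_accept=0.95)

# Post-processing: drop any covariate whose 90% posterior interval
# straddles zero (one common heuristic, not the only option).
post = idata.posterior["beta"].stack(sample=("chain", "draw"))
lo, hi = post.quantile([0.05, 0.95], dim="sample").values
keep = (lo > 0) | (hi < 0)
print("selected covariates:", np.flatnonzero(keep))
```

The `keep` mask is what you'd use to refit or predict with only the surviving covariates; the 90% level is a tuning knob, not a magic number.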