The reciprocal Bayesian LASSO

Title: The reciprocal Bayesian LASSO
Publication Type: Journal Article
Year of Publication: 2021
Authors: Mallick H, Alhamzawi R, Paul E, Svetnik V
Journal: Stat Med
Volume: 40
Issue: 22
Pagination: 4830-4849
Date Published: 2021 Sep 30
ISSN: 1097-0258
Keywords: Bayes Theorem, Humans, Linear Models
Abstract

A reciprocal LASSO (rLASSO) regularization employs a decreasing penalty function as opposed to conventional penalization approaches that use increasing penalties on the coefficients, leading to stronger parsimony and superior model selection relative to traditional shrinkage methods. Here we consider a fully Bayesian formulation of the rLASSO problem, which is based on the observation that the rLASSO estimate for linear regression parameters can be interpreted as a Bayesian posterior mode estimate when the regression parameters are assigned independent inverse Laplace priors. Bayesian inference from this posterior is possible using an expanded hierarchy motivated by a scale mixture of double Pareto or truncated normal distributions. On simulated and real datasets, we show that the Bayesian formulation outperforms its classical cousin in estimation, prediction, and variable selection across a wide range of scenarios while offering the advantage of posterior inference. Finally, we discuss other variants of this new approach and provide a unified framework for variable selection using flexible reciprocal penalties. All methods described in this article are publicly available as an R package at: https://github.com/himelmallick/BayesRecipe.
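
For orientation, a minimal sketch of the correspondence described in the abstract, assuming the standard form of the reciprocal penalty from the classical rLASSO literature and writing lambda for the tuning parameter (notation here is assumed for illustration; exact normalizing constants and the treatment of the error variance follow the paper):

\[
\hat{\beta}_{\mathrm{rLASSO}}
= \arg\min_{\beta}\; \|y - X\beta\|_2^2
+ \sum_{j=1}^{p} \frac{\lambda}{|\beta_j|}\, \mathbf{1}\{\beta_j \neq 0\}
\]

\[
\pi(\beta_j \mid \lambda) \;\propto\; \exp\!\left(-\frac{\lambda}{|\beta_j|}\right) \mathbf{1}\{\beta_j \neq 0\}
\qquad \text{(kernel of an inverse Laplace prior)}
\]

Under a Gaussian likelihood with independent priors of this form on the coefficients, the negative log posterior is, up to constants and a rescaling of lambda by the error variance, the rLASSO objective above, so its mode coincides with the classical rLASSO estimate; this is the sense in which the abstract frames the Bayesian formulation.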

DOI: 10.1002/sim.9098
Alternate Journal: Stat Med
PubMed ID: 34126655