• Zero as a prior in penalized regression can be improved upon.
• Priors for regularized regression can be sourced from human decision heuristics.
• Heuristics selectively discard information like covariance between cues.
• These priors are simple, robust, interpretable and work well across domains.
• Our approach extends to the analysis of brain imaging data as well.
Abstract
Induction benefits from useful priors. Penalized regression approaches, like ridge regression, shrink weights toward zero, but zero association is usually not a sensible prior. Inspired by the simple and robust decision heuristics humans use, we constructed non-zero priors for penalized regression models that provide robust and interpretable solutions across several tasks. Our approach enables estimates from a constrained model to serve as a prior for a more general model, yielding a principled way to interpolate between models of differing complexity. We successfully applied this approach to a number of decision and classification problems, as well as to the analysis of simulated brain imaging data. Models with robust priors had excellent worst-case performance. Solutions followed from the form of the heuristic that was used to derive the prior. These new algorithms can serve applications in data analysis and machine learning, as well as help in understanding how people transition from novice to expert performance.
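The core idea of shrinking toward a non-zero, heuristic-derived prior rather than toward zero can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a ridge-style penalty λ‖w − w₀‖² with closed-form solution w = (XᵀX + λI)⁻¹(Xᵀy + λw₀), and uses an equal-weights ("tallying") prior as a hypothetical example of a heuristic-derived w₀.

```python
import numpy as np

def ridge_with_prior(X, y, w0, lam=1.0):
    """Ridge regression shrinking weights toward a prior w0 instead of zero:
    minimizes ||y - Xw||^2 + lam * ||w - w0||^2.
    Closed form: w = (X'X + lam*I)^{-1} (X'y + lam*w0)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w0)

# Toy data: 3 cues with positive true weights (illustrative, not from the paper)
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
w_true = np.array([1.0, 0.8, 0.5])
y = X @ w_true + 0.1 * rng.standard_normal(50)

# Hypothetical heuristic prior: equal unit weights ("tallying")
w0 = np.ones(3)

w_zero = ridge_with_prior(X, y, np.zeros(3), lam=100.0)  # standard ridge (zero prior)
w_heur = ridge_with_prior(X, y, w0, lam=100.0)           # heuristic-derived prior
```

Under strong regularization the zero-prior solution collapses all weights toward zero, while the heuristic-prior solution stays near the tallying weights; varying λ interpolates between the constrained heuristic model and the unconstrained least-squares fit.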