Sparse group lasso and high dimensional multinomial classification

Martin Vincent, N.R. Hansen


Abstract

The sparse group lasso optimization problem is solved using a coordinate gradient descent algorithm. The algorithm is applicable to a broad class of convex loss functions. Convergence of the algorithm is established, and the algorithm is used to investigate the performance of the multinomial sparse group lasso classifier. On three different real data examples the multinomial sparse group lasso clearly outperforms the multinomial lasso, both in terms of the achieved classification error rate and in terms of the number of features included in the classifier. An implementation of the multinomial sparse group lasso algorithm is available in the R package msgl. Its performance scales well with the problem size, as illustrated by one of the examples considered: a 50 class classification problem with 10,000 features, which amounts to estimating 500,000 parameters.
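
For reference, a minimal sketch of the penalized problem the abstract refers to, written in the standard sparse group lasso parameterization with a general convex loss f (for the classifier, the negative multinomial log-likelihood); the weight symbols gamma_J and xi_i and the exact convex-combination form are assumed notation, not quoted from the paper:

% Sparse group lasso estimate with convex loss f; alpha in [0,1] trades off
% parameter-wise (lasso) and group-wise sparsity, gamma_J and xi_i are
% non-negative group and parameter weights (assumed notation).
\[
  \hat{\beta}
  = \arg\min_{\beta}\;
    f(\beta)
    + \lambda \Bigl(
        (1-\alpha) \sum_{J=1}^{m} \gamma_J \,\bigl\| \beta^{(J)} \bigr\|_2
        + \alpha \sum_{i=1}^{n} \xi_i \,\lvert \beta_i \rvert
      \Bigr)
\]

In the multinomial setting each group \(\beta^{(J)}\) would typically collect the coefficients of one feature across all classes, so a 50 class problem with 10,000 features indeed involves 500,000 parameters, matching the figure quoted above.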
Original language: English
Journal: Computational Statistics & Data Analysis
Volume: 71
Pages (from-to): 771-786
Number of pages: 16
ISSN: 0167-9473
DOIs
Publication status: Published - 1 Mar 2014
