Abstract
Feature selection is an important topic in machine learning, especially in high-dimensional applications such as cancer prediction with microarray data. This work addresses the high dimensionality of feature selection for linear and kernel-based Support Vector Machines (SVMs) formulated as second-order cone programs. These formulations provide a robust and efficient framework for classification, while an adequate feature selection process avoids errors in the estimation of means and covariances. Our approach is a sequential backward elimination that uses different linear and kernel-based contribution measures to determine feature relevance. Experimental results on microarray datasets demonstrate its effectiveness in terms of predictive performance and the construction of a low-dimensional data representation.
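The sequential backward elimination described above can be sketched as follows. This is a minimal illustration, not the paper's method: it uses a standard linear SVM (rather than a second-order cone programming formulation) and the absolute weight |w_j| as a hypothetical linear contribution measure; the function name and parameters are assumptions for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

def backward_elimination(X, y, n_keep):
    """Iteratively drop the feature with the smallest contribution.

    Contribution measure (illustrative only): |w_j| from a linear SVM
    refit on the currently active features at each step.
    """
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        clf = LinearSVC(dual=False).fit(X[:, active], y)
        contrib = np.abs(clf.coef_).sum(axis=0)  # relevance of each active feature
        active.pop(int(np.argmin(contrib)))      # eliminate the least relevant one
    return active

# Synthetic high-dimensional binary classification problem
X, y = make_classification(n_samples=200, n_features=50,
                           n_informative=5, random_state=0)
selected = backward_elimination(X, y, n_keep=5)
print(selected)
```

In a kernel-based setting, the contribution measure would instead quantify each feature's effect on the kernel matrix or the margin, since no explicit weight vector is available.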
