Abstract
Twin support vector machines are a recently proposed learning method for binary classification. They learn two nonparallel hyperplanes, rather than the single hyperplane of conventional support vector machines, and often improve performance. Multi-view learning is concerned with learning from multiple distinct feature sets, and aims to exploit these distinct views to improve generalization performance. In this paper, we propose multi-view twin support vector machines, which are trained by solving a pair of quadratic programming problems. The paper gives a detailed derivation of the Lagrange dual optimization formulation. The linear multi-view twin support vector machines are further generalized to the nonlinear case via the kernel trick. Experimental results demonstrate that the proposed methods are effective.
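The abstract does not spell out the twin-SVM optimization, so as an illustrative sketch only (not the paper's multi-view method), the following implements a least-squares variant of the classical twin SVM, which replaces the pair of quadratic programs with closed-form solutions for the two nonparallel hyperplanes. The helper names `lstsvm_fit` and `lstsvm_predict` are hypothetical:

```python
import numpy as np

def lstsvm_fit(A, B, c1=1.0, c2=1.0):
    """Least-squares twin-SVM sketch: fit two nonparallel hyperplanes,
    one close to class A (+1), one close to class B (-1).
    A, B are (n_samples, n_features) arrays of each class's points."""
    H = np.hstack([A, np.ones((A.shape[0], 1))])  # [A  e], augmented with bias column
    G = np.hstack([B, np.ones((B.shape[0], 1))])  # [B  e]
    eA = np.ones(A.shape[0])
    eB = np.ones(B.shape[0])
    # Plane 1 lies near A and at distance >= 1 from B (least-squares relaxation):
    #   z1 = -(G^T G + (1/c1) H^T H)^{-1} G^T e
    z1 = -np.linalg.solve(G.T @ G + (1.0 / c1) * H.T @ H, G.T @ eB)
    # Plane 2 lies near B and at distance >= 1 from A:
    #   z2 =  (H^T H + (1/c2) G^T G)^{-1} H^T e
    z2 = np.linalg.solve(H.T @ H + (1.0 / c2) * G.T @ G, H.T @ eA)
    return z1, z2  # each stacks (w, b)

def lstsvm_predict(X, z1, z2):
    """Assign each point to the class whose hyperplane lies closer."""
    Xe = np.hstack([X, np.ones((X.shape[0], 1))])
    d1 = np.abs(Xe @ z1) / np.linalg.norm(z1[:-1])
    d2 = np.abs(Xe @ z2) / np.linalg.norm(z2[:-1])
    return np.where(d1 <= d2, 1, -1)
```

The closed form follows from substituting the equality-constrained slacks into the objective and setting the gradient to zero; the full twin SVM (and its multi-view extension) instead solves two dual quadratic programs with inequality constraints.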