TY - JOUR
T1 - Constructing boosting algorithms from SVMs
T2 - An application to one-class classification
AU - Rätsch, Gunnar
AU - Mika, Sebastian
AU - Schölkopf, Bernhard
AU - Müller, Klaus-Robert
N1 - Funding Information:
The authors would like to thank Manfred Warmuth, Alex Smola, Bob Williamson, and Ayhan Demiriz for valuable discussions. They would also like to thank the anonymous referees for thorough reviews, valuable comments, and suggestions that significantly improved this work. This work was partially funded by the DFG under contracts JA 379/9-1, JA 379/7-1, and MU 987/1-1, and by the EU through the NeuroColt2 project. Furthermore, G. Rätsch would like to thank CRIEPI, ANU, and the University of California Santa Cruz for their warm hospitality.
PY - 2002/9
Y1 - 2002/9
N2 - We show via an equivalence of mathematical programs that a support vector (SV) algorithm can be translated into an equivalent boosting-like algorithm and vice versa. We exemplify this translation procedure for a new algorithm, one-class leveraging, starting from the one-class support vector machine (1-SVM). This is a first step toward unsupervised learning in a boosting framework. Building on so-called barrier methods known from the theory of constrained optimization, it returns a function, written as a convex combination of base hypotheses, that characterizes whether a given test point is likely to have been generated from the distribution underlying the training data. Simulations on one-class classification problems demonstrate the usefulness of our approach.
AB - We show via an equivalence of mathematical programs that a support vector (SV) algorithm can be translated into an equivalent boosting-like algorithm and vice versa. We exemplify this translation procedure for a new algorithm, one-class leveraging, starting from the one-class support vector machine (1-SVM). This is a first step toward unsupervised learning in a boosting framework. Building on so-called barrier methods known from the theory of constrained optimization, it returns a function, written as a convex combination of base hypotheses, that characterizes whether a given test point is likely to have been generated from the distribution underlying the training data. Simulations on one-class classification problems demonstrate the usefulness of our approach.
KW - Boosting
KW - Novelty detection
KW - One-class classification
KW - SVMs
KW - Unsupervised learning
UR - http://www.scopus.com/inward/record.url?scp=0036709275&partnerID=8YFLogxK
U2 - 10.1109/TPAMI.2002.1033211
DO - 10.1109/TPAMI.2002.1033211
M3 - Article
AN - SCOPUS:0036709275
VL - 24
SP - 1184
EP - 1199
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
SN - 0162-8828
IS - 9
ER -