Robust ensemble learning for data mining

Gunnar Rätsch, Bernhard Schölkopf, Alexander Johannes Smola, Sebastian Mika, Takashi Onoda, Klaus Muller

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

9 Citations (Scopus)

Abstract

We propose a new boosting algorithm which, similarly to ν-Support-Vector Classification, allows a pre-specified fraction ν of points to lie within the margin area or even on the wrong side of the decision boundary. This gives a readily interpretable way of controlling the trade-off between minimizing the training error and capacity. Furthermore, the algorithm can act as a filter for finding and selecting informative patterns from a database.
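
The paper's boosting algorithm is not reproduced here, but the role of the ν parameter can be illustrated with the closely related ν-Support-Vector Classification, available as NuSVC in scikit-learn. The sketch below (synthetic data and parameter values are arbitrary assumptions, not taken from the paper) shows how ν lower-bounds the fraction of support vectors and upper-bounds the fraction of margin errors on the training set:

# Illustrative sketch only: demonstrates the nu parameter of the related
# nu-SVC formulation, not the boosting algorithm proposed in the paper.
from sklearn.datasets import make_classification
from sklearn.svm import NuSVC

# Synthetic two-class data (an arbitrary choice for illustration).
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

for nu in (0.1, 0.3, 0.5):
    clf = NuSVC(nu=nu, kernel="rbf", gamma="scale").fit(X, y)
    # nu lower-bounds the fraction of support vectors and upper-bounds
    # the fraction of margin errors on the training set.
    sv_fraction = len(clf.support_) / len(X)
    print(f"nu={nu:.1f}  support-vector fraction={sv_fraction:.2f}")

The proposed boosting algorithm carries this parameterisation over to ensemble learning, so that ν plays the same role for the fraction of training patterns allowed to lie in the margin area or to be misclassified.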

Original language: English
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publisher: Springer Verlag
Pages: 341-344
Number of pages: 4
Volume: 1805
ISBN (Print): 3540673822, 9783540673828
Publication status: Published - 2000
Externally published: Yes
Event: 4th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2000 - Kyoto, Japan
Duration: 2000 Apr 18 - 2000 Apr 20

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 1805
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 4th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2000
Country: Japan
City: Kyoto
Period: 00/4/18 - 00/4/20

Fingerprint

  • Ensemble Learning
  • Support Vector
  • Boosting
  • Margin
  • Data Mining
  • Trade-offs
  • Filter
  • Training

ASJC Scopus subject areas

  • Computer Science (all)
  • Theoretical Computer Science

Cite this

Rätsch, G., Schölkopf, B., Smola, A. J., Mika, S., Onoda, T., & Muller, K. (2000). Robust ensemble learning for data mining. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1805, pp. 341-344). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 1805). Springer Verlag.
