Feature extraction for one-class classification

David M J Tax, Klaus Muller

Research output: Contribution to journal › Article

21 Citations (Scopus)

Abstract

Feature reduction is often an essential part of solving a classification task. One common approach is Principal Component Analysis (PCA), in which the low-variance directions in the data are removed and the high-variance directions are retained, in the hope that the high-variance directions contain information about the class differences. For one-class classification or novelty detection, the classification task contains one ill-determined class for which (almost) no information is available. In this paper we show that for one-class classification the low-variance directions are the most informative, and that a bias-variance trade-off has to be considered in the feature reduction, so that retaining the high-variance directions is often not optimal.
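The abstract's central claim can be illustrated with a small numeric sketch (this is an illustrative example, not the paper's exact method): for data from a single target class, a test point that deviates along a low-variance principal direction is novel, while a point far out along the high-variance direction still lies on the data manifold. All variable names below are hypothetical.

```python
# Hedged sketch of the idea in the abstract: score novelty by the deviation
# along the LOW-variance principal direction, rather than discarding it.
import numpy as np

rng = np.random.default_rng(0)

# One "target" class, strongly elongated along the x-axis:
# high variance in x, low variance in y.
X = rng.normal(size=(1000, 2)) * np.array([10.0, 0.5])

# PCA via eigendecomposition of the sample covariance matrix.
mean = X.mean(axis=0)
cov = np.cov(X - mean, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order

# Keep the LOW-variance direction (smallest eigenvalue) as the score axis.
low_var_dir = eigvecs[:, 0]

def novelty_score(x):
    """Absolute deviation from the training mean along the low-variance axis."""
    return abs((x - mean) @ low_var_dir)

# A point far out along the HIGH-variance axis still fits the target class:
on_manifold = novelty_score(np.array([25.0, 0.0]))
# A point only slightly off along the LOW-variance axis is novel:
off_manifold = novelty_score(np.array([0.0, 5.0]))

print(on_manifold < off_manifold)  # True: the low-variance axis flags the novelty
```

A standard PCA reduction would discard `low_var_dir` as uninformative; the point of the paper is that for one-class problems this is exactly the direction that separates the target class from novelties.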

Original language: English
Pages (from-to): 342-349
Number of pages: 8
Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 2714
Publication status: Published - 2003 Dec 1
Externally published: Yes

ASJC Scopus subject areas

  • Computer Science (all)
  • Biochemistry, Genetics and Molecular Biology (all)
  • Theoretical Computer Science

Cite this

Feature extraction for one-class classification. / Tax, David M J; Muller, Klaus.

In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 2714, 01.12.2003, p. 342-349.
@article{cd91b32e3a8440998d6839d5deb43a12,
title = "Feature extraction for one-class classification",
abstract = "Feature reduction is often an essential part of solving a classification task. One common approach is Principal Component Analysis (PCA), in which the low-variance directions in the data are removed and the high-variance directions are retained, in the hope that the high-variance directions contain information about the class differences. For one-class classification or novelty detection, the classification task contains one ill-determined class for which (almost) no information is available. In this paper we show that for one-class classification the low-variance directions are the most informative, and that a bias-variance trade-off has to be considered in the feature reduction, so that retaining the high-variance directions is often not optimal.",
author = "Tax, {David M J} and Klaus Muller",
year = "2003",
month = dec,
day = "1",
language = "English",
volume = "2714",
pages = "342--349",
journal = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
issn = "0302-9743",
publisher = "Springer-Verlag",

}

TY - JOUR

T1 - Feature extraction for one-class classification

AU - Tax, David M J

AU - Muller, Klaus

PY - 2003/12/1

Y1 - 2003/12/1

N2 - Feature reduction is often an essential part of solving a classification task. One common approach is Principal Component Analysis (PCA), in which the low-variance directions in the data are removed and the high-variance directions are retained, in the hope that the high-variance directions contain information about the class differences. For one-class classification or novelty detection, the classification task contains one ill-determined class for which (almost) no information is available. In this paper we show that for one-class classification the low-variance directions are the most informative, and that a bias-variance trade-off has to be considered in the feature reduction, so that retaining the high-variance directions is often not optimal.

AB - Feature reduction is often an essential part of solving a classification task. One common approach is Principal Component Analysis (PCA), in which the low-variance directions in the data are removed and the high-variance directions are retained, in the hope that the high-variance directions contain information about the class differences. For one-class classification or novelty detection, the classification task contains one ill-determined class for which (almost) no information is available. In this paper we show that for one-class classification the low-variance directions are the most informative, and that a bias-variance trade-off has to be considered in the feature reduction, so that retaining the high-variance directions is often not optimal.

UR - http://www.scopus.com/inward/record.url?scp=35248863825&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=35248863825&partnerID=8YFLogxK

M3 - Article

AN - SCOPUS:35248863825

VL - 2714

SP - 342

EP - 349

JO - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

JF - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SN - 0302-9743

ER -