### Abstract

Pairwise proximity data, given as a similarity or dissimilarity matrix, can violate metricity. This occurs due to noise, fallible estimates, or intrinsic non-metric features such as those that arise from human judgments. So far, the problem of non-metric pairwise data has been tackled essentially by omitting the negative eigenvalues, or by shifting the spectrum of the associated (pseudo-)covariance matrix before a subsequent embedding. However, little attention has been paid to the negative part of the spectrum itself. In particular, no answer has been given to the question of whether the directions associated with the negative eigenvalues code any variance beyond noise. We show by a simple, exploratory analysis that the negative eigenvalues can code for relevant structure in the data, leading to the discovery of new features that are lost by conventional data analysis techniques. The information hidden in the negative part of the eigenspectrum is illustrated and discussed for three data sets: USPS handwritten digits, text mining, and data from cognitive psychology.
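The spectral construction underlying the abstract can be sketched as follows: double-center the squared dissimilarities to obtain the (pseudo-)covariance (Gram) matrix of classical MDS, eigendecompose it, and inspect the negative eigenvalues instead of discarding them. A minimal sketch, not the authors' implementation — the function name and the toy dissimilarity matrix are illustrative only:

```python
import numpy as np

def negative_spectrum_embedding(D, k=2):
    """Embed pairwise dissimilarities along the directions with the most
    negative eigenvalues of the doubly centered (pseudo-)covariance matrix.

    D : (n, n) symmetric dissimilarity matrix (may violate metricity).
    k : number of embedding dimensions taken from the negative spectrum.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    G = -0.5 * J @ (D ** 2) @ J               # classical MDS Gram matrix
    evals, evecs = np.linalg.eigh(G)          # eigenvalues in ascending order
    # the most negative eigenvalues sit at the front after eigh
    neg = evals[:k]
    X = evecs[:, :k] * np.sqrt(np.abs(neg))   # scale axes by |eigenvalue|
    return X, evals

# toy example: a deliberately non-metric dissimilarity matrix,
# d(0,3) = 5 > d(0,1) + d(1,3) = 2 violates the triangle inequality
D = np.array([[0.0, 1.0, 1.0, 5.0],
              [1.0, 0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0, 1.0],
              [5.0, 1.0, 1.0, 0.0]])
X, evals = negative_spectrum_embedding(D)
print(evals)  # at least one eigenvalue is negative
```

A metric, Euclidean-embeddable dissimilarity yields a positive semidefinite Gram matrix; here the triangle violation forces a negative diagonal entry in G, hence a negative eigenvalue, and the associated direction is exactly what the paper proposes to examine rather than clip away.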

Original language | English
---|---
Pages (from-to) | 801-818
Number of pages | 18
Journal | Journal of Machine Learning Research
Volume | 5
Publication status | Published - 2004 Jul 1
Externally published | Yes

### Keywords

- Embedding
- Exploratory data analysis
- Feature discovery
- Non-metric
- Pairwise data
- Unsupervised learning

### ASJC Scopus subject areas

- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability

### Cite this

Laub, J., & Muller, K. (2004). Feature discovery in non-metric pairwise data. *Journal of Machine Learning Research*, *5*, 801-818.

Research output: Contribution to journal › Article

```
TY - JOUR
T1 - Feature discovery in non-metric pairwise data
AU - Laub, Julian
AU - Muller, Klaus
PY - 2004/7/1
Y1 - 2004/7/1
N2 - Pairwise proximity data, given as similarity or dissimilarity matrix, can violate metricity. This occurs either due to noise, fallible estimates, or due to intrinsic non-metric features such as they arise from human judgments. So far the problem of non-metric pairwise data has been tackled by essentially omitting the negative eigenvalues or shifting the spectrum of the associated (pseudo-)covariance matrix for a subsequent embedding. However, little attention has been paid to the negative part of the spectrum itself. In particular no answer was given to whether the directions associated to the negative eigenvalues would at all code variance other than noise related. We show by a simple, exploratory analysis that the negative eigenvalues can code for relevant structure in the data, thus leading to the discovery of new features, which were lost by conventional data analysis techniques. The information hidden in the negative eigenvalue part of the spectrum is illustrated and discussed for three data sets, namely USPS handwritten digits, text-mining and data from cognitive psychology.
AB - Pairwise proximity data, given as similarity or dissimilarity matrix, can violate metricity. This occurs either due to noise, fallible estimates, or due to intrinsic non-metric features such as they arise from human judgments. So far the problem of non-metric pairwise data has been tackled by essentially omitting the negative eigenvalues or shifting the spectrum of the associated (pseudo-)covariance matrix for a subsequent embedding. However, little attention has been paid to the negative part of the spectrum itself. In particular no answer was given to whether the directions associated to the negative eigenvalues would at all code variance other than noise related. We show by a simple, exploratory analysis that the negative eigenvalues can code for relevant structure in the data, thus leading to the discovery of new features, which were lost by conventional data analysis techniques. The information hidden in the negative eigenvalue part of the spectrum is illustrated and discussed for three data sets, namely USPS handwritten digits, text-mining and data from cognitive psychology.
KW - Embedding
KW - Exploratory data analysis
KW - Feature discovery
KW - Non-metric
KW - Pairwise data
KW - Unsupervised learning
UR - http://www.scopus.com/inward/record.url?scp=33745428531&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=33745428531&partnerID=8YFLogxK
M3 - Article
AN - SCOPUS:33745428531
VL - 5
SP - 801
EP - 818
JO - Journal of Machine Learning Research
JF - Journal of Machine Learning Research
SN - 1532-4435
ER -
```