Label-aligned multi-task feature learning for multimodal classification of Alzheimer’s disease and mild cognitive impairment

Chen Zu, Biao Jie, Mingxia Liu, Songcan Chen, Dinggang Shen, Daoqiang Zhang, the Alzheimer's Disease Neuroimaging Initiative

Research output: Contribution to journal › Article

35 Citations (Scopus)

Abstract

Multimodal classification methods using different modalities of imaging and non-imaging data have recently shown great advantages over traditional single-modality-based ones for diagnosis and prognosis of Alzheimer's disease (AD), as well as its prodromal stage, i.e., mild cognitive impairment (MCI). However, to the best of our knowledge, most existing methods focus on mining the relationship across multiple modalities of the same subjects, while ignoring the potentially useful relationship across different subjects. Accordingly, in this paper, we propose a novel learning method for multimodal classification of AD/MCI, by fully exploring the relationships across both modalities and subjects. Specifically, our proposed method includes two sequential components, i.e., label-aligned multi-task feature selection and multimodal classification. In the first step, feature selection from each modality is treated as a separate learning task, and a group sparsity regularizer is imposed to jointly select a subset of relevant features across tasks. Furthermore, to utilize the discriminative information among labeled subjects, a new label-aligned regularization term is added to the objective function of standard multi-task feature selection, where label alignment means that all multi-modality subjects with the same class label should be closer in the new feature-reduced space. In the second step, a multi-kernel support vector machine (SVM) is adopted to fuse the selected features from the multi-modality data for final classification. To validate our method, we perform experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database using baseline MRI and FDG-PET imaging data. The experimental results demonstrate that our proposed method achieves better classification performance compared with several state-of-the-art methods for multimodal classification of AD/MCI.
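The two-step pipeline described in the abstract (group-sparse multi-task feature selection followed by multi-kernel SVM fusion) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the label-aligned regularization term is omitted for simplicity, the data (`X_mri`, `X_pet`), the regularization weight `lam`, and the kernel-mixing weight `beta` are all made-up placeholders, and proximal gradient descent is used as one common solver for the l2,1-regularized objective.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for ADNI data: two modalities (e.g., MRI, FDG-PET)
# for the same subjects, binary labels (+1 / -1). Only the first 10 of
# 30 features carry class signal, so joint selection is meaningful.
n, d = 60, 30
y = np.repeat([1.0, -1.0], n // 2)
X_mri = rng.standard_normal((n, d))
X_pet = rng.standard_normal((n, d))
X_mri[:, :10] += 0.5 * y[:, None]
X_pet[:, :10] += 0.5 * y[:, None]

def l21_multitask_select(Xs, y, lam=0.5, lr=1e-3, iters=500):
    """Multi-task feature selection with a group (l2,1) sparsity
    regularizer: each modality is one task, and rows of W are grouped
    across tasks so features are selected jointly for all modalities.
    (The paper's additional label-aligned regularizer is omitted here.)"""
    d = Xs[0].shape[1]
    W = np.zeros((d, len(Xs)))
    for _ in range(iters):
        # Gradient of the least-squares data-fit term, task by task
        G = np.column_stack([X.T @ (X @ W[:, t] - y) / len(y)
                             for t, X in enumerate(Xs)])
        W -= lr * G
        # Proximal step for the l2,1 norm: row-wise soft-thresholding,
        # which zeroes entire feature rows across both tasks at once
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        W *= np.maximum(0.0, 1.0 - lr * lam / np.maximum(norms, 1e-12))
    return W

W = l21_multitask_select([X_mri, X_pet], y)
selected = np.linalg.norm(W, axis=1) > 1e-6   # jointly selected features

# Step 2 -- multi-kernel SVM: one linear kernel per modality computed on
# the selected features, fused as a convex combination. The mixing
# weight beta is tuned in practice; it is fixed here for illustration.
beta = 0.5
K = beta * (X_mri[:, selected] @ X_mri[:, selected].T) \
    + (1 - beta) * (X_pet[:, selected] @ X_pet[:, selected].T)
clf = SVC(kernel="precomputed").fit(K, y)
acc = clf.score(K, y)   # training accuracy on the synthetic data
```

On this toy data the l2,1 penalty tends to retain only the informative feature rows, since uninformative rows have small joint gradients and are repeatedly thresholded to zero.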

Original language: English
Pages (from-to): 1148-1159
Number of pages: 12
Journal: Brain Imaging and Behavior
Volume: 10
Issue number: 4
DOIs: https://doi.org/10.1007/s11682-015-9480-7
Publication status: Published - 2016 Dec 1

Keywords

  • Alzheimer’s disease
  • Feature selection
  • Label alignment
  • Mild cognitive impairment
  • Multi-task learning
  • Multimodal classification

ASJC Scopus subject areas

  • Radiology Nuclear Medicine and imaging
  • Neurology
  • Cognitive Neuroscience
  • Clinical Neurology
  • Cellular and Molecular Neuroscience
  • Psychiatry and Mental health
  • Behavioral Neuroscience


  • Cite this

    Zu, C., Jie, B., Liu, M., Chen, S., Shen, D., Zhang, D., & the Alzheimer's Disease Neuroimaging Initiative (2016). Label-aligned multi-task feature learning for multimodal classification of Alzheimer's disease and mild cognitive impairment. Brain Imaging and Behavior, 10(4), 1148-1159. https://doi.org/10.1007/s11682-015-9480-7