Multi-modal multi-task learning for joint prediction of multiple regression and classification variables in Alzheimer's disease

Daoqiang Zhang, Dinggang Shen

Research output: Contribution to journal › Article › peer-review

376 Citations (Scopus)

Abstract

Many machine learning and pattern classification methods have been applied to the diagnosis of Alzheimer's disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). Recently, rather than predicting categorical variables as in classification, several pattern regression methods have also been used to estimate continuous clinical variables from brain images. However, most existing regression methods estimate multiple clinical variables separately and thus cannot exploit the useful intrinsic correlations among different clinical variables. Moreover, these regression methods typically use only a single modality of data (usually structural MRI alone), ignoring the complementary information that different modalities can provide. In this paper, we propose a general methodology, namely multi-modal multi-task (M3T) learning, to jointly predict multiple variables from multi-modal data. Here, the variables include not only the continuous clinical variables used for regression but also the categorical variable used for classification, with different tasks corresponding to the prediction of different variables. Specifically, our method contains two key components: (1) a multi-task feature selection step, which selects from each modality the common subset of features relevant to multiple variables, and (2) a multi-modal support vector machine, which fuses the selected features from all modalities to predict the multiple (regression and classification) variables. To validate our method, we perform two sets of experiments on ADNI baseline MRI, FDG-PET, and cerebrospinal fluid (CSF) data from 45 AD patients, 91 MCI patients, and 50 healthy controls (HC). In the first set of experiments, we estimate two clinical variables, the Mini Mental State Examination (MMSE) and the Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog), as well as one categorical variable (with values 'AD', 'MCI', or 'HC'), from the baseline MRI, FDG-PET, and CSF data. In the second set of experiments, we predict the 2-year changes of the MMSE and ADAS-Cog scores as well as the conversion of MCI to AD from the same baseline data. The results of both sets of experiments demonstrate that the proposed M3T learning scheme achieves better performance on both regression and classification tasks than conventional learning methods.
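The two components described in the abstract map onto standard tools: an L2,1-regularized (group-sparse) regression for the multi-task feature selection, and a weighted sum of per-modality kernels feeding a support vector machine for the multi-modal fusion. The sketch below illustrates this pipeline with scikit-learn on toy data; the feature counts, the regularization strength alpha, and the modality weights betas are illustrative assumptions, not the values or implementation used in the paper.

    # Toy illustration (assumed, not the authors' code) of the two M3T
    # components: group-sparse multi-task feature selection, then a
    # multi-kernel SVM over the modalities.
    import numpy as np
    from sklearn.linear_model import MultiTaskLasso
    from sklearn.svm import SVC, SVR

    rng = np.random.default_rng(0)

    # --- (1) Multi-task feature selection, shown for one modality ---
    # Rows of X_mri are subjects; the columns of Y stack the regression
    # targets (e.g. MMSE and ADAS-Cog) so both tasks are fit jointly.
    X_mri = rng.standard_normal((100, 93))       # e.g. 93 ROI features (assumed)
    W_true = np.zeros((93, 2))
    W_true[:5] = rng.standard_normal((5, 2))     # only 5 features truly relevant
    Y = X_mri @ W_true + 0.1 * rng.standard_normal((100, 2))

    # MultiTaskLasso imposes an L2,1 penalty: a feature is kept or dropped
    # for ALL tasks at once, yielding the common relevant subset.
    mtl = MultiTaskLasso(alpha=0.1).fit(X_mri, Y)
    selected = np.any(mtl.coef_ != 0, axis=0)
    X_mri_sel = X_mri[:, selected]

    # --- (2) Multi-modal SVM via a weighted sum of per-modality kernels ---
    X_pet_sel = rng.standard_normal((100, 30))   # PET features after selection (toy)
    X_csf = rng.standard_normal((100, 3))        # CSF biomarkers (toy)

    def linear_kernel(A, B):
        return A @ B.T

    betas = [0.5, 0.3, 0.2]                      # assumed modality weights (sum to 1)
    K = sum(b * linear_kernel(X, X)
            for b, X in zip(betas, [X_mri_sel, X_pet_sel, X_csf]))

    # The fused kernel drives both task types: SVC for the categorical
    # diagnosis, SVR for each continuous clinical score.
    labels = rng.integers(0, 3, size=100)        # 0 = HC, 1 = MCI, 2 = AD (toy)
    clf = SVC(kernel="precomputed").fit(K, labels)
    reg_mmse = SVR(kernel="precomputed").fit(K, Y[:, 0])

In the paper the modality weights are learned from the training data rather than fixed in advance; hard-coding them above simply keeps the sketch short.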

Original language: English
Pages (from-to): 895-907
Number of pages: 13
Journal: NeuroImage
Volume: 59
Issue number: 2
DOIs
Publication status: Published - 2012 Jan 16
Externally published: Yes

Keywords

  • ADAS-Cog
  • Alzheimer's disease (AD)
  • MCI conversion
  • MMSE
  • Multi-modal multi-task (M3T) learning
  • Multi-modality
  • Multi-task feature selection

ASJC Scopus subject areas

  • Neurology
  • Cognitive Neuroscience
