Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis

Alzheimer's Disease Neuroimaging Initiative

Research output: Contribution to journal › Article › peer-review

427 Citations (Scopus)

Abstract

For the last decade, it has been shown that neuroimaging can be a potential tool for the diagnosis of Alzheimer's Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), and that fusion of different modalities can further provide complementary information to enhance diagnostic accuracy. Here, we focus on the problems of both feature representation and fusion of multimodal information from Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). To the best of our knowledge, previous methods in the literature mostly used hand-crafted features such as cortical thickness and gray matter densities from MRI, or voxel intensities from PET, and then combined these multimodal features by simply concatenating them into a long vector or transforming them into a higher-dimensional kernel space. In this paper, we propose a novel method for a high-level latent and shared feature representation from neuroimaging modalities via deep learning. Specifically, we use a Deep Boltzmann Machine (DBM), a deep network with a restricted Boltzmann machine as a building block, to find a latent hierarchical feature representation from a 3D patch, and then devise a systematic method for a joint feature representation from the paired patches of MRI and PET with a multimodal DBM. (Although the acronym is clear from context, we note that DBM here denotes "Deep Boltzmann Machine" and is not related to "Deformation Based Morphometry".) To validate the effectiveness of the proposed method, we performed experiments on the ADNI dataset and compared the results with state-of-the-art methods. In three binary classification problems of AD vs. healthy Normal Control (NC), MCI vs. NC, and MCI converter vs. MCI non-converter, we obtained maximal accuracies of 95.35%, 85.67%, and 74.58%, respectively, outperforming the competing methods. By visual inspection of the trained model, we observed that the proposed method could hierarchically discover the complex latent patterns inherent in both MRI and PET.
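
For readers unfamiliar with the model family, the energy functions below sketch the restricted Boltzmann machine building block, a two-layer DBM over a single 3D patch, and a multimodal DBM whose MRI and PET pathways are joined by a shared top hidden layer. This follows the generic RBM/DBM and multimodal DBM formulations rather than the paper itself; the symbols (W, b, c, h^{(1)}, h^{(2)}, h^{(s)}), the two-pathway layout, the omitted biases, and the binary (rather than Gaussian) visible units are illustrative assumptions.

% Restricted Boltzmann machine (building block): visible units v, hidden units h,
% weight matrix W, biases b and c; the joint distribution is defined by the energy
E_{\mathrm{RBM}}(\mathbf{v},\mathbf{h}) = -\mathbf{b}^{\top}\mathbf{v} - \mathbf{c}^{\top}\mathbf{h} - \mathbf{v}^{\top}\mathbf{W}\mathbf{h},
\qquad P(\mathbf{v},\mathbf{h}) \propto \exp\!\bigl(-E_{\mathrm{RBM}}(\mathbf{v},\mathbf{h})\bigr).

% Two-layer DBM over one 3D patch (biases omitted for brevity)
E_{\mathrm{DBM}}(\mathbf{v},\mathbf{h}^{(1)},\mathbf{h}^{(2)}) = -\mathbf{v}^{\top}\mathbf{W}^{(1)}\mathbf{h}^{(1)} - (\mathbf{h}^{(1)})^{\top}\mathbf{W}^{(2)}\mathbf{h}^{(2)}.

% Multimodal DBM: MRI pathway (subscript M) and PET pathway (subscript P)
% tied by a shared top layer h^{(s)}, which acts as the joint feature representation
E(\mathbf{v}_{M},\mathbf{v}_{P},\mathbf{h}) = E_{\mathrm{DBM}}(\mathbf{v}_{M},\mathbf{h}^{(1)}_{M},\mathbf{h}^{(2)}_{M}) + E_{\mathrm{DBM}}(\mathbf{v}_{P},\mathbf{h}^{(1)}_{P},\mathbf{h}^{(2)}_{P}) - (\mathbf{h}^{(2)}_{M})^{\top}\mathbf{W}^{(s)}_{M}\mathbf{h}^{(s)} - (\mathbf{h}^{(2)}_{P})^{\top}\mathbf{W}^{(s)}_{P}\mathbf{h}^{(s)}.

In this generic layout, inferring the posterior over h^{(s)} given a paired MRI/PET patch yields the shared latent feature vector referred to in the abstract; the lower layers are typically greedily pre-trained as RBMs before the full model is fine-tuned jointly.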

Original language: English
Pages (from-to): 569-582
Number of pages: 14
Journal: NeuroImage
Volume: 101
DOIs
Publication status: Published - 2014 Nov 1

Keywords

  • Alzheimer's Disease
  • Deep Boltzmann machine
  • Mild cognitive impairment
  • Multimodal data fusion
  • Shared feature representation

ASJC Scopus subject areas

  • Neurology
  • Cognitive Neuroscience

