Effective feature learning and fusion of multimodality data using stage-wise deep neural network for dementia diagnosis

Tao Zhou, Kim Han Thung, Xiaofeng Zhu, Dinggang Shen

Research output: Contribution to journal › Article

21 Citations (Scopus)

Abstract

In this article, we aim to maximally utilize multimodality neuroimaging and genetic data to identify Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI), from normal aging subjects. Multimodality neuroimaging data such as MRI and PET provide valuable insights into brain abnormalities, while genetic data such as single nucleotide polymorphisms (SNPs) provide information about a patient's AD risk factors. When these data are used together, the accuracy of AD diagnosis may be improved. However, these data are heterogeneous (e.g., they have different data distributions), and the modalities have different numbers of samples (e.g., far fewer PET samples than MRI or SNP samples). Thus, learning an effective model from these data is challenging. To this end, we present a novel three-stage deep feature learning and fusion framework, in which a deep neural network is trained stage-wise. Each stage of the network learns feature representations for a different combination of modalities, trained effectively with the maximum number of samples available for that combination. Specifically, in the first stage, we learn latent representations (i.e., high-level features) for each modality independently, so that the heterogeneity among modalities can be partially addressed and the high-level features from different modalities can be combined in the next stage. In the second stage, we learn joint latent features for each pairwise combination of modalities, using the high-level features learned in the first stage. In the third stage, we predict the diagnostic labels by fusing the joint latent features learned in the second stage. To further increase the number of training samples, we also use data from multiple scanning time points for each training subject. We evaluate the proposed framework on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset for AD diagnosis, and the experimental results show that it outperforms other state-of-the-art methods.
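The three-stage design described above maps naturally onto a modular network. The following is a minimal PyTorch sketch of that idea, not the authors' implementation: the input dimensions, layer widths, latent size, and two-class output are hypothetical placeholders, and the stage-wise training schedule (fitting each stage with all samples available for its modality combination) is only indicated in the comments.

import itertools
import torch
import torch.nn as nn

# Hypothetical input dimensions for the three modalities (placeholders,
# not the feature counts used in the paper).
MODALITIES = {"mri": 90, "pet": 90, "snp": 3000}
LATENT = 32  # hypothetical latent-feature size


def block(d_in, d_out):
    # Small fully connected subnetwork reused at every stage.
    return nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_out))


class StageWiseFusionNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        # Stage 1: one subnetwork per modality learns modality-specific
        # latent features; each can be trained on every subject that has
        # that modality, regardless of the others.
        self.stage1 = nn.ModuleDict(
            {m: block(d, LATENT) for m, d in MODALITIES.items()})
        # Stage 2: one subnetwork per modality pair learns joint latent
        # features from the concatenated stage-1 outputs; each is trained
        # on subjects that have both modalities of its pair.
        self.pairs = list(itertools.combinations(MODALITIES, 2))
        self.stage2 = nn.ModuleDict(
            {f"{a}+{b}": block(2 * LATENT, LATENT) for a, b in self.pairs})
        # Stage 3: fuse all pairwise joint features into diagnostic labels,
        # trained on the subjects with complete data.
        self.stage3 = block(len(self.pairs) * LATENT, n_classes)

    def forward(self, x):
        # x: dict mapping modality name -> (batch, dim) tensor.
        h = {m: net(x[m]) for m, net in self.stage1.items()}
        joint = [self.stage2[f"{a}+{b}"](torch.cat([h[a], h[b]], dim=1))
                 for a, b in self.pairs]
        return self.stage3(torch.cat(joint, dim=1))


net = StageWiseFusionNet()
toy = {m: torch.randn(4, d) for m, d in MODALITIES.items()}
logits = net(toy)  # shape (4, 2): class scores for a toy batch of 4 subjects

In a stage-wise regime, the stage-1 subnetworks would be trained first and then held fixed while stage 2 is trained, and likewise for stage 3; this is what lets each stage exploit the maximum number of samples available for its modality combination rather than only the subjects with complete data.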

Original language: English
Journal: Human Brain Mapping
ISSN: 1065-9471
Publisher: Wiley-Liss Inc.
DOI: 10.1002/hbm.24428
PMID: 30381863
Publication status: Accepted/In press - 2018 Jan 1

Keywords

  • Alzheimer's disease (AD)
  • deep learning
  • mild cognitive impairment (MCI)
  • multimodality data fusion

ASJC Scopus subject areas

  • Anatomy
  • Radiological and Ultrasound Technology
  • Radiology, Nuclear Medicine and Imaging
  • Neurology
  • Clinical Neurology

Cite this

Zhou, T., Thung, K. H., Zhu, X., & Shen, D. (2018). Effective feature learning and fusion of multimodality data using stage-wise deep neural network for dementia diagnosis. Human Brain Mapping. https://doi.org/10.1002/hbm.24428
