Feature learning and fusion of multimodality neuroimaging and genetic data for multi-status dementia diagnosis

Tao Zhou, Kim Han Thung, Xiaofeng Zhu, Dinggang Shen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

10 Citations (Scopus)

Abstract

In this paper, we aim to maximally utilize multimodality neuroimaging and genetic data to predict Alzheimer’s disease (AD) and its prodromal status, i.e., a multi-status dementia diagnosis problem. Multimodality neuroimaging data such as MRI and PET provide valuable insights into brain abnormalities, while genetic data such as Single Nucleotide Polymorphisms (SNPs) provide information about a patient’s AD risk factors. Used in conjunction, they may improve AD diagnosis. However, these data are heterogeneous (e.g., they have different data distributions) and have different numbers of samples (e.g., far fewer PET samples are available than MRI or SNP samples). Thus, learning an effective model from these data is challenging. To this end, we present a novel three-stage deep feature learning and fusion framework, in which the deep neural network is trained stage-wise. Each stage of the network learns feature representations for a different combination of modalities, trained effectively using the maximum number of available samples. Specifically, in the first stage, we learn latent representations (i.e., high-level features) for each modality independently, so that the heterogeneity between modalities can be better addressed before the modalities are combined in the next stage. In the second stage, we learn joint latent features for each pair of modalities, using the high-level features learned in the first stage. In the third stage, we learn the diagnostic labels by fusing the joint latent features learned in the second stage. We have tested our framework on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset for multi-status AD diagnosis, and the experimental results show that the proposed framework outperforms competing methods.
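As a rough illustration of the three-stage data flow described in the abstract (not the authors' implementation), the following NumPy sketch uses made-up sample counts and feature dimensions, with single-layer tanh projections standing in for the trained sub-networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample counts: PET has far fewer samples than MRI or SNP.
n_mri, n_pet, n_snp = 800, 400, 800
d_mri, d_pet, d_snp, d_latent = 90, 90, 3000, 32

X = {"mri": rng.normal(size=(n_mri, d_mri)),
     "pet": rng.normal(size=(n_pet, d_pet)),
     "snp": rng.normal(size=(n_snp, d_snp))}

def encoder(d_in, d_out):
    """One linear layer with tanh, standing in for a trained sub-network."""
    W = rng.normal(scale=0.1, size=(d_in, d_out))
    return lambda x: np.tanh(x @ W)

# Stage 1: modality-specific encoders; each can be trained on all samples
# available for its modality, so the MRI/SNP nets see more data than PET's.
enc = {m: encoder(X[m].shape[1], d_latent) for m in X}
H = {m: enc[m](X[m]) for m in X}

# Stage 2: joint encoders for each pair of modalities, applied to samples
# that have both modalities (here, for simplicity, the first n_common rows).
pairs = [("mri", "pet"), ("mri", "snp"), ("pet", "snp")]
joint_enc = {p: encoder(2 * d_latent, d_latent) for p in pairs}
n_common = min(h.shape[0] for h in H.values())
J = {p: joint_enc[p](np.hstack([H[p[0]][:n_common], H[p[1]][:n_common]]))
     for p in pairs}

# Stage 3: fuse the pairwise joint features and predict one of the
# dementia statuses (e.g., normal control / MCI / AD).
fused = np.hstack([J[p] for p in pairs])
n_classes = 3
W_out = rng.normal(scale=0.1, size=(fused.shape[1], n_classes))
pred = (fused @ W_out).argmax(axis=1)
print(pred.shape)  # one predicted status per fully-observed sample
```

In the actual framework these stages are deep networks trained sequentially, so stages that need only one modality are not limited to the (smaller) set of subjects who have all modalities.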

Original language: English
Title of host publication: Machine Learning in Medical Imaging - 8th International Workshop, MLMI 2017, Held in Conjunction with MICCAI 2017, Proceedings
Editors: Yinghuan Shi, Heung-Il Suk, Kenji Suzuki, Qian Wang
Publisher: Springer Verlag
Pages: 132-140
Number of pages: 9
ISBN (Print): 9783319673882
DOI: https://doi.org/10.1007/978-3-319-67389-9_16
Publication status: Published - 2017
Externally published: Yes
Event: 8th International Workshop on Machine Learning in Medical Imaging, MLMI 2017, held in conjunction with the 20th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2017 - Quebec City, Canada
Duration: 2017 Sep 10 - 2017 Sep 10

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 10541 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 8th International Workshop on Machine Learning in Medical Imaging, MLMI 2017, held in conjunction with the 20th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2017
Country: Canada
City: Quebec City
Period: 17/9/10 - 17/9/10

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science(all)


Cite this

    Zhou, T., Thung, K. H., Zhu, X., & Shen, D. (2017). Feature learning and fusion of multimodality neuroimaging and genetic data for multi-status dementia diagnosis. In Y. Shi, H-I. Suk, K. Suzuki, & Q. Wang (Eds.), Machine Learning in Medical Imaging - 8th International Workshop, MLMI 2017, Held in Conjunction with MICCAI 2017, Proceedings (pp. 132-140). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 10541 LNCS). Springer Verlag. https://doi.org/10.1007/978-3-319-67389-9_16