Deep multi-modal latent representation learning for automated dementia diagnosis

Tao Zhou, Mingxia Liu, Huazhu Fu, Jun Wang, Jianbing Shen, Ling Shao, Dinggang Shen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Effective fusion of multi-modality neuroimaging data, such as structural magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (PET), has attracted increasing interest in computer-aided brain disease diagnosis, since the modalities provide complementary structural and functional information about the brain that can improve diagnostic performance. Although considerable progress has been made, several significant challenges remain in traditional methods for fusing multi-modality data. First, the fusion of multi-modality data is usually independent of the training of diagnostic models, leading to sub-optimal performance. Second, it is challenging to effectively exploit the complementary information among multiple modalities based on low-level imaging features (e.g., image intensity or tissue volume). To this end, in this paper, we propose a novel Deep Latent Multi-modality Dementia Diagnosis (DLMD2) framework based on a deep non-negative matrix factorization (NMF) model. Specifically, we integrate the feature fusion/learning process into the classifier construction step to bridge the gap between neuroimaging features and disease labels. To exploit the correlations among multi-modality data, we learn latent representations for multi-modality data by sharing the common high-level representations in the last layer of each modality in the deep NMF model. Extensive experimental results on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset validate that our proposed method outperforms several state-of-the-art methods.
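The core shared-latent idea described in the abstract — each modality factorized through its own deep (here, two-layer) non-negative factorization, with the top-layer representation tied across modalities — can be illustrated with a small NumPy sketch using standard multiplicative NMF updates. This is a toy illustration under assumed dimensions and variable names, not the authors' implementation; in the paper the learned shared representation is further coupled with classifier training, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def deep_nmf_multimodal(Xs, d1=20, k=5, iters=200, eps=1e-9):
    """Toy two-layer deep NMF with a shared top-layer representation.

    For each modality m: X_m ~= W1_m @ W2_m @ H, where H (k x n) is
    shared across modalities. Uses multiplicative updates, which keep
    all factors non-negative. Illustrative sketch, not the paper's
    exact algorithm.
    """
    n = Xs[0].shape[1]                       # number of subjects (shared)
    W1 = [rng.random((X.shape[0], d1)) for X in Xs]
    W2 = [rng.random((d1, k)) for _ in Xs]
    H = rng.random((k, n))                   # shared latent representation
    for _ in range(iters):
        for m, X in enumerate(Xs):
            # update modality-specific layers with H fixed
            B = W2[m] @ H
            W1[m] *= (X @ B.T) / (W1[m] @ B @ B.T + eps)
            A = W1[m]
            W2[m] *= (A.T @ X @ H.T) / (A.T @ A @ W2[m] @ H @ H.T + eps)
        # update the shared H by pooling gradients over all modalities
        num = sum((W1[m] @ W2[m]).T @ Xs[m] for m in range(len(Xs)))
        den = sum((W1[m] @ W2[m]).T @ (W1[m] @ W2[m]) @ H
                  for m in range(len(Xs)))
        H *= num / (den + eps)
    return W1, W2, H

# synthetic stand-ins for MRI and PET feature matrices (features x subjects)
X_mri = rng.random((30, 50))
X_pet = rng.random((25, 50))
W1, W2, H = deep_nmf_multimodal([X_mri, X_pet])
err = np.linalg.norm(X_mri - W1[0] @ W2[0] @ H) / np.linalg.norm(X_mri)
print(H.shape, round(err, 3))
```

The shared `H` then serves as a common subject-level representation that a downstream classifier could consume; in DLMD2 that classifier is trained jointly with the factorization rather than afterwards.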

Original language: English
Title of host publication: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019 - 22nd International Conference, Proceedings
Editors: Dinggang Shen, Pew-Thian Yap, Tianming Liu, Terry M. Peters, Ali Khan, Lawrence H. Staib, Caroline Essert, Sean Zhou
Publisher: Springer
Pages: 629-638
Number of pages: 10
ISBN (Print): 9783030322502
DOI: https://doi.org/10.1007/978-3-030-32251-9_69
Publication status: Published - 2019
Externally published: Yes
Event: 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2019 - Shenzhen, China
Duration: 2019 Oct 13 – 2019 Oct 17

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11767 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2019
Country: China
City: Shenzhen
Period: 19/10/13 – 19/10/17

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science(all)


Cite this

    Zhou, T., Liu, M., Fu, H., Wang, J., Shen, J., Shao, L., & Shen, D. (2019). Deep multi-modal latent representation learning for automated dementia diagnosis. In D. Shen, P-T. Yap, T. Liu, T. M. Peters, A. Khan, L. H. Staib, C. Essert, & S. Zhou (Eds.), Medical Image Computing and Computer Assisted Intervention – MICCAI 2019 - 22nd International Conference, Proceedings (pp. 629-638). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 11767 LNCS). Springer. https://doi.org/10.1007/978-3-030-32251-9_69