Deep multi-modal latent representation learning for automated dementia diagnosis

Tao Zhou, Mingxia Liu, Huazhu Fu, Jun Wang, Jianbing Shen, Ling Shao, Dinggang Shen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Effective fusion of multi-modality neuroimaging data, such as structural magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (PET), has attracted increasing interest in computer-aided brain disease diagnosis, by providing complementary structural and functional information of the brain to improve diagnostic performance. Although considerable progress has been made, there remain several significant challenges in traditional methods for fusing multi-modality data. First, the fusion of multi-modality data is usually independent of the training of diagnostic models, leading to sub-optimal performance. Second, it is challenging to effectively exploit the complementary information among multiple modalities based on low-level imaging features (e.g., image intensity or tissue volume). To this end, in this paper, we propose a novel Deep Latent Multi-modality Dementia Diagnosis (DLMD2) framework based on a deep non-negative matrix factorization (NMF) model. Specifically, we integrate the feature fusion/learning process into the classifier construction step for eliminating the gap between neuroimaging features and disease labels. To exploit the correlations among multi-modality data, we learn latent representations for multi-modality data by sharing the common high-level representations in the last layer of each modality in the deep NMF model. Extensive experimental results on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset validate that our proposed method outperforms several state-of-the-art methods.
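For readers who want a concrete picture of the shared-last-layer idea described in the abstract, the following is a minimal, illustrative sketch of a multi-modal deep non-negative matrix factorization fitted by projected gradient descent. It is not the authors' DLMD2 implementation or optimisation scheme; the function name multimodal_deep_nmf, the layer sizes, the learning rate, and the synthetic MRI/PET feature matrices are all hypothetical, chosen only to make the example self-contained.

```python
import numpy as np


def multimodal_deep_nmf(X_list, layer_sizes, n_iter=300, lr=1e-3, seed=0):
    """Toy multi-modal deep NMF via projected gradient descent.

    Each non-negative data matrix X_m (features x samples) is approximated as
    X_m ~= W_m^(1) @ ... @ W_m^(L) @ H, where the deepest code H is shared by
    all modalities.  This only illustrates the shared-last-layer idea from the
    abstract; it is not the authors' optimisation scheme.
    """
    rng = np.random.default_rng(seed)
    n_samples = X_list[0].shape[1]
    # One stack of non-negative basis matrices per modality.
    W = []
    for X in X_list:
        dims = [X.shape[0]] + list(layer_sizes)
        W.append([0.1 * rng.random((dims[l], dims[l + 1]))
                  for l in range(len(layer_sizes))])
    # Shared latent representation used by every modality.
    H = 0.1 * rng.random((layer_sizes[-1], n_samples))

    for _ in range(n_iter):
        grad_H = np.zeros_like(H)
        for m, X in enumerate(X_list):
            # Full basis product for this modality and its reconstruction residual.
            B = W[m][0]
            for Wl in W[m][1:]:
                B = B @ Wl
            R = B @ H - X
            grad_H += B.T @ R  # the shared H accumulates gradients from every modality
            # Gradient w.r.t. each layer, computed before any update this sweep.
            grads = []
            for l in range(len(W[m])):
                left = np.eye(X.shape[0]) if l == 0 else W[m][0]
                for Wl in W[m][1:l]:
                    left = left @ Wl
                right = H
                for Wl in reversed(W[m][l + 1:]):
                    right = Wl @ right
                grads.append(left.T @ R @ right.T)
            for l, g in enumerate(grads):
                W[m][l] = np.maximum(W[m][l] - lr * g, 0.0)  # projected (non-negative) step
        H = np.maximum(H - lr * grad_H, 0.0)
    return W, H


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X_mri = rng.random((90, 40))  # hypothetical MRI ROI features (90 ROIs, 40 subjects)
    X_pet = rng.random((90, 40))  # hypothetical PET ROI features
    W, H = multimodal_deep_nmf([X_mri, X_pet], layer_sizes=[50, 20])
    print("Shared latent representation:", H.shape)  # (20, 40)
```

The point this sketch mirrors from the abstract is that each modality keeps its own stack of basis matrices while the deepest representation H is shared, so the learned latent code is forced to be consistent across the MRI and PET inputs; the shared H (rather than low-level features) would then feed a classifier.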

Original language: English
Title of host publication: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019 - 22nd International Conference, Proceedings
Editors: Dinggang Shen, Pew-Thian Yap, Tianming Liu, Terry M. Peters, Ali Khan, Lawrence H. Staib, Caroline Essert, Sean Zhou
Publisher: Springer
Pages: 629-638
Number of pages: 10
ISBN (Print): 9783030322502
DOIs: 10.1007/978-3-030-32251-9_69
Publication status: Published - 2019 Jan 1
Externally published: Yes
Event: 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2019 - Shenzhen, China
Duration: 2019 Oct 13 – 2019 Oct 17

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11767 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2019
Country: China
City: Shenzhen
Period: 19/10/13 – 19/10/17


ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Zhou, T., Liu, M., Fu, H., Wang, J., Shen, J., Shao, L., & Shen, D. (2019). Deep multi-modal latent representation learning for automated dementia diagnosis. In D. Shen, P-T. Yap, T. Liu, T. M. Peters, A. Khan, L. H. Staib, C. Essert, ... S. Zhou (Eds.), Medical Image Computing and Computer Assisted Intervention – MICCAI 2019 - 22nd International Conference, Proceedings (pp. 629-638). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 11767 LNCS). Springer. https://doi.org/10.1007/978-3-030-32251-9_69

