TY - JOUR
T1 - Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis
AU - Alzheimer's Disease Neuroimaging Initiative
AU - Suk, Heung-Il
AU - Lee, Seong-Whan
AU - Shen, Dinggang
N1 - Funding Information:
This work was supported in part by NIH grants EB006733, EB008374, EB009634, AG041721, MH100217, and AG042599, and also by the National Research Foundation grant (No. 2012-005741) funded by the Korean Government.
Publisher Copyright:
© 2014 Elsevier Inc.
PY - 2014/11/1
Y1 - 2014/11/1
N2 - Over the last decade, neuroimaging has been shown to be a promising tool for the diagnosis of Alzheimer's Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), and fusing different modalities can further provide complementary information that enhances diagnostic accuracy. Here, we focus on the problems of both feature representation and fusion of multimodal information from Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). To the best of our knowledge, previous methods in the literature mostly used hand-crafted features such as cortical thickness or gray matter densities from MRI, or voxel intensities from PET, and then combined these multimodal features by simply concatenating them into a long vector or transforming them into a higher-dimensional kernel space. In this paper, we propose a novel method for learning a high-level latent and shared feature representation from neuroimaging modalities via deep learning. Specifically, we use a Deep Boltzmann Machine (DBM; the acronym here is unrelated to Deformation Based Morphometry), a deep network with a restricted Boltzmann machine as its building block, to find a latent hierarchical feature representation from a 3D patch, and then devise a systematic method for a joint feature representation from paired MRI and PET patches with a multimodal DBM. To validate the effectiveness of the proposed method, we performed experiments on the ADNI dataset and compared with state-of-the-art methods. In three binary classification problems of AD vs. healthy Normal Control (NC), MCI vs. NC, and MCI converter vs. MCI non-converter, we obtained maximal accuracies of 95.35%, 85.67%, and 74.58%, respectively, outperforming the competing methods. By visual inspection of the trained model, we observed that the proposed method could hierarchically discover the complex latent patterns inherent in both MRI and PET.
AB - Over the last decade, neuroimaging has been shown to be a promising tool for the diagnosis of Alzheimer's Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), and fusing different modalities can further provide complementary information that enhances diagnostic accuracy. Here, we focus on the problems of both feature representation and fusion of multimodal information from Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). To the best of our knowledge, previous methods in the literature mostly used hand-crafted features such as cortical thickness or gray matter densities from MRI, or voxel intensities from PET, and then combined these multimodal features by simply concatenating them into a long vector or transforming them into a higher-dimensional kernel space. In this paper, we propose a novel method for learning a high-level latent and shared feature representation from neuroimaging modalities via deep learning. Specifically, we use a Deep Boltzmann Machine (DBM; the acronym here is unrelated to Deformation Based Morphometry), a deep network with a restricted Boltzmann machine as its building block, to find a latent hierarchical feature representation from a 3D patch, and then devise a systematic method for a joint feature representation from paired MRI and PET patches with a multimodal DBM. To validate the effectiveness of the proposed method, we performed experiments on the ADNI dataset and compared with state-of-the-art methods. In three binary classification problems of AD vs. healthy Normal Control (NC), MCI vs. NC, and MCI converter vs. MCI non-converter, we obtained maximal accuracies of 95.35%, 85.67%, and 74.58%, respectively, outperforming the competing methods. By visual inspection of the trained model, we observed that the proposed method could hierarchically discover the complex latent patterns inherent in both MRI and PET.
KW - Alzheimer's Disease
KW - Deep Boltzmann machine
KW - Mild cognitive impairment
KW - Multimodal data fusion
KW - Shared feature representation
UR - http://www.scopus.com/inward/record.url?scp=84907019192&partnerID=8YFLogxK
U2 - 10.1016/j.neuroimage.2014.06.077
DO - 10.1016/j.neuroimage.2014.06.077
M3 - Article
C2 - 25042445
AN - SCOPUS:84907019192
VL - 101
SP - 569
EP - 582
JO - NeuroImage
JF - NeuroImage
SN - 1053-8119
ER -