TY - GEN
T1 - Multi-stage diagnosis of Alzheimer’s disease with incomplete multimodal data via multi-task deep learning
AU - Thung, Kim-Han
AU - Yap, Pew-Thian
AU - Shen, Dinggang
N1 - Funding Information:
This work was supported in part by NIH grants NS093842, EB006733, EB008374, EB009634, EB022880, AG041721, MH100217, and AG042599.
Publisher Copyright:
© Springer International Publishing AG 2017.
PY - 2017
Y1 - 2017
N2 - Utilization of biomedical data from multiple modalities improves the diagnostic accuracy of neurodegenerative diseases. However, multi-modality data are often incomplete because not all data can be collected for every individual. When using such incomplete data for diagnosis, current approaches for addressing the problem of missing data, such as imputation, matrix completion, and multi-task learning, implicitly assume a linear data-to-label relationship, thereby limiting their performance. We thus propose multi-task deep learning for incomplete data, where prediction tasks associated with different modality combinations are learned jointly to improve the performance of each task. Specifically, we devise a multi-input multi-output deep learning framework and train the deep network subnet-wise, partially updating its weights according to the availability of modality data. Experimental results on the ADNI dataset show that our method outperforms state-of-the-art methods.
AB - Utilization of biomedical data from multiple modalities improves the diagnostic accuracy of neurodegenerative diseases. However, multi-modality data are often incomplete because not all data can be collected for every individual. When using such incomplete data for diagnosis, current approaches for addressing the problem of missing data, such as imputation, matrix completion, and multi-task learning, implicitly assume a linear data-to-label relationship, thereby limiting their performance. We thus propose multi-task deep learning for incomplete data, where prediction tasks associated with different modality combinations are learned jointly to improve the performance of each task. Specifically, we devise a multi-input multi-output deep learning framework and train the deep network subnet-wise, partially updating its weights according to the availability of modality data. Experimental results on the ADNI dataset show that our method outperforms state-of-the-art methods.
UR - http://www.scopus.com/inward/record.url?scp=85029806638&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-67558-9_19
DO - 10.1007/978-3-319-67558-9_19
M3 - Conference contribution
AN - SCOPUS:85029806638
SN - 9783319675572
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 160
EP - 168
BT - Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support - 3rd International Workshop, DLMIA 2017 and 7th International Workshop, ML-CDS 2017 Held in Conjunction with MICCAI 2017, Proceedings
A2 - Arbel, Tal
A2 - Cardoso, M. Jorge
PB - Springer Verlag
T2 - 3rd International Workshop on Deep Learning in Medical Image Analysis, DLMIA 2017, and 7th International Workshop on Multimodal Learning for Clinical Decision Support, ML-CDS 2017, held in conjunction with the 20th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2017
Y2 - 14 September 2017 through 14 September 2017
ER -
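
Note: the abstract describes a multi-input multi-output network trained subnet-wise, with weights partially updated according to which modalities a sample provides. The following is a minimal, hypothetical PyTorch sketch of that general idea, not the authors' implementation; all names (MultiModalNet, mri_net, head_both, dimensions, etc.) are illustrative assumptions.

import torch
import torch.nn as nn

class MultiModalNet(nn.Module):
    """Sketch: one feature subnet per modality, one output head per
    modality combination (here just MRI, PET, and MRI+PET)."""
    def __init__(self, mri_dim=90, pet_dim=90, hid=64, n_classes=3):
        super().__init__()
        # One feature-extraction subnet per modality.
        self.mri_net = nn.Sequential(nn.Linear(mri_dim, hid), nn.ReLU())
        self.pet_net = nn.Sequential(nn.Linear(pet_dim, hid), nn.ReLU())
        # One diagnostic head per modality combination (multi-output).
        self.head_mri = nn.Linear(hid, n_classes)
        self.head_pet = nn.Linear(hid, n_classes)
        self.head_both = nn.Linear(2 * hid, n_classes)

    def forward(self, mri=None, pet=None):
        # Route each batch through only the subnets whose data exists;
        # only those weights receive gradients (subnet-wise training).
        if mri is not None and pet is not None:
            h = torch.cat([self.mri_net(mri), self.pet_net(pet)], dim=1)
            return self.head_both(h)
        if mri is not None:
            return self.head_mri(self.mri_net(mri))
        return self.head_pet(self.pet_net(pet))

# Usage sketch: group samples by their available-modality pattern and train
# each group on its matching task, so backprop partially updates the network.
model = MultiModalNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

mri_only = torch.randn(8, 90)             # a batch with MRI but no PET
labels = torch.randint(0, 3, (8,))
opt.zero_grad()
loss = loss_fn(model(mri=mri_only), labels)
loss.backward()                           # pet_net and head_pet get no gradient
opt.step()                                # so only MRI-related weights update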