TY - GEN
T1 - Synthesizing missing PET from MRI with cycle-consistent generative adversarial networks for Alzheimer’s disease diagnosis
AU - Pan, Yongsheng
AU - Liu, Mingxia
AU - Lian, Chunfeng
AU - Zhou, Tao
AU - Xia, Yong
AU - Shen, Dinggang
N1 - Funding Information:
This research was supported in part by the National Natural Science Foundation of China under Grants 61771397 and 61471297, in part by Innovation Foundation for Doctor Dissertation of NPU under Grants CX201835, and in part by NIH grants EB008374, AG041721, AG042599, EB022880. Data collection and sharing for this project was funded by the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and DOD ADNI.
Publisher Copyright:
© Springer Nature Switzerland AG 2018.
PY - 2018
Y1 - 2018
N2 - Multi-modal neuroimages (e.g., MRI and PET) have been widely used for the diagnosis of brain diseases such as Alzheimer’s disease (AD), as they provide complementary information. In practice, however, missing data are unavoidable, e.g., PET data are missing for many subjects in the ADNI dataset. A straightforward strategy to tackle this challenge is to simply discard subjects with missing PET, but this significantly reduces the number of training subjects available for learning reliable diagnostic models. On the other hand, since different modalities (i.e., MRI and PET) are acquired from the same subject, there often exists an underlying relationship between them. Accordingly, we propose a two-stage deep learning framework for AD diagnosis using both MRI and PET data. Specifically, in the first stage, we impute missing PET data from the corresponding MRI data using 3D Cycle-consistent Generative Adversarial Networks (3D-cGAN) to capture this underlying relationship. In the second stage, with the complete MRI and PET data (i.e., after imputation for subjects with missing PET), we develop a deep multi-instance neural network for AD diagnosis as well as mild cognitive impairment (MCI) conversion prediction. Experimental results on subjects from ADNI demonstrate that the PET images synthesized with 3D-cGAN are reasonable, and that our two-stage deep learning method outperforms state-of-the-art methods in AD diagnosis.
AB - Multi-modal neuroimages (e.g., MRI and PET) have been widely used for the diagnosis of brain diseases such as Alzheimer’s disease (AD), as they provide complementary information. In practice, however, missing data are unavoidable, e.g., PET data are missing for many subjects in the ADNI dataset. A straightforward strategy to tackle this challenge is to simply discard subjects with missing PET, but this significantly reduces the number of training subjects available for learning reliable diagnostic models. On the other hand, since different modalities (i.e., MRI and PET) are acquired from the same subject, there often exists an underlying relationship between them. Accordingly, we propose a two-stage deep learning framework for AD diagnosis using both MRI and PET data. Specifically, in the first stage, we impute missing PET data from the corresponding MRI data using 3D Cycle-consistent Generative Adversarial Networks (3D-cGAN) to capture this underlying relationship. In the second stage, with the complete MRI and PET data (i.e., after imputation for subjects with missing PET), we develop a deep multi-instance neural network for AD diagnosis as well as mild cognitive impairment (MCI) conversion prediction. Experimental results on subjects from ADNI demonstrate that the PET images synthesized with 3D-cGAN are reasonable, and that our two-stage deep learning method outperforms state-of-the-art methods in AD diagnosis.
UR - http://www.scopus.com/inward/record.url?scp=85053889491&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-00931-1_52
DO - 10.1007/978-3-030-00931-1_52
M3 - Conference contribution
AN - SCOPUS:85053889491
SN - 9783030009304
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 455
EP - 463
BT - Medical Image Computing and Computer Assisted Intervention – MICCAI 2018 - 21st International Conference, Proceedings
A2 - Frangi, Alejandro F.
A2 - Davatzikos, Christos
A2 - Fichtinger, Gabor
A2 - Alberola-López, Carlos
A2 - Schnabel, Julia A.
PB - Springer Verlag
T2 - 21st International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2018
Y2 - 16 September 2018 through 20 September 2018
ER -