Deep learning based imaging data completion for improved brain disease diagnosis.

Rongjian Li, Wenlu Zhang, Heung Il Suk, L. Wang, Jiang Li, Dinggang Shen, Shuiwang Ji

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

Combining multi-modality brain data for disease diagnosis commonly leads to improved performance. A challenge in using multi-modality data is that the data are often incomplete; that is, some modalities may be missing for some subjects. In this work, we propose a deep-learning-based framework for estimating multi-modality imaging data. Our method takes the form of a convolutional neural network whose input and output are two volumetric modalities. The network contains a large number of trainable parameters that capture the relationship between the input and output modalities. When trained on subjects for whom all modalities are available, the network can estimate the output modality given the input modality. We evaluated our method on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, where the input and output modalities are MRI and PET images, respectively. Results show that our method significantly outperforms prior methods.
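The record does not give the network's architecture details, but the core idea — a volumetric convolutional network that maps one 3D modality (MRI) to an estimate of another (PET) — can be sketched minimally. The following is an illustrative toy, not the authors' implementation: a single hand-rolled 3D convolution with a ReLU-like nonlinearity applied to a random MRI-like patch; the kernel stands in for the trainable parameters the abstract mentions.

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid-mode 3D cross-correlation of a volume with a single kernel."""
    kd, kh, kw = kernel.shape
    d, h, w = volume.shape
    out = np.zeros((d - kd + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Weighted sum over the local 3D neighborhood
                out[i, j, k] = np.sum(volume[i:i+kd, j:j+kh, k:k+kw] * kernel)
    return out

rng = np.random.default_rng(0)
mri_patch = rng.standard_normal((16, 16, 16))      # stand-in for an MRI patch
kernel = rng.standard_normal((3, 3, 3)) * 0.1      # one "trainable" filter (random here)
pet_estimate = np.maximum(conv3d(mri_patch, kernel), 0.0)  # nonlinearity
print(pet_estimate.shape)  # (14, 14, 14): valid convolution shrinks each axis by 2
```

In the actual framework the filters would be learned by minimizing a reconstruction loss between predicted and true PET volumes over subjects with both modalities; completed PET estimates can then be produced for subjects where PET is missing.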

Original language: English
Title of host publication: Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention
Pages: 305-312
Number of pages: 8
Volume: 17
Edition: Pt 3
Publication status: Published - 2014 Jan 1

Fingerprint

  • Brain Diseases
  • Learning
  • Neuroimaging
  • Alzheimer Disease
  • Databases

ASJC Scopus subject areas

  • Medicine (all)

Cite this

Li, R., Zhang, W., Suk, H. I., Wang, L., Li, J., Shen, D., & Ji, S. (2014). Deep learning based imaging data completion for improved brain disease diagnosis. In Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention (Pt 3 ed., Vol. 17, pp. 305-312)


PubMed ID: 25320813

Scopus: http://www.scopus.com/inward/record.url?scp=84909595477&partnerID=8YFLogxK