Deep adversarial learning for multi-modality missing data completion

Lei Cai, Zhengyang Wang, Hongyang Gao, Dinggang Shen, Shuiwang Ji

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

6 Citations (Scopus)

Abstract

Multi-modality data are widely used in clinical applications, such as tumor detection and brain disease diagnosis. Different modalities usually provide complementary information, which commonly leads to improved performance. However, some modalities are often missing for some subjects due to various technical and practical reasons. As a result, multi-modality data are usually incomplete, giving rise to the multi-modality missing data completion problem. In this work, we formulate the problem as a conditional image generation task and propose an encoder-decoder deep neural network to tackle it. Specifically, the model takes the existing modality as input and generates the missing modality. By employing an auxiliary adversarial loss, our model is able to generate high-quality missing-modality images. In addition, we propose to incorporate the available category information of subjects during training to enable the model to generate more informative images. We evaluate our method on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, in which positron emission tomography (PET) modalities are missing for some subjects. Experimental results show that the trained network can generate high-quality PET modalities from existing magnetic resonance imaging (MRI) modalities and provide complementary information that improves the detection and tracking of Alzheimer's disease. Our results also show that the proposed methods generate higher-quality images than baseline methods, as measured by various image quality metrics.
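
The abstract describes the approach at the architectural level: an encoder-decoder generator maps the available MRI modality to the missing PET modality, an auxiliary adversarial loss encourages realistic outputs, and the subject's diagnostic category is used as additional conditioning during training. The following PyTorch sketch illustrates that setup in minimal form; it is not the authors' implementation, and the layer sizes, the 2D-slice setup, the label-as-extra-channel conditioning, and the loss weight lam_adv are placeholders chosen for illustration.

# Hypothetical sketch (not the authors' released code): conditional MRI-to-PET
# generation with an L1 reconstruction loss plus an auxiliary adversarial loss.
import torch
import torch.nn as nn


class Generator(nn.Module):
    """Encoder-decoder: MRI in, synthetic PET out (2D slices assumed)."""

    def __init__(self, in_ch=1, out_ch=1, base=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, mri):
        return self.decoder(self.encoder(mri))


class Discriminator(nn.Module):
    """Scores real vs. generated PET; the category label enters as an extra channel."""

    def __init__(self, in_ch=2, base=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(base * 2, 1),
        )

    def forward(self, pet, label):
        # Broadcast the scalar class label over the spatial dimensions.
        label_map = label.float().view(-1, 1, 1, 1).expand(-1, 1, *pet.shape[2:])
        return self.net(torch.cat([pet, label_map], dim=1))


def train_step(G, D, opt_g, opt_d, mri, pet_real, label, lam_adv=0.01):
    """One alternating GAN update; lam_adv weights the auxiliary adversarial term."""
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    # Discriminator update: real PET scored toward 1, generated PET toward 0.
    opt_d.zero_grad()
    pet_fake = G(mri).detach()
    logit_real, logit_fake = D(pet_real, label), D(pet_fake, label)
    d_loss = (bce(logit_real, torch.ones_like(logit_real))
              + bce(logit_fake, torch.zeros_like(logit_fake)))
    d_loss.backward()
    opt_d.step()

    # Generator update: reconstruct the true PET while fooling the discriminator.
    opt_g.zero_grad()
    pet_fake = G(mri)
    logit_fake = D(pet_fake, label)
    g_loss = l1(pet_fake, pet_real) + lam_adv * bce(logit_fake, torch.ones_like(logit_fake))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()


if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    mri = torch.randn(4, 1, 64, 64)    # toy paired MRI slices
    pet = torch.randn(4, 1, 64, 64)    # toy paired PET slices
    label = torch.randint(0, 2, (4,))  # diagnostic category, assumed binary here
    print(train_step(G, D, opt_g, opt_d, mri, pet, label))

The toy tensors only verify that one training step runs; in the paper, the completed PET images are then used together with the real MRI data for downstream Alzheimer's disease detection and tracking.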

Original language: English
Title of host publication: KDD 2018 - Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Publisher: Association for Computing Machinery
Pages: 1158-1166
Number of pages: 9
ISBN (Print): 9781450355520
DOIs: https://doi.org/10.1145/3219819.3219963
Publication status: Published - 2018 Jul 19
Externally published: Yes
Event: 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2018 - London, United Kingdom
Duration: 2018 Aug 19 → 2018 Aug 23

Other

Other: 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2018
Country: United Kingdom
City: London
Period: 18/8/19 → 18/8/23

Fingerprint

  • Positron emission tomography
  • Image quality
  • Neuroimaging
  • Magnetic resonance
  • Tumors
  • Brain
  • Statistics
  • Imaging techniques
  • Deep learning
  • Deep neural networks

Keywords

  • Adversarial loss function
  • Deep learning
  • Disease diagnosis
  • Missing data completion

ASJC Scopus subject areas

  • Software
  • Information Systems

Cite this

Cai, L., Wang, Z., Gao, H., Shen, D., & Ji, S. (2018). Deep adversarial learning for multi-modality missing data completion. In KDD 2018 - Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1158-1166). Association for Computing Machinery. https://doi.org/10.1145/3219819.3219963
