TY - JOUR
T1 - Medical Image Synthesis with Deep Convolutional Adversarial Networks
AU - Nie, Dong
AU - Trullo, Roger
AU - Lian, Jun
AU - Wang, Li
AU - Petitjean, Caroline
AU - Ruan, Su
AU - Wang, Qian
AU - Shen, Dinggang
N1 - Funding Information:
Manuscript received July 31, 2017; revised December 5, 2017 and February 2, 2018; accepted February 25, 2018. Date of publication March 9, 2018; date of current version November 20, 2018. This work was supported in part by the National Institutes of Health under Grant CA206100 for D. Shen; in part by the National Key Research and Development Program of China under Grant 2017YFC0107600; in part by the National Natural Science Foundation of China under Grants 61473190 and 81471733; and in part by the Science and Technology Commission of Shanghai Municipality under Grants 16511101100 and 16410722400 for Q. Wang. (Dong Nie and Roger Trullo contributed equally to this work.) (Corresponding authors: Qian Wang and Dinggang Shen.) D. Nie is with the Department of Computer Science, Department of Radiology and BRIC, UNC-Chapel Hill, Chapel Hill, NC, 27510 USA (e-mail: dongnie@cs.unc.edu).
PY - 2018/12
Y1 - 2018/12
N2 - Medical imaging plays a critical role in various clinical applications. However, due to considerations such as cost and radiation dose, the acquisition of certain image modalities may be limited. Medical image synthesis can thus be of great benefit by estimating a desired imaging modality without incurring an actual scan. In this paper, we propose a generative adversarial approach to address this challenging problem. Specifically, we train a fully convolutional network (FCN) to generate a target image given a source image. To better model the nonlinear mapping from source to target and to produce more realistic target images, we train the FCN with an adversarial learning strategy. Moreover, the FCN incorporates an image-gradient-difference-based loss function to avoid generating blurry target images. A long-term residual unit is also explored to facilitate training of the network. We further apply the Auto-Context Model to implement a context-aware deep convolutional adversarial network. Experimental results show that our method is accurate and robust for synthesizing target images from the corresponding source images. In particular, we evaluate our method on three datasets, addressing the tasks of generating CT from MRI and generating 7T MRI from 3T MRI images. Our method outperforms the state-of-the-art methods under comparison on all datasets and tasks.
AB - Medical imaging plays a critical role in various clinical applications. However, due to considerations such as cost and radiation dose, the acquisition of certain image modalities may be limited. Medical image synthesis can thus be of great benefit by estimating a desired imaging modality without incurring an actual scan. In this paper, we propose a generative adversarial approach to address this challenging problem. Specifically, we train a fully convolutional network (FCN) to generate a target image given a source image. To better model the nonlinear mapping from source to target and to produce more realistic target images, we train the FCN with an adversarial learning strategy. Moreover, the FCN incorporates an image-gradient-difference-based loss function to avoid generating blurry target images. A long-term residual unit is also explored to facilitate training of the network. We further apply the Auto-Context Model to implement a context-aware deep convolutional adversarial network. Experimental results show that our method is accurate and robust for synthesizing target images from the corresponding source images. In particular, we evaluate our method on three datasets, addressing the tasks of generating CT from MRI and generating 7T MRI from 3T MRI images. Our method outperforms the state-of-the-art methods under comparison on all datasets and tasks.
KW - Adversarial learning
KW - auto-context model
KW - deep learning
KW - image synthesis
KW - residual learning
UR - http://www.scopus.com/inward/record.url?scp=85043453486&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85043453486&partnerID=8YFLogxK
U2 - 10.1109/TBME.2018.2814538
DO - 10.1109/TBME.2018.2814538
M3 - Article
C2 - 29993445
AN - SCOPUS:85043453486
VL - 65
SP - 2720
EP - 2730
JO - IEEE Transactions on Biomedical Engineering
JF - IEEE Transactions on Biomedical Engineering
SN - 0018-9294
IS - 12
M1 - 8310638
ER -