TY - JOUR
T1 - Deep embedding convolutional neural network for synthesizing CT image from T1-Weighted MR image
AU - Xiang, Lei
AU - Wang, Qian
AU - Nie, Dong
AU - Zhang, Lichi
AU - Jin, Xiyao
AU - Qiao, Yu
AU - Shen, Dinggang
N1 - Funding Information:
This work was supported by the National Key Research and Development Program of China (2017YFC0107600), the National Natural Science Foundation of China (61473190, 81471733, 61401271), and the Science and Technology Commission of Shanghai Municipality (16511101100, 16410722400). This work was also supported in part by NIH grants (EB006733, CA206100, AG053867).
Publisher Copyright:
© 2018
PY - 2018/7
Y1 - 2018/7
N2 - Recently, increasing attention has been drawn to the field of medical image synthesis across modalities. Among these tasks, synthesizing a computed tomography (CT) image from a T1-weighted magnetic resonance (MR) image is of great importance, although the mapping between the two is highly complex due to the large appearance gap between the modalities. In this work, we tackle this MR-to-CT synthesis task with a novel deep embedding convolutional neural network (DECNN). Specifically, we generate feature maps from MR images and transform these feature maps forward through the convolutional layers of the network. We further compute a tentative CT synthesis midway through the flow of feature maps and then embed this tentative synthesis result back into the feature maps. This embedding operation yields better feature maps, which are transformed forward further in the DECNN. After repeating this embedding procedure several times in the network, we eventually synthesize the final CT image at the end of the DECNN. We have validated our proposed method on both brain and prostate imaging datasets, comparing it with state-of-the-art methods. Experimental results suggest that our DECNN (with repeated embedding operations) achieves superior performance, in terms of both the perceptual quality of the synthesized CT image and the run-time cost of synthesizing it.
AB - Recently, increasing attention has been drawn to the field of medical image synthesis across modalities. Among these tasks, synthesizing a computed tomography (CT) image from a T1-weighted magnetic resonance (MR) image is of great importance, although the mapping between the two is highly complex due to the large appearance gap between the modalities. In this work, we tackle this MR-to-CT synthesis task with a novel deep embedding convolutional neural network (DECNN). Specifically, we generate feature maps from MR images and transform these feature maps forward through the convolutional layers of the network. We further compute a tentative CT synthesis midway through the flow of feature maps and then embed this tentative synthesis result back into the feature maps. This embedding operation yields better feature maps, which are transformed forward further in the DECNN. After repeating this embedding procedure several times in the network, we eventually synthesize the final CT image at the end of the DECNN. We have validated our proposed method on both brain and prostate imaging datasets, comparing it with state-of-the-art methods. Experimental results suggest that our DECNN (with repeated embedding operations) achieves superior performance, in terms of both the perceptual quality of the synthesized CT image and the run-time cost of synthesizing it.
KW - Deep convolutional neural network
KW - Embedding block
KW - Image synthesis
UR - http://www.scopus.com/inward/record.url?scp=85045465902&partnerID=8YFLogxK
U2 - 10.1016/j.media.2018.03.011
DO - 10.1016/j.media.2018.03.011
M3 - Article
C2 - 29674235
AN - SCOPUS:85045465902
VL - 47
SP - 31
EP - 44
JO - Medical Image Analysis
JF - Medical Image Analysis
SN - 1361-8415
ER -