TY - JOUR
T1 - An effective MR-Guided CT network training for segmenting prostate in CT images
AU - Yang, Wanqi
AU - Shi, Yinghuan
AU - Park, Sang Hyun
AU - Yang, Ming
AU - Gao, Yang
AU - Shen, Dinggang
N1 - Funding Information:
Manuscript received June 25, 2019; revised December 3, 2019; accepted December 12, 2019. Date of publication December 16, 2019; date of current version August 5, 2020. This work was supported in part by the National Natural Science Foundation of China under Grants 61603193, 61673203, 61876087, and 61432008, in part by the Jiangsu Natural Science Foundation under Grant BK20171479, in part by the Fundamental Research Funds for the Central Universities under Grant 020214380056, and in part by the Institute for Information and Communications Technology Promotion (IITP) grant funded by the Korea government (2019-0-01557). (Corresponding authors: Wanqi Yang; Dinggang Shen.) W. Yang and M. Yang are with the School of Computer Science and Technology, Nanjing Normal University, Nanjing 210046, China (e-mail: yangwq@njnu.edu.cn; myang@njnu.edu.cn).
Publisher Copyright:
© 2013 IEEE.
PY - 2020/8
Y1 - 2020/8
N2 - Segmentation of the prostate in medical imaging data (e.g., CT, MRI, TRUS) is a critical yet challenging task for radiotherapy treatment. It is relatively easier to segment the prostate from MR images than from CT images, due to the better soft-tissue contrast of MR images. For segmenting the prostate from CT images, most previous methods used CT alone, and their performance is thus often limited by the low tissue contrast of CT images. In this article, we explore the possibility of using indirect guidance from MR images to improve prostate segmentation in CT images. In particular, we propose a novel deep transfer learning approach, i.e., MR-guided CT network training (namely MICS-NET), which can employ MR images to help learn better features from CT images for prostate segmentation. In MICS-NET, the guidance from MRI consists of two steps: (1) learning informative and transferable features from MRI and then transferring them to CT images in a cascade manner, and (2) adaptively transferring the prostate likelihood of the MRI model (i.e., a convnet well trained purely on MR images) under a view consistency constraint. To illustrate the effectiveness of our approach, we evaluate MICS-NET on a real CT prostate image set, with manual delineations available as the ground truth for evaluation. Our method generates promising segmentation results, achieving (1) a Dice ratio six percentage points higher than that of the CT model trained on CT images alone and (2) performance comparable to that of the MRI model trained on MR images alone.
AB - Segmentation of the prostate in medical imaging data (e.g., CT, MRI, TRUS) is a critical yet challenging task for radiotherapy treatment. It is relatively easier to segment the prostate from MR images than from CT images, due to the better soft-tissue contrast of MR images. For segmenting the prostate from CT images, most previous methods used CT alone, and their performance is thus often limited by the low tissue contrast of CT images. In this article, we explore the possibility of using indirect guidance from MR images to improve prostate segmentation in CT images. In particular, we propose a novel deep transfer learning approach, i.e., MR-guided CT network training (namely MICS-NET), which can employ MR images to help learn better features from CT images for prostate segmentation. In MICS-NET, the guidance from MRI consists of two steps: (1) learning informative and transferable features from MRI and then transferring them to CT images in a cascade manner, and (2) adaptively transferring the prostate likelihood of the MRI model (i.e., a convnet well trained purely on MR images) under a view consistency constraint. To illustrate the effectiveness of our approach, we evaluate MICS-NET on a real CT prostate image set, with manual delineations available as the ground truth for evaluation. Our method generates promising segmentation results, achieving (1) a Dice ratio six percentage points higher than that of the CT model trained on CT images alone and (2) performance comparable to that of the MRI model trained on MR images alone.
KW - Prostate segmentation
KW - cascade learning
KW - deep transfer learning
KW - fully convolutional network
KW - view consistency constraint
UR - http://www.scopus.com/inward/record.url?scp=85089202587&partnerID=8YFLogxK
U2 - 10.1109/JBHI.2019.2960153
DO - 10.1109/JBHI.2019.2960153
M3 - Article
C2 - 31841426
AN - SCOPUS:85089202587
SN - 2168-2194
VL - 24
SP - 2278
EP - 2291
JO - IEEE Journal of Biomedical and Health Informatics
JF - IEEE Journal of Biomedical and Health Informatics
IS - 8
M1 - 8933421
ER -