TY - GEN
T1 - Joint craniomaxillofacial bone segmentation and landmark digitization by context-guided fully convolutional networks
AU - Zhang, Jun
AU - Liu, Mingxia
AU - Wang, Li
AU - Chen, Si
AU - Yuan, Peng
AU - Li, Jianfu
AU - Shen, Steve Guo Fang
AU - Tang, Zhen
AU - Chen, Ken Chung
AU - Xia, James J.
AU - Shen, Dinggang
PY - 2017
N2 - Generating accurate 3D models from cone-beam computed tomography (CBCT) images is an important step in developing treatment plans for patients with craniomaxillofacial (CMF) deformities. This process often involves bone segmentation and landmark digitization. Since anatomical landmarks generally lie on the boundaries of segmented bone regions, the tasks of bone segmentation and landmark digitization could be highly correlated. However, most existing methods simply treat them as two standalone tasks, without considering their inherent association. In addition, these methods usually ignore the spatial context information (i.e., displacements from voxels to landmarks) in CBCT images. To this end, we propose a context-guided fully convolutional network (FCN) for joint bone segmentation and landmark digitization. Specifically, we first train an FCN to learn the displacement maps to capture the spatial context information in CBCT images. Using the learned displacement maps as guidance information, we further develop a multi-task FCN to jointly perform bone segmentation and landmark digitization. Our method has been evaluated on 107 subjects from two centers, and the experimental results show that our method is superior to the state-of-the-art methods in both bone segmentation and landmark digitization.
UR - http://www.scopus.com/inward/record.url?scp=85029508832&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85029508832&partnerID=8YFLogxK
DO - 10.1007/978-3-319-66185-8_81
M3 - Conference contribution
AN - SCOPUS:85029508832
SN - 9783319661841
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 720
EP - 728
BT - Medical Image Computing and Computer Assisted Intervention − MICCAI 2017 - 20th International Conference, Proceedings
A2 - Jannin, Pierre
A2 - Duchesne, Simon
A2 - Descoteaux, Maxime
A2 - Franz, Alfred
A2 - Collins, D. Louis
A2 - Maier-Hein, Lena
PB - Springer Verlag
T2 - 20th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2017
Y2 - 11 September 2017 through 13 September 2017
ER -