TY - GEN
T1 - Multi-atlas based segmentation of brainstem nuclei from MR images by deep hyper-graph learning
AU - Dong, Pei
AU - Guo, Yangrong
AU - Gao, Yue
AU - Liang, Peipeng
AU - Shi, Yonghong
AU - Wang, Qian
AU - Shen, Dinggang
AU - Wu, Guorong
PY - 2016
Y1 - 2016
AB - Accurate segmentation of the brainstem nuclei (red nucleus and substantia nigra) is very important in various neuroimaging applications such as deep brain stimulation and the investigation of imaging biomarkers for Parkinson’s disease (PD). Due to iron deposition during aging, image contrast in the brainstem is very low in Magnetic Resonance (MR) images. Hence, the ambiguity of patch-wise similarity prevents the recently successful multi-atlas patch-based label fusion methods from performing as competitively as they do when segmenting cortical and sub-cortical regions from MR images. To address this challenge, we propose a novel multi-atlas brainstem nuclei segmentation method using deep hyper-graph learning. Specifically, we achieve this goal in three ways. First, we employ a hyper-graph to combine the advantage of maintaining spatial coherence, inherited from graph-based segmentation approaches, with the benefit of harnessing population priors from the multi-atlas framework. Second, besides low-level image appearance, we also extract high-level context features to measure complex patch-wise relationships. Since the context features are calculated on a tentatively estimated label probability map, we eventually turn our hyper-graph learning based label propagation into a deep and self-refining model. Third, since anatomical labels on some voxels (usually located in uniform regions) can be identified much more reliably than on other voxels (usually located at the boundary between two regions), we allow these reliable voxels to propagate their labels to nearby difficult-to-label voxels. Such a hierarchical strategy makes our proposed label fusion method deep and dynamic. We evaluate our proposed label fusion method by segmenting the substantia nigra (SN) and red nucleus (RN) from 3.0 T MR images, where it achieves significant improvement over state-of-the-art label fusion methods.
UR - http://www.scopus.com/inward/record.url?scp=84992504884&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84992504884&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-47118-1_7
DO - 10.1007/978-3-319-47118-1_7
M3 - Conference contribution
AN - SCOPUS:84992504884
SN - 9783319471174
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 51
EP - 59
BT - Patch-Based Techniques in Medical Imaging - 2nd International Workshop, Patch-MI 2016 held in conjunction with MICCAI 2016, Proceedings
A2 - Coupe, Pierrick
A2 - Munsell, Brent C.
A2 - Rueckert, Daniel
A2 - Zhan, Yiqiang
A2 - Wu, Guorong
PB - Springer Verlag
T2 - 2nd International Workshop on Patch-Based Techniques in Medical Imaging, Patch-MI 2016 held in conjunction with 19th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2016
Y2 - 17 October 2016 through 17 October 2016
ER -