TY - GEN
T1 - Non-local atlas-guided multi-channel forest learning for human brain labeling
AU - Ma, Guangkai
AU - Gao, Yaozong
AU - Wu, Guorong
AU - Wu, Ligang
AU - Shen, Dinggang
N1 - Publisher Copyright:
© Springer International Publishing Switzerland 2015.
PY - 2015
Y1 - 2015
N2 - Labeling MR brain images into anatomically meaningful regions is important in many quantitative brain studies. Appearance information is widely used in many existing label fusion methods. Meanwhile, recent progress in computer vision suggests that context features are very useful for identifying an object in a complex scene. In light of this, we propose a novel learning-based label fusion method that uses both low-level appearance features (computed from the target image) and high-level context features (computed from the warped atlases or tentative labeling maps of the target image). In particular, we employ a multi-channel random forest to learn the nonlinear relationship between these hybrid features and the target labels (i.e., the corresponding anatomical structures). Moreover, to accommodate high inter-subject variation, we further extend our learning-based label fusion to a multi-atlas scenario: we train a random forest for each atlas and then obtain the final labeling result from the consensus of all atlases. We have comprehensively evaluated our method on both the LONI LPBA40 and IXI datasets, achieving higher labeling accuracy than state-of-the-art methods in the literature.
AB - Labeling MR brain images into anatomically meaningful regions is important in many quantitative brain studies. Appearance information is widely used in many existing label fusion methods. Meanwhile, recent progress in computer vision suggests that context features are very useful for identifying an object in a complex scene. In light of this, we propose a novel learning-based label fusion method that uses both low-level appearance features (computed from the target image) and high-level context features (computed from the warped atlases or tentative labeling maps of the target image). In particular, we employ a multi-channel random forest to learn the nonlinear relationship between these hybrid features and the target labels (i.e., the corresponding anatomical structures). Moreover, to accommodate high inter-subject variation, we further extend our learning-based label fusion to a multi-atlas scenario: we train a random forest for each atlas and then obtain the final labeling result from the consensus of all atlases. We have comprehensively evaluated our method on both the LONI LPBA40 and IXI datasets, achieving higher labeling accuracy than state-of-the-art methods in the literature.
UR - http://www.scopus.com/inward/record.url?scp=84951837197&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-24574-4_86
DO - 10.1007/978-3-319-24574-4_86
M3 - Conference contribution
AN - SCOPUS:84951837197
SN - 9783319245737
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 719
EP - 726
BT - Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015 - 18th International Conference, Proceedings
A2 - Frangi, Alejandro F.
A2 - Navab, Nassir
A2 - Hornegger, Joachim
A2 - Wells, William M.
PB - Springer Verlag
T2 - 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2015
Y2 - 5 October 2015 through 9 October 2015
ER -