Hierarchical multi-modal image registration by learning common feature representations

Hongkun Ge, Guorong Wu, Li Wang, Yaozong Gao, Dinggang Shen

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

Abstract

Mutual information (MI) has been widely used for registering images with different modalities. Since most inter-modality registration methods estimate deformations at a local scale while optimizing MI over the entire image, the estimated deformations for certain structures can be dominated by surrounding, unrelated structures. Moreover, since multiple structures often coexist in each image, the intensity correlation between two images can be complex and highly nonlinear, making global MI unable to precisely guide local image deformation. To address these issues, we propose a hierarchical inter-modality registration method based on robust feature matching. Specifically, we first select a small set of key points at salient image locations to drive the entire image registration. Since image features computed from different modalities are difficult to compare directly, we propose to learn their common feature representations by projecting them from their native feature spaces into a common space, where the correlations between corresponding features are maximized. Due to the large heterogeneity between the two high-dimensional feature distributions, we employ Kernel CCA (Canonical Correlation Analysis) to reveal such nonlinear feature mappings. Our registration method can then exploit the learned common features to reliably establish correspondences for key points from different-modality images via robust feature matching. As more and more key points take part in the registration, our hierarchical feature-based method efficiently estimates the deformation pathway between two inter-modality images in a global-to-local manner. We have applied the proposed method to prostate CT and MR images, as well as infant MR brain images acquired in the first year of life. Experimental results show that our method achieves more accurate registration than other state-of-the-art image registration methods.
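The common-space idea sketched in the abstract can be illustrated with a minimal regularized kernel CCA example. This is a hedged, NumPy-only sketch: the RBF kernel, the regularization value, and the toy "CT-like"/"MR-like" features are illustrative assumptions, not the paper's actual features or implementation.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """RBF (Gaussian) kernel matrix between row-sample matrices A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

def center_kernel(K):
    """Center a kernel matrix in feature space."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kcca(X, Y, gamma=0.5, reg=1e-3, n_components=2):
    """Regularized kernel CCA: project paired samples from two modalities
    into a common space where corresponding samples are maximally correlated."""
    n = X.shape[0]
    Kx = center_kernel(rbf_kernel(X, X, gamma))
    Ky = center_kernel(rbf_kernel(Y, Y, gamma))
    I = np.eye(n)
    # Leading eigenvectors of (Kx + reg*I)^-1 Ky (Ky + reg*I)^-1 Kx give the
    # dual coefficients alpha; the eigenvalues are squared canonical correlations.
    M = np.linalg.solve(Kx + reg * I, Ky) @ np.linalg.solve(Ky + reg * I, Kx)
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(-vals.real)[:n_components]
    alpha = vecs[:, order].real
    beta = np.linalg.solve(Ky + reg * I, Kx @ alpha)  # paired coefficients for Y
    return Kx @ alpha, Ky @ beta  # common-space coordinates of both modalities

# Toy stand-ins for two modalities driven by one shared latent variable t
rng = np.random.default_rng(0)
t = rng.uniform(-1.0, 1.0, size=(80, 1))
X = np.hstack([t, t**2]) + 0.05 * rng.normal(size=(80, 2))             # "CT-like"
Y = np.hstack([np.sin(2.0 * t), t]) + 0.05 * rng.normal(size=(80, 2))  # "MR-like"
Zx, Zy = kcca(X, Y)
```

In the learned space the leading coordinates of `Zx` and `Zy` are strongly correlated, so a simple nearest-neighbor search over common-space coordinates can establish key-point correspondences even though the raw features of the two modalities are not directly comparable.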

Original language: English
Title of host publication: Machine Learning in Medical Imaging - 6th International Workshop, MLMI 2015 Held in Conjunction with MICCAI 2015, Proceedings
Editors: Luping Zhou, Yinghuan Shi, Li Wang, Qian Wang
Publisher: Springer Verlag
Pages: 203-211
Number of pages: 9
ISBN (Print): 9783319248875
DOI: https://doi.org/10.1007/978-3-319-24888-2_25
Publication status: Published - 2015
Event: 6th International Workshop on Machine Learning in Medical Imaging, MLMI 2015, Held in Conjunction with 18th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2015 - Munich, Germany
Duration: 2015 Oct 5 - 2015 Oct 5

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 9352
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)


Cite this

    Ge, H., Wu, G., Wang, L., Gao, Y., & Shen, D. (2015). Hierarchical multi-modal image registration by learning common feature representations. In L. Zhou, Y. Shi, L. Wang, & Q. Wang (Eds.), Machine Learning in Medical Imaging - 6th International Workshop, MLMI 2015 Held in Conjunction with MICCAI 2015, Proceedings (pp. 203-211). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 9352). Springer Verlag. https://doi.org/10.1007/978-3-319-24888-2_25