Adversarial learning for mono- or multi-modal registration

Jingfan Fan, Xiaohuan Cao, Qian Wang, Pew-Thian Yap, Dinggang Shen

Research output: Contribution to journal › Article

3 Citations (Scopus)

Abstract

This paper introduces an unsupervised adversarial similarity network for image registration. Unlike existing deep learning registration methods, our approach can train a deformable registration network without the need for ground-truth deformations or specific similarity metrics. We connect a registration network and a discrimination network with a deformable transformation layer. The registration network is trained with feedback from the discrimination network, which is designed to judge whether a pair of registered images is sufficiently similar. Using adversarial training, the registration network learns to predict deformations that are accurate enough to fool the discrimination network. The proposed method is thus a general registration framework that can be applied to both mono-modal and multi-modal image registration. Experiments on four brain MRI datasets and a multi-modal pelvic image dataset indicate that our method yields promising registration performance in terms of accuracy, efficiency, and generalizability compared with state-of-the-art registration methods, including those based on deep learning.
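The deformable transformation layer mentioned in the abstract warps the moving image by a predicted displacement field so the warped result can be passed to the discrimination network. The paper's layer operates on 3D volumes; as a minimal illustration only, the 1D sketch below (hypothetical function `warp_1d`, not from the paper) shows the core idea of displacement-based resampling with linear interpolation:

```python
import numpy as np

def warp_1d(image, displacement):
    """Warp a 1D 'image' by a dense displacement field using linear
    interpolation -- a toy 1D stand-in for the paper's deformable
    transformation layer, which operates on 3D volumes."""
    n = image.shape[0]
    # Each output position i samples the input at i + u(i).
    pos = np.clip(np.arange(n) + displacement, 0, n - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    frac = pos - lo
    return (1 - frac) * image[lo] + frac * image[hi]

img = np.array([0.0, 1.0, 2.0, 3.0])
# Zero displacement is the identity transform.
assert np.allclose(warp_1d(img, np.zeros(4)), img)
# A constant +1 displacement reads each voxel's right neighbour,
# clamping at the border: [1., 2., 3., 3.]
assert np.allclose(warp_1d(img, np.ones(4)), [1.0, 2.0, 3.0, 3.0])
```

Because the interpolation is piecewise linear in the displacement, the layer is differentiable, which is what lets the discriminator's feedback backpropagate through the warp into the registration network.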

Original language: English
Article number: 101545
Journal: Medical Image Analysis
Volume: 58
Publication status: Published - Dec 2019

Keywords

  • Deformable image registration
  • Fully convolutional neural network
  • Generative adversarial network

ASJC Scopus subject areas

  • Radiological and Ultrasound Technology
  • Radiology, Nuclear Medicine and Imaging
  • Computer Vision and Pattern Recognition
  • Health Informatics
  • Computer Graphics and Computer-Aided Design

