A two-stream symmetric network with bidirectional ensemble for aerial image matching

Jae Hyun Park, Woo Jeoung Nam, Seong Whan Lee

Research output: Contribution to journal › Article

Abstract

In this paper, we propose a novel method to precisely match two aerial images obtained in different environments via a two-stream deep network. By internally augmenting the target image, the network processes three input images through its two streams and reflects the additional augmented pair during training. As a result, training of the deep network is regularized, and the network becomes robust to the variance of aerial images. Furthermore, we introduce an ensemble method based on a bidirectional network, motivated by the isomorphic nature of the geometric transformation. We obtain two global transformation parameters without any additional network or parameters, which alleviates asymmetric matching results and significantly improves performance by fusing the two outcomes. For the experiments, we adopt aerial images from Google Earth and the International Society for Photogrammetry and Remote Sensing (ISPRS). To quantitatively assess our results, we apply the probability of correct keypoints (PCK) metric, which measures the degree of matching. The qualitative and quantitative results show a sizable performance gap compared to conventional methods for matching aerial images. All code, our trained model, and the dataset are available online.
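The PCK metric mentioned in the abstract can be sketched as follows. This is a minimal illustration of the commonly used definition (a keypoint counts as correct when the predicted location lies within a tolerance of alpha times the larger image dimension of its ground-truth location); the function name, signature, and tolerance convention are assumptions, not reproduced from the paper.

```python
import numpy as np

def pck(pred_pts, gt_pts, img_size, alpha=0.05):
    """Probability of Correct Keypoints (PCK), illustrative implementation.

    pred_pts, gt_pts: (N, 2) arrays of (x, y) keypoint coordinates.
    img_size: (height, width) of the image.
    alpha: fraction of the larger image dimension used as the tolerance.
    """
    h, w = img_size
    tol = alpha * max(h, w)  # distance tolerance in pixels
    dists = np.linalg.norm(pred_pts - gt_pts, axis=1)
    return float(np.mean(dists <= tol))  # fraction of correctly matched keypoints

# Example: with a 240x320 image and alpha=0.05, the tolerance is 16 px,
# so 3 of the 4 keypoints below count as correct.
pred = np.array([[10.0, 10.0], [50.0, 52.0], [100.0, 100.0], [200.0, 240.0]])
gt   = np.array([[11.0, 10.0], [50.0, 50.0], [102.0, 101.0], [200.0, 200.0]])
score = pck(pred, gt, img_size=(240, 320), alpha=0.05)  # 0.75
```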

Original language: English
Article number: 465
Journal: Remote Sensing
Volume: 12
Issue number: 3
DOIs
Publication status: Published - 2020 Feb 1

Keywords

  • Aerial image
  • End-to-end trainable network
  • Ensemble
  • Geometric transformation
  • Image matching
  • Image registration

ASJC Scopus subject areas

  • Earth and Planetary Sciences(all)

