Joint patch clustering-based dictionary learning for multimodal image fusion

Minjae Kim, David K. Han, Hanseok Ko

Research output: Contribution to journal › Article

63 Citations (Scopus)

Abstract

Constructing a good dictionary is the key to a successful image fusion technique in sparsity-based models. An efficient dictionary learning method based on joint patch clustering is proposed for multimodal image fusion. To construct an over-complete dictionary that ensures a sufficient number of useful atoms for representing a fused image, which conveys image information from different sensor modalities, all patches from the different source images are clustered together according to their structural similarities. To keep the dictionary compact but informative, only a few principal components that effectively describe each joint patch cluster are selected and combined to form the over-complete dictionary. Finally, sparse coefficients are estimated by a simultaneous orthogonal matching pursuit algorithm to represent the multimodal images with the common dictionary learned by the proposed method. Experimental results with various pairs of source images validate the effectiveness of the proposed method for the image fusion task.
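The abstract outlines a three-step pipeline: cluster patches from all source images jointly, keep a few principal components of each cluster as dictionary atoms, and code the source images against the shared dictionary with simultaneous orthogonal matching pursuit (SOMP). The sketch below (not the authors' code) illustrates only the dictionary-construction step under simplifying assumptions: plain k-means on mean-removed patches stands in for the paper's structural-similarity clustering, and the patch size, cluster count, and atoms-per-cluster values are illustrative choices rather than the settings reported in the paper.

```python
# Minimal sketch of joint patch clustering + per-cluster PCA atoms,
# assuming k-means as a stand-in for structural-similarity clustering.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.image import extract_patches_2d


def joint_cluster_dictionary(images, patch_size=8, n_clusters=32,
                             atoms_per_cluster=8, patches_per_image=2000):
    """Build an over-complete dictionary from jointly clustered patches."""
    # Pool patches from every source image into one set (joint clustering).
    patches = np.vstack([
        extract_patches_2d(img, (patch_size, patch_size),
                           max_patches=patches_per_image,
                           random_state=0).reshape(-1, patch_size ** 2)
        for img in images
    ]).astype(np.float64)

    # Remove each patch's mean so clustering reflects structure, not brightness.
    patches -= patches.mean(axis=1, keepdims=True)

    labels = KMeans(n_clusters=n_clusters, n_init=5,
                    random_state=0).fit_predict(patches)

    atoms = []
    for k in range(n_clusters):
        cluster = patches[labels == k]
        if len(cluster) < atoms_per_cluster:
            continue
        # Keep only the leading principal components of the cluster:
        # a compact set of atoms that still describes it well.
        _, _, vt = np.linalg.svd(cluster, full_matrices=False)
        atoms.append(vt[:atoms_per_cluster])

    D = np.vstack(atoms).T                          # columns are atoms
    D /= np.linalg.norm(D, axis=0, keepdims=True)   # unit-norm atoms
    return D


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    visible = rng.random((64, 64))    # stand-ins for co-registered source images
    infrared = rng.random((64, 64))
    D = joint_cluster_dictionary([visible, infrared])
    print(D.shape)  # (patch_size**2, n_atoms); over-complete when n_atoms > patch_size**2
```

A simultaneous pursuit such as SOMP would then code corresponding patches from both modalities against D with a shared support, and fused patches could be reconstructed from the combined coefficients (for instance, keeping the coefficient with the larger absolute value per atom); that fusion stage is omitted from this sketch.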

Original language: English
Pages (from-to): 198-214
Number of pages: 17
Journal: Information Fusion
Volume: 27
DOI: 10.1016/j.inffus.2015.03.003
Publication status: Published - 2016 Jan 1

Keywords

  • Clustering
  • Dictionary learning
  • K-SVD
  • Multimodal image fusion
  • Sparse representation

ASJC Scopus subject areas

  • Signal Processing
  • Software
  • Hardware and Architecture
  • Information Systems

Cite this

Joint patch clustering-based dictionary learning for multimodal image fusion. / Kim, Minjae; Han, David K.; Ko, Hanseok.

In: Information Fusion, Vol. 27, 01.01.2016, p. 198-214.

Research output: Contribution to journal › Article
