Robust multi-atlas label propagation by deep sparse representation

Chen Zu, Zhengxia Wang, Daoqiang Zhang, Peipeng Liang, Yonghong Shi, Dinggang Shen, Guorong Wu

Research output: Contribution to journal › Article

16 Citations (Scopus)

Abstract

Recently, multi-atlas patch-based label fusion has achieved many successes in the medical imaging area. The basic assumption in current state-of-the-art approaches is that the image patch at a target image point can be represented by a patch dictionary consisting of atlas patches from registered atlas images. Therefore, the label at the target image point can be determined by fusing the labels of atlas image patches with similar anatomical structures. However, this assumption on image patch representation does not always hold in label fusion since (1) the image content within the patch may be corrupted by noise and artifacts; and (2) the distribution of morphometric patterns among atlas patches may be unbalanced, such that the majority patterns dominate the label fusion result over minority patterns. Violating these basic assumptions can significantly undermine label fusion accuracy. To overcome these issues, we first form a label-specific group for the atlas patches sharing the same label. Then, we replace the conventional flat and shallow dictionary with a deep multi-layer structure, where the top layer (label-specific dictionaries) consists of groups of representative atlas patches and the subsequent layers (residual dictionaries) hierarchically encode the patch-wise residual information at different scales. Label fusion then follows the representation consensus across the representative dictionaries. The representation of the target patch in each group is iteratively optimized by using the representative atlas patches in each label-specific dictionary exclusively, to match the principal patterns, and by using all residual patterns across groups collaboratively, to compensate for groups that lack certain variation patterns present in the target image patch. Promising segmentation results have been achieved in labeling the hippocampus on the ADNI dataset, as well as the basal ganglia and brainstem structures, compared with counterpart label fusion methods.

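To make the baseline idea concrete, the snippet below is a minimal, self-contained sketch of conventional flat sparse-representation label fusion, the scheme the paper improves on with its deep label-specific and residual dictionaries. It is illustrative only and not the authors' implementation: the dictionary is random synthetic data, the sizes and the regularization strength (patch_dim, n_atlas, alpha) are placeholder assumptions, and a binary hippocampus-versus-background labeling is assumed.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Hypothetical sizes: 5x5x5 patches (125 voxels) and 200 registered atlas patches.
patch_dim, n_atlas = 125, 200
atlas_patches = rng.normal(size=(patch_dim, n_atlas))  # dictionary: one atlas patch per column
atlas_labels = rng.integers(0, 2, size=n_atlas)        # candidate label of each atlas patch (0 or 1)
target_patch = rng.normal(size=patch_dim)              # patch around the target image point

# Sparse coding: represent the target patch as a sparse, non-negative
# combination of atlas patches (L1-regularized least squares).
coder = Lasso(alpha=0.05, positive=True, fit_intercept=False, max_iter=5000)
coder.fit(atlas_patches, target_patch)
weights = coder.coef_                                  # one weight per atlas patch

# Label fusion: accumulate the sparse weights per label and pick the strongest.
votes = np.bincount(atlas_labels, weights=weights, minlength=2)
fused_label = int(np.argmax(votes))
print("fused label:", fused_label, "label weights:", votes)

In the paper's deep scheme, this single flat dictionary would instead be split into label-specific groups of representative patches plus hierarchical residual dictionaries, with the coding optimized iteratively across groups; the sketch only shows the consensus-by-sparse-weights idea through which the label-specific dictionaries vote.
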
Original language: English
Pages (from-to): 511-517
Number of pages: 7
Journal: Pattern Recognition
Volume: 63
DOI: 10.1016/j.patcog.2016.09.028
Publication status: Published - 2017 Mar 1

Keywords

  • Hierarchical sparse representation
  • Multi-atlas segmentation
  • Patch-based label fusion

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence

Cite this

Robust multi-atlas label propagation by deep sparse representation. / Zu, Chen; Wang, Zhengxia; Zhang, Daoqiang; Liang, Peipeng; Shi, Yonghong; Shen, Dinggang; Wu, Guorong.

In: Pattern Recognition, Vol. 63, 01.03.2017, p. 511-517.

@article{68587c8ad9494701a423234f9a068cf8,
title = "Robust multi-atlas label propagation by deep sparse representation",
abstract = "Recently, multi-atlas patch-based label fusion has achieved many successes in medical imaging area. The basic assumption in the current state-of-the-art approaches is that the image patch at the target image point can be represented by a patch dictionary consisting of atlas patches from registered atlas images. Therefore, the label at the target image point can be determined by fusing labels of atlas image patches with similar anatomical structures. However, such assumption on image patch representation does not always hold in label fusion since (1) the image content within the patch may be corrupted due to noise and artifact; and (2) the distribution of morphometric patterns among atlas patches might be unbalanced such that the majority patterns can dominate label fusion result over other minority patterns. The violation of the above basic assumptions could significantly undermine the label fusion accuracy. To overcome these issues, we first consider forming label-specific group for the atlas patches with the same label. Then, we alter the conventional flat and shallow dictionary to a deep multi-layer structure, where the top layer (label-specific dictionaries) consists of groups of representative atlas patches and the subsequent layers (residual dictionaries) hierarchically encode the patchwise residual information in different scales. Thus, the label fusion follows the representation consensus across representative dictionaries. However, the representation of target patch in each group is iteratively optimized by using the representative atlas patches in each label-specific dictionary exclusively to match the principal patterns and also using all residual patterns across groups collaboratively to overcome the issue that some groups might be absent of certain variation patterns presented in the target image patch. Promising segmentation results have been achieved in labeling hippocampus on ADNI dataset, as well as basal ganglia and brainstem structures, compared to other counterpart label fusion methods.",
keywords = "Hierarchical sparse representation, Multi-atlas segmentation, Patch-based label fusion",
author = "Chen Zu and Zhengxia Wang and Daoqiang Zhang and Peipeng Liang and Yonghong Shi and Dinggang Shen and Guorong Wu",
year = "2017",
month = "3",
day = "1",
doi = "10.1016/j.patcog.2016.09.028",
language = "English",
volume = "63",
pages = "511--517",
journal = "Pattern Recognition",
issn = "0031-3203",
publisher = "Elsevier Limited",

}

TY - JOUR

T1 - Robust multi-atlas label propagation by deep sparse representation

AU - Zu, Chen

AU - Wang, Zhengxia

AU - Zhang, Daoqiang

AU - Liang, Peipeng

AU - Shi, Yonghong

AU - Shen, Dinggang

AU - Wu, Guorong

PY - 2017/3/1

Y1 - 2017/3/1

KW - Hierarchical sparse representation

KW - Multi-atlas segmentation

KW - Patch-based label fusion

UR - http://www.scopus.com/inward/record.url?scp=84999040479&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84999040479&partnerID=8YFLogxK

U2 - 10.1016/j.patcog.2016.09.028

DO - 10.1016/j.patcog.2016.09.028

M3 - Article

AN - SCOPUS:84999040479

VL - 63

SP - 511

EP - 517

JO - Pattern Recognition

JF - Pattern Recognition

SN - 0031-3203

ER -