Joint learning of appearance and transformation for predicting brain MR image registration.

Qian Wang, Minjeong Kim, Guorong Wu, Dinggang Shen

Research output: Contribution to journal › Article

3 Citations (Scopus)

Abstract

We propose a new approach to register a subject image with the template by leveraging a set of training images that are pre-aligned to the template. We argue that, if voxels in the subject and the training images share similar local appearances and transformations, they may share a common correspondence in the template. Accordingly, we learn a sparse representation of each subject voxel to reveal several similar candidate voxels in the training images. Each selected training candidate can bridge the correspondence from the subject voxel to the template space, thus predicting the transformation associated with the subject voxel at a confidence level related to the learned sparse coefficient. Following this strategy, we first predict transformations at selected key points, retaining multiple predictions at each key point (instead of allowing only a single correspondence). Then, using all key points and their predictions with varying confidences, we adaptively reconstruct the dense transformation field that warps the subject to the template. For robustness and computational speed, we embed this prediction-reconstruction protocol into a multi-resolution hierarchy. Finally, we efficiently refine the estimated transformation field with an existing registration method. We apply our method to registering brain MR images and conclude that it improves registration performance in both time cost and accuracy.
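The core prediction step described above — sparse-coding a subject voxel's local appearance over a dictionary of pre-aligned training patches, then using the sparse coefficients as confidences to combine the candidates' known transformations — can be sketched as follows. This is an illustrative toy on synthetic data, not the authors' implementation; the patch size, the Lasso solver, and the regularization weight `alpha` are all assumptions made here for demonstration.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Synthetic stand-ins: 50 training patches (local intensity vectors),
# pre-aligned to the template, each with a known 3-D displacement.
n_train, patch_dim = 50, 27              # e.g. 3x3x3 intensity patches
train_patches = rng.normal(size=(n_train, patch_dim))
train_disps = rng.normal(size=(n_train, 3))

# A subject patch built as a mix of a few training patches, so the
# sparse code should mostly select those candidates.
subject_patch = 0.7 * train_patches[3] + 0.3 * train_patches[17]

# Sparse representation of the subject patch over the training dictionary
# (columns of the design matrix are training patches).
lasso = Lasso(alpha=0.05, positive=True, max_iter=10000)
lasso.fit(train_patches.T, subject_patch)
coef = lasso.coef_                       # one coefficient per candidate

# Each nonzero coefficient marks a candidate correspondence; normalize
# the coefficients into confidences and predict the displacement as a
# confidence-weighted combination of the candidates' displacements.
conf = coef / coef.sum()
predicted_disp = conf @ train_disps      # 3-D displacement for this voxel
```

In the paper this prediction is performed only at selected key points, each retaining multiple weighted candidates, and the dense transformation field is then reconstructed from those weighted predictions; the snippet shows just the per-voxel prediction.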

Original language: English
Pages (from-to): 499-510
Number of pages: 12
Journal: Information processing in medical imaging : proceedings of the ... conference
Volume: 23
Publication status: Published - 2013 Jan 1


ASJC Scopus subject areas

  • Medicine (all)

Cite this

Joint learning of appearance and transformation for predicting brain MR image registration. / Wang, Qian; Kim, Minjeong; Wu, Guorong; Shen, Dinggang.

In: Information processing in medical imaging : proceedings of the ... conference, Vol. 23, 01.01.2013, p. 499-510.


@article{b8a3a631539f402589212780f642dc79,
title = "Joint learning of appearance and transformation for predicting brain MR image registration.",
author = "Qian Wang and Minjeong Kim and Guorong Wu and Dinggang Shen",
year = "2013",
month = "1",
day = "1",
language = "English",
volume = "23",
pages = "499--510",
journal = "Information processing in medical imaging : proceedings of the ... conference",
issn = "1011-2499",
publisher = "Springer Verlag",

}

TY - JOUR

T1 - Joint learning of appearance and transformation for predicting brain MR image registration.

AU - Wang, Qian

AU - Kim, Minjeong

AU - Wu, Guorong

AU - Shen, Dinggang

PY - 2013/1/1

Y1 - 2013/1/1



UR - http://www.scopus.com/inward/record.url?scp=84901270559&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84901270559&partnerID=8YFLogxK

M3 - Article

C2 - 24683994

AN - SCOPUS:84901270559

VL - 23

SP - 499

EP - 510

JO - Information processing in medical imaging : proceedings of the ... conference

JF - Information processing in medical imaging : proceedings of the ... conference

SN - 1011-2499

ER -