Joint learning of appearance and transformation for predicting brain MR image registration

Qian Wang, Minjeong Kim, Guorong Wu, Dinggang Shen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We propose a new approach to register a subject image with a template by leveraging a set of training images that are pre-aligned to the template. We argue that, if voxels in the subject and the training images share similar local appearances and transformations, they likely have a common correspondence in the template. Accordingly, we learn a sparse representation of each subject voxel to reveal several similar candidate voxels in the training images. Each selected training candidate can bridge the correspondence from the subject voxel to the template space, thus predicting the transformation associated with the subject voxel at a confidence level related to the learned sparse coefficient. Following this strategy, we first predict transformations at selected key points and retain multiple predictions at each key point (instead of allowing only a single correspondence). Then, using all key points and their predictions with varying confidences, we adaptively reconstruct the dense transformation field that warps the subject to the template. For robustness and computational speed, we embed this prediction-reconstruction protocol into a multi-resolution hierarchy. Finally, we efficiently refine the estimated transformation field with an existing registration method. We apply our method to registering brain MR images and conclude that it improves registration performance in terms of both time cost and accuracy.
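
The per-key-point prediction step described in the abstract can be illustrated with a small sketch. The snippet below is an illustrative assumption rather than the paper's exact formulation: it sparsely codes a subject patch over patches drawn from the pre-aligned training images using an L1-penalized regression, then combines the selected candidates' known transformations into one coefficient-weighted prediction with a confidence score derived from the sparse code. The paper instead retains multiple candidate predictions per key point before reconstructing the dense field; all function and variable names here are hypothetical.

import numpy as np
from sklearn.linear_model import Lasso


def predict_keypoint_transform(subject_patch, training_patches, training_transforms, alpha=0.05):
    """Predict the displacement at one key point of the subject image (illustrative sketch).

    subject_patch       : (d,) intensity patch around the key point in the subject.
    training_patches    : (d, K) patches sampled from training images pre-aligned to the template.
    training_transforms : (K, 3) displacement vectors already known for those training patches.
    Returns a predicted displacement (or None) and a confidence score in [0, 1].
    """
    # Sparse representation: subject_patch ~= training_patches @ w with few nonzero weights,
    # so only a handful of similar training candidates are selected.
    coder = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=5000)
    coder.fit(training_patches, subject_patch)
    w = coder.coef_                                   # one sparse coefficient per candidate

    total = w.sum()
    if total <= 0:                                    # nothing selected: no reliable prediction
        return None, 0.0

    # Each selected candidate bridges the subject voxel to the template space; combine
    # their transformations, weighted by the learned sparse coefficients (assumed rule).
    predicted = (training_transforms * w[:, None]).sum(axis=0) / total

    # Confidence heuristic (assumption): how well the sparse code reconstructs the patch.
    residual = subject_patch - training_patches @ w
    confidence = max(0.0, 1.0 - np.linalg.norm(residual) / (np.linalg.norm(subject_patch) + 1e-8))
    return predicted, confidence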

Original language: English
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Pages: 499-510
Number of pages: 12
Volume: 7917 LNCS
DOIs: https://doi.org/10.1007/978-3-642-38868-2_42
Publication status: Published - 2013 Jul 12
Externally published: Yes
Event: 23rd International Conference on Information Processing in Medical Imaging, IPMI 2013 - Asilomar, CA, United States
Duration: 2013 Jun 28 → 2013 Jul 3

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 7917 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 23rd International Conference on Information Processing in Medical Imaging, IPMI 2013
Country: United States
City: Asilomar, CA
Period: 13/6/28 → 13/7/3

ASJC Scopus subject areas

  • Computer Science (all)
  • Theoretical Computer Science

Cite this

Wang, Q., Kim, M., Wu, G., & Shen, D. (2013). Joint learning of appearance and transformation for predicting brain MR image registration. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7917 LNCS, pp. 499-510). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 7917 LNCS). https://doi.org/10.1007/978-3-642-38868-2_42

Joint learning of appearance and transformation for predicting brain MR image registration. / Wang, Qian; Kim, Minjeong; Wu, Guorong; Shen, Dinggang.

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Vol. 7917 LNCS. 2013. p. 499-510. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 7917 LNCS).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Wang, Q, Kim, M, Wu, G & Shen, D 2013, Joint learning of appearance and transformation for predicting brain MR image registration. in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). vol. 7917 LNCS, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 7917 LNCS, pp. 499-510, 23rd International Conference on Information Processing in Medical Imaging, IPMI 2013, Asilomar, CA, United States, 13/6/28. https://doi.org/10.1007/978-3-642-38868-2_42
Wang Q, Kim M, Wu G, Shen D. Joint learning of appearance and transformation for predicting brain MR image registration. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Vol. 7917 LNCS. 2013. p. 499-510. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). https://doi.org/10.1007/978-3-642-38868-2_42
Wang, Qian ; Kim, Minjeong ; Wu, Guorong ; Shen, Dinggang. / Joint learning of appearance and transformation for predicting brain MR image registration. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Vol. 7917 LNCS 2013. pp. 499-510 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).
@inproceedings{80e178f8e34246d09325e343e47e21ef,
title = "Joint learning of appearance and transformation for predicting brain MR image registration",
abstract = "We propose a new approach to register a subject image with a template by leveraging a set of training images that are pre-aligned to the template. We argue that, if voxels in the subject and the training images share similar local appearances and transformations, they likely have a common correspondence in the template. Accordingly, we learn a sparse representation of each subject voxel to reveal several similar candidate voxels in the training images. Each selected training candidate can bridge the correspondence from the subject voxel to the template space, thus predicting the transformation associated with the subject voxel at a confidence level related to the learned sparse coefficient. Following this strategy, we first predict transformations at selected key points and retain multiple predictions at each key point (instead of allowing only a single correspondence). Then, using all key points and their predictions with varying confidences, we adaptively reconstruct the dense transformation field that warps the subject to the template. For robustness and computational speed, we embed this prediction-reconstruction protocol into a multi-resolution hierarchy. Finally, we efficiently refine the estimated transformation field with an existing registration method. We apply our method to registering brain MR images and conclude that it improves registration performance in terms of both time cost and accuracy.",
author = "Qian Wang and Minjeong Kim and Guorong Wu and Dinggang Shen",
year = "2013",
month = "7",
day = "12",
doi = "10.1007/978-3-642-38868-2_42",
language = "English",
isbn = "9783642388675",
volume = "7917 LNCS",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
pages = "499--510",
booktitle = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",

}

TY - GEN

T1 - Joint learning of appearance and transformation for predicting brain MR image registration

AU - Wang, Qian

AU - Kim, Minjeong

AU - Wu, Guorong

AU - Shen, Dinggang

PY - 2013/7/12

Y1 - 2013/7/12

N2 - We propose a new approach to register a subject image with a template by leveraging a set of training images that are pre-aligned to the template. We argue that, if voxels in the subject and the training images share similar local appearances and transformations, they likely have a common correspondence in the template. Accordingly, we learn a sparse representation of each subject voxel to reveal several similar candidate voxels in the training images. Each selected training candidate can bridge the correspondence from the subject voxel to the template space, thus predicting the transformation associated with the subject voxel at a confidence level related to the learned sparse coefficient. Following this strategy, we first predict transformations at selected key points and retain multiple predictions at each key point (instead of allowing only a single correspondence). Then, using all key points and their predictions with varying confidences, we adaptively reconstruct the dense transformation field that warps the subject to the template. For robustness and computational speed, we embed this prediction-reconstruction protocol into a multi-resolution hierarchy. Finally, we efficiently refine the estimated transformation field with an existing registration method. We apply our method to registering brain MR images and conclude that it improves registration performance in terms of both time cost and accuracy.

AB - We propose a new approach to register a subject image with a template by leveraging a set of training images that are pre-aligned to the template. We argue that, if voxels in the subject and the training images share similar local appearances and transformations, they likely have a common correspondence in the template. Accordingly, we learn a sparse representation of each subject voxel to reveal several similar candidate voxels in the training images. Each selected training candidate can bridge the correspondence from the subject voxel to the template space, thus predicting the transformation associated with the subject voxel at a confidence level related to the learned sparse coefficient. Following this strategy, we first predict transformations at selected key points and retain multiple predictions at each key point (instead of allowing only a single correspondence). Then, using all key points and their predictions with varying confidences, we adaptively reconstruct the dense transformation field that warps the subject to the template. For robustness and computational speed, we embed this prediction-reconstruction protocol into a multi-resolution hierarchy. Finally, we efficiently refine the estimated transformation field with an existing registration method. We apply our method to registering brain MR images and conclude that it improves registration performance in terms of both time cost and accuracy.

UR - http://www.scopus.com/inward/record.url?scp=84879870568&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84879870568&partnerID=8YFLogxK

U2 - 10.1007/978-3-642-38868-2_42

DO - 10.1007/978-3-642-38868-2_42

M3 - Conference contribution

SN - 9783642388675

VL - 7917 LNCS

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 499

EP - 510

BT - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

ER -