Locality adaptive multi-modality GANs for high-quality PET image synthesis

Yan Wang, Luping Zhou, Lei Wang, Biting Yu, Chen Zu, David S. Lalush, Weili Lin, Xi Wu, Jiliu Zhou, Dinggang Shen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Positron emission tomography (PET) has been widely used in recent years. To minimize the potential health risks caused by the tracer radiation inherent in PET scans, it is of great interest to synthesize a high-quality full-dose PET image from a low-dose one, reducing radiation exposure while maintaining image quality. In this paper, we propose a locality-adaptive multi-modality generative adversarial networks model (LA-GANs) that synthesizes the full-dose PET image from both the low-dose image and the accompanying T1-weighted MRI, incorporating anatomical information for better PET image synthesis. This paper makes the following contributions. First, we propose a new mechanism for fusing multi-modality information in deep neural networks. Unlike traditional methods that treat each image modality as an input channel and apply the same kernel to convolve the whole image, we argue that the contributions of different modalities can vary across image locations, so a unified kernel for the whole image is not appropriate. To address this issue, we propose a locality-adaptive method for multi-modality fusion. Second, to learn this locality-adaptive fusion, we use 1 × 1 × 1 kernels so that the number of additional parameters incurred by our method is kept to a minimum. This also naturally produces a fused image that acts as a pseudo input for the subsequent learning stages. Third, the proposed locality-adaptive fusion mechanism is learned jointly with the PET image synthesis in our end-to-end trained 3D conditional GANs model. Our 3D GANs model generates high-quality PET images by employing large-sized image patches and hierarchical features. Experimental results show that our method outperforms both the traditional multi-modality fusion methods used in deep networks and the state-of-the-art PET estimation approaches.
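The core idea of the locality-adaptive fusion described above — per-voxel fusion weights for the two modalities rather than one shared weight for the whole image — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: in the paper the weights are produced by learned 1 × 1 × 1 convolutions trained jointly with the GAN, whereas here random stand-in weights are simply normalized voxel-wise to show the fusion mechanism itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two co-registered 3D volumes: a low-dose PET patch and a T1-weighted
# MRI patch (tiny random stand-ins for illustration only).
D, H, W = 4, 4, 4
low_dose_pet = rng.random((D, H, W))
t1_mri = rng.random((D, H, W))

# Locality-adaptive fusion: each voxel gets its OWN pair of fusion
# weights, constrained to be non-negative and to sum to 1. In the paper
# these weights come from learned 1x1x1 convolutions; here they are
# drawn randomly and normalized with a voxel-wise softmax.
logits = rng.random((2, D, H, W))
weights = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)

# The fused pseudo-image is a per-voxel convex combination of the two
# modalities; it then serves as input to the synthesis network.
fused = weights[0] * low_dose_pet + weights[1] * t1_mri

assert fused.shape == (D, H, W)
# Every fused voxel lies between the two modality values.
assert np.all(fused >= np.minimum(low_dose_pet, t1_mri) - 1e-9)
assert np.all(fused <= np.maximum(low_dose_pet, t1_mri) + 1e-9)
```

Because the weights are applied voxel-wise, a region where anatomy from MRI is more informative can lean on the T1 image, while regions with reliable tracer signal can lean on the low-dose PET — this spatial variation is what a single image-wide kernel cannot express.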

Original language: English
Title of host publication: Medical Image Computing and Computer Assisted Intervention – MICCAI 2018 - 21st International Conference, 2018, Proceedings
Editors: Julia A. Schnabel, Christos Davatzikos, Carlos Alberola-López, Gabor Fichtinger, Alejandro F. Frangi
Publisher: Springer Verlag
Pages: 329-337
Number of pages: 9
ISBN (Print): 9783030009274
DOI: 10.1007/978-3-030-00928-1_38
Publication status: Published - 2018 Jan 1
Externally published: Yes
Event: 21st International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2018 - Granada, Spain
Duration: 2018 Sep 16 - 2018 Sep 20

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11070 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Wang, Y., Zhou, L., Wang, L., Yu, B., Zu, C., Lalush, D. S., ... Shen, D. (2018). Locality adaptive multi-modality GANs for high-quality PET image synthesis. In J. A. Schnabel, C. Davatzikos, C. Alberola-López, G. Fichtinger, & A. F. Frangi (Eds.), Medical Image Computing and Computer Assisted Intervention – MICCAI 2018 - 21st International Conference, 2018, Proceedings (pp. 329-337). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 11070 LNCS). Springer Verlag. https://doi.org/10.1007/978-3-030-00928-1_38

