3D auto-context-based locality adaptive multi-modality GANs for PET synthesis

Yan Wang, Luping Zhou, Biting Yu, Lei Wang, Chen Zu, David S. Lalush, Weili Lin, Xi Wu, Jiliu Zhou, Dinggang Shen

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

Positron emission tomography (PET) has seen substantial use in recent years. To minimize the potential health risk caused by the tracer radiation inherent to PET scans, it is of great interest to synthesize the high-quality PET image from the low-dose one to reduce the radiation exposure. In this paper, we propose a 3D auto-context-based locality adaptive multi-modality generative adversarial networks model (LA-GANs) to synthesize the high-quality FDG PET image from the low-dose one with the accompanying MR images that provide anatomical information. Our work has four contributions. First, different from the traditional methods that treat each image modality as an input channel and apply the same kernel to convolve the whole image, we argue that the contributions of different modalities could vary at different image locations, and therefore a unified kernel for a whole image is not optimal. To address this issue, we propose a locality adaptive strategy for multi-modality fusion. Second, we utilize a 1×1×1 kernel to learn this locality adaptive fusion so that the number of additional parameters incurred by our method is kept to a minimum. Third, the proposed locality adaptive fusion mechanism is learned jointly with the PET image synthesis in a 3D conditional GANs model, which generates high-quality PET images by employing large-sized image patches and hierarchical features. Fourth, we apply the auto-context strategy to our scheme and propose an auto-context LA-GANs model to further refine the quality of synthesized images. Experimental results show that our method outperforms the traditional multi-modality fusion methods used in deep networks, as well as the state-of-the-art PET estimation approaches.
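The first two contributions in the abstract describe replacing a single shared convolution over channel-stacked modalities with per-voxel fusion weights learned by 1×1×1 kernels. The snippet below is a minimal PyTorch-style sketch of one plausible reading of that idea, not the authors' implementation; the softmax normalisation, layer sizes, and all names are illustrative assumptions.

```python
# Hedged sketch of locality-adaptive multi-modality fusion with 1x1x1 kernels.
# Not the paper's released code: the softmax normalisation and all names are
# assumptions made for illustration.
import torch
import torch.nn as nn

class LocalityAdaptiveFusion(nn.Module):
    """Fuse a low-dose PET volume with an MR volume using per-voxel weights."""

    def __init__(self, n_modalities: int = 2):
        super().__init__()
        # A 1x1x1 convolution adds very few parameters: it maps the stacked
        # modalities to one fusion weight per modality at every voxel.
        self.weight_net = nn.Conv3d(n_modalities, n_modalities, kernel_size=1)

    def forward(self, low_dose_pet: torch.Tensor, mri: torch.Tensor) -> torch.Tensor:
        # Each input: (batch, 1, D, H, W); stack modalities along the channel axis.
        x = torch.cat([low_dose_pet, mri], dim=1)            # (B, 2, D, H, W)
        weights = torch.softmax(self.weight_net(x), dim=1)   # per-voxel weights summing to 1
        return (weights * x).sum(dim=1, keepdim=True)        # (B, 1, D, H, W)

if __name__ == "__main__":
    pet = torch.randn(1, 1, 64, 64, 64)   # toy low-dose PET patch
    mri = torch.randn(1, 1, 64, 64, 64)   # toy MR patch
    print(LocalityAdaptiveFusion()(pet, mri).shape)  # torch.Size([1, 1, 64, 64, 64])
```

Because the fusion weights vary voxel by voxel, the low-dose PET can dominate in some regions while MRI contributes more where anatomical detail matters, which is the behaviour the locality-adaptive strategy is intended to allow.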

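For the third and fourth contributions, the fused volume drives a 3D conditional GAN, and the auto-context strategy feeds the first-stage synthesis back as an extra input for a refinement stage. The sketch below illustrates only that cascading pattern under assumed toy components; `Generator3D` is a stand-in, not the paper's architecture, and each stage would in practice be a trained LA-GANs model.

```python
# Hedged sketch of the auto-context cascade: a later stage sees the previous
# stage's synthesised PET as additional context. Toy, untrained components.
import torch
import torch.nn as nn

class Generator3D(nn.Module):
    """Stand-in 3D generator; the paper's generator/discriminator are not reproduced here."""

    def __init__(self, in_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def auto_context_synthesis(low_dose_pet: torch.Tensor, mri: torch.Tensor,
                           n_stages: int = 2) -> torch.Tensor:
    """Cascade the synthesis: stage k > 0 also receives stage k-1's estimate."""
    estimate = None
    for _ in range(n_stages):
        inputs = [low_dose_pet, mri] + ([estimate] if estimate is not None else [])
        # In practice each stage is a separately trained model; fresh untrained
        # generators are built here only to show the data flow.
        stage_generator = Generator3D(in_channels=len(inputs))
        estimate = stage_generator(torch.cat(inputs, dim=1))
    return estimate

if __name__ == "__main__":
    pet = torch.randn(1, 1, 32, 32, 32)
    mri = torch.randn(1, 1, 32, 32, 32)
    print(auto_context_synthesis(pet, mri).shape)  # torch.Size([1, 1, 32, 32, 32])
```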
Original language: English
Journal: IEEE Transactions on Medical Imaging
ISSN: 0278-0062
DOI: 10.1109/TMI.2018.2884053
Publication status: Accepted/In press - 1 January 2018

Keywords

  • Generative adversarial networks (GANs)
  • Image synthesis
  • locality adaptive fusion
  • multi-modality
  • Positron emission tomography (PET)

ASJC Scopus subject areas

  • Software
  • Radiological and Ultrasound Technology
  • Computer Science Applications
  • Electrical and Electronic Engineering

Cite this

3D auto-context-based locality adaptive multi-modality GANs for PET synthesis. / Wang, Yan; Zhou, Luping; Yu, Biting; Wang, Lei; Zu, Chen; Lalush, David S.; Lin, Weili; Wu, Xi; Zhou, Jiliu; Shen, Dinggang.

In: IEEE Transactions on Medical Imaging, 01.01.2018.

Research output: Contribution to journal › Article

Wang, Yan ; Zhou, Luping ; Yu, Biting ; Wang, Lei ; Zu, Chen ; Lalush, David S. ; Lin, Weili ; Wu, Xi ; Zhou, Jiliu ; Shen, Dinggang. / 3D auto-context-based locality adaptive multi-modality GANs for PET synthesis. In: IEEE Transactions on Medical Imaging. 2018.