Unpaired deep cross-modality synthesis with fast training

Lei Xiang, Yang Li, Weili Lin, Qian Wang, Dinggang Shen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Cross-modality synthesis converts an input image of one modality into an output image of another modality, and is therefore valuable for both scientific research and clinical applications. Most existing cross-modality synthesis methods require a large dataset of paired data for training, yet it is often non-trivial to acquire perfectly aligned images of different modalities for the same subject. Even tiny misalignments (e.g., due to patient/organ motion) between the cross-modality image pairs can adversely affect training and corrupt the synthesized images. In this paper, we present a novel method for cross-modality image synthesis that trains on unpaired data. Specifically, we adopt generative adversarial networks and conduct fast training in a cyclic way. A new structural dissimilarity loss, which captures detailed anatomy, is introduced to enhance the quality of the synthesized images. We validate the proposed algorithm on three popular image synthesis tasks: brain MR-to-CT, prostate MR-to-CT, and brain 3T-to-7T. Experimental results demonstrate that our method achieves good synthesis performance using unpaired data only.
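
The abstract describes cycle-consistent adversarial training combined with a structural dissimilarity term. The snippet below is a minimal, hypothetical PyTorch sketch of how such an objective could be assembled; it is not the authors' implementation, and the SSIM window size, stability constants, generator names (G_ab, G_ba), and loss weights are illustrative assumptions rather than values taken from the paper.

```python
# Hypothetical sketch of a cycle-consistency + structural-dissimilarity objective.
# Not the authors' code; all hyperparameters are common defaults, not paper values.
import torch
import torch.nn.functional as F


def ssim(x, y, window_size=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """Mean SSIM between two image batches in [0, 1], using a uniform window."""
    pad = window_size // 2
    mu_x = F.avg_pool2d(x, window_size, stride=1, padding=pad)
    mu_y = F.avg_pool2d(y, window_size, stride=1, padding=pad)
    var_x = F.avg_pool2d(x * x, window_size, stride=1, padding=pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, window_size, stride=1, padding=pad) - mu_y ** 2
    cov_xy = F.avg_pool2d(x * y, window_size, stride=1, padding=pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()


def structural_dissimilarity(x, y):
    """DSSIM = (1 - SSIM) / 2; small when fine anatomical structure is preserved."""
    return (1.0 - ssim(x, y)) / 2.0


def cycle_losses(G_ab, G_ba, real_a, real_b, lambda_cyc=10.0, lambda_struct=1.0):
    """Cycle-consistency (L1) plus structural dissimilarity on the reconstructions.

    G_ab maps modality A -> B (e.g. MR -> CT) and G_ba maps B -> A; the weights
    are placeholders for illustration only.
    """
    rec_a = G_ba(G_ab(real_a))  # A -> B -> A
    rec_b = G_ab(G_ba(real_b))  # B -> A -> B
    l_cyc = F.l1_loss(rec_a, real_a) + F.l1_loss(rec_b, real_b)
    l_struct = (structural_dissimilarity(rec_a, real_a)
                + structural_dissimilarity(rec_b, real_b))
    return lambda_cyc * l_cyc + lambda_struct * l_struct
```

In a full unpaired setup of this kind, this term would be added to the adversarial losses of the two generator/discriminator pairs during training.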

Original language: English
Title of host publication: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support - 4th International Workshop, DLMIA 2018 and 8th International Workshop, ML-CDS 2018 Held in Conjunction with MICCAI 2018
Editors: Lena Maier-Hein, Tanveer Syeda-Mahmood, Zeike Taylor, Zhi Lu, Danail Stoyanov, Anant Madabhushi, João Manuel R.S. Tavares, Jacinto C. Nascimento, Mehdi Moradi, Anne Martel, Joao Paulo Papa, Sailesh Conjeti, Vasileios Belagiannis, Hayit Greenspan, Gustavo Carneiro, Andrew Bradley
Publisher: Springer Verlag
Pages: 155-164
Number of pages: 10
ISBN (Print): 9783030008888
DOI: 10.1007/978-3-030-00889-5_18
Publication status: Published - 2018 Jan 1
Externally published: Yes
Event: 4th International Workshop on Deep Learning in Medical Image Analysis, DLMIA 2018 and 8th International Workshop on Multimodal Learning for Clinical Decision Support, ML-CDS 2018 Held in Conjunction with MICCAI 2018 - Granada, Spain
Duration: 2018 Sep 20 to 2018 Sep 20

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11045 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 4th International Workshop on Deep Learning in Medical Image Analysis, DLMIA 2018 and 8th International Workshop on Multimodal Learning for Clinical Decision Support, ML-CDS 2018 Held in Conjunction with MICCAI 2018
Country: Spain
City: Granada
Period: 18/9/20 to 18/9/20


ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Xiang, L., Li, Y., Lin, W., Wang, Q., & Shen, D. (2018). Unpaired deep cross-modality synthesis with fast training. In L. Maier-Hein, T. Syeda-Mahmood, Z. Taylor, Z. Lu, D. Stoyanov, A. Madabhushi, J. M. R. S. Tavares, J. C. Nascimento, M. Moradi, A. Martel, J. P. Papa, S. Conjeti, V. Belagiannis, H. Greenspan, G. Carneiro, ... A. Bradley (Eds.), Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support - 4th International Workshop, DLMIA 2018 and 8th International Workshop, ML-CDS 2018 Held in Conjunction with MICCAI 2018 (pp. 155-164). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 11045 LNCS). Springer Verlag. https://doi.org/10.1007/978-3-030-00889-5_18
