TY - GEN
T1 - Unpaired deep cross-modality synthesis with fast training
AU - Xiang, Lei
AU - Li, Yang
AU - Lin, Weili
AU - Wang, Qian
AU - Shen, Dinggang
N1 - Publisher Copyright:
© Springer Nature Switzerland AG 2018.
PY - 2018
Y1 - 2018
N2 - Cross-modality synthesis converts an input image of one modality to an output image of another modality, and is thus very valuable for both scientific research and clinical applications. Most existing cross-modality synthesis methods require a large dataset of paired data for training, while it is often non-trivial to acquire perfectly aligned images of different modalities for the same subject. Even tiny misalignment (e.g., due to patient/organ motion) between the cross-modality paired images may adversely affect training and corrupt the synthesized images. In this paper, we present a novel method for cross-modality image synthesis trained with unpaired data. Specifically, we adopt generative adversarial networks and conduct fast training in a cyclic way. A new structural dissimilarity loss, which captures detailed anatomies, is introduced to enhance the quality of the synthesized images. We validate our proposed algorithm on three popular image synthesis tasks, including brain MR-to-CT, prostate MR-to-CT, and brain 3T-to-7T. The experimental results demonstrate that our proposed method can achieve good synthesis performance using unpaired data only.
AB - Cross-modality synthesis converts an input image of one modality to an output image of another modality, and is thus very valuable for both scientific research and clinical applications. Most existing cross-modality synthesis methods require a large dataset of paired data for training, while it is often non-trivial to acquire perfectly aligned images of different modalities for the same subject. Even tiny misalignment (e.g., due to patient/organ motion) between the cross-modality paired images may adversely affect training and corrupt the synthesized images. In this paper, we present a novel method for cross-modality image synthesis trained with unpaired data. Specifically, we adopt generative adversarial networks and conduct fast training in a cyclic way. A new structural dissimilarity loss, which captures detailed anatomies, is introduced to enhance the quality of the synthesized images. We validate our proposed algorithm on three popular image synthesis tasks, including brain MR-to-CT, prostate MR-to-CT, and brain 3T-to-7T. The experimental results demonstrate that our proposed method can achieve good synthesis performance using unpaired data only.
UR - http://www.scopus.com/inward/record.url?scp=85057231018&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-00889-5_18
DO - 10.1007/978-3-030-00889-5_18
M3 - Conference contribution
AN - SCOPUS:85057231018
SN - 9783030008888
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 155
EP - 164
BT - Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support - 4th International Workshop, DLMIA 2018 and 8th International Workshop, ML-CDS 2018 Held in Conjunction with MICCAI 2018
A2 - Maier-Hein, Lena
A2 - Syeda-Mahmood, Tanveer
A2 - Taylor, Zeike
A2 - Lu, Zhi
A2 - Stoyanov, Danail
A2 - Madabhushi, Anant
A2 - Tavares, João Manuel R.S.
A2 - Nascimento, Jacinto C.
A2 - Moradi, Mehdi
A2 - Martel, Anne
A2 - Papa, Joao Paulo
A2 - Conjeti, Sailesh
A2 - Belagiannis, Vasileios
A2 - Greenspan, Hayit
A2 - Carneiro, Gustavo
A2 - Bradley, Andrew
PB - Springer Verlag
T2 - 4th International Workshop on Deep Learning in Medical Image Analysis, DLMIA 2018 and 8th International Workshop on Multimodal Learning for Clinical Decision Support, ML-CDS 2018 Held in Conjunction with MICCAI 2018
Y2 - 20 September 2018 through 20 September 2018
ER -