Deep Learning Based Multi-Modal Fusion for Fast MR Reconstruction

Lei Xiang, Yong Chen, Weitang Chang, Yiqiang Zhan, Weili Lin, Qian Wang, Dinggang Shen

Research output: Contribution to journal › Article

5 Citations (Scopus)

Abstract

T1-weighted image (T1WI) and T2-weighted image (T2WI) are the two routinely acquired magnetic resonance (MR) modalities that provide complementary information for clinical and research use. However, their relatively long acquisition times make the acquired images vulnerable to motion artifacts. To speed up the imaging process, various algorithms have been proposed to reconstruct high-quality images from under-sampled k-space data. However, most existing algorithms rely on a single modality for image reconstruction. In this paper, we propose to combine complementary MR acquisitions (specifically, a T1WI and an under-sampled T2WI) to reconstruct the high-quality image (i.e., the one corresponding to the fully sampled T2WI). To the best of our knowledge, this is the first work to fuse multi-modal MR acquisitions through deep learning to speed up the reconstruction of a target image. Specifically, we present a novel deep learning approach, namely Dense-Unet, to accomplish the reconstruction task. The proposed Dense-Unet requires fewer parameters and less computation while achieving promising performance. Our results show that Dense-Unet can reconstruct a 3D T2WI volume in less than 10 seconds at a k-space under-sampling rate of 8, with negligible aliasing artifacts or signal-to-noise ratio (SNR) loss. Experiments also demonstrate the excellent transfer capability of Dense-Unet when applied to datasets acquired by different MR scanners. These results imply great potential for our method in many clinical scenarios.
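The under-sampling scenario described in the abstract can be illustrated with a minimal sketch: starting from a synthetic "fully sampled" slice, keep every 8th k-space line (an 8-fold under-sampling rate) and reconstruct by zero-filling. This is a hedged simulation, not the authors' pipeline — the image, mask pattern, and sizes are assumptions for illustration, and Dense-Unet itself is not reproduced here; in the paper, an aliased input like this would be refined by the network together with the co-registered T1WI.

```python
import numpy as np

# Synthetic stand-in for a fully sampled 2D T2WI slice (assumption:
# any real-valued image works; real data would come from the scanner).
rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))

# Move to k-space via a centered 2D FFT.
kspace = np.fft.fftshift(np.fft.fft2(image))

# Cartesian under-sampling mask: keep every 8th phase-encode line,
# which corresponds to the paper's under-sampling rate of 8.
mask = np.zeros((64, 64), dtype=bool)
mask[::8, :] = True
undersampled = np.where(mask, kspace, 0)

# Zero-filled reconstruction: the aliased image a learned
# reconstruction network would take as (part of) its input.
zero_filled = np.real(np.fft.ifft2(np.fft.ifftshift(undersampled)))

effective_rate = mask.size / mask.sum()
print(effective_rate)  # → 8.0
```

In the multi-modal setting the paper describes, this zero-filled T2WI and the fully sampled T1WI would be stacked as input channels, letting the network exploit structure shared across the two contrasts.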

Original language: English
Journal: IEEE Transactions on Biomedical Engineering
DOI: 10.1109/TBME.2018.2883958
Publication status: Accepted/In press - 2018 Jan 1
Externally published: Yes


Keywords

  • Deep learning
  • Dense Block
  • Fast MR Reconstruction
  • Multi-Modal Fusion

ASJC Scopus subject areas

  • Biomedical Engineering

Cite this

Deep Learning Based Multi-Modal Fusion for Fast MR Reconstruction. / Xiang, Lei; Chen, Yong; Chang, Weitang; Zhan, Yiqiang; Lin, Weili; Wang, Qian; Shen, Dinggang.

In: IEEE Transactions on Biomedical Engineering, 01.01.2018.

Research output: Contribution to journal › Article

Xiang, Lei ; Chen, Yong ; Chang, Weitang ; Zhan, Yiqiang ; Lin, Weili ; Wang, Qian ; Shen, Dinggang. / Deep Learning Based Multi-Modal Fusion for Fast MR Reconstruction. In: IEEE Transactions on Biomedical Engineering. 2018.
@article{d4821ca09dc74ffca4731793d7e8ac1a,
title = "Deep Learning Based Multi-Modal Fusion for Fast MR Reconstruction",
abstract = "T1-weighted image (T1WI) and T2-weighted image (T2WI) are the two routinely acquired magnetic resonance (MR) modalities that can provide complementary information for clinical and research usages. However, the relatively long acquisition time makes the acquired image vulnerable to motion artifacts. To speed up the imaging process, various algorithms have been proposed to reconstruct high-quality images from under-sampled k-space data. However, most of the existing algorithms only rely on mono-modality acquisition for the image reconstruction. In this paper, we propose to combine complementary MR acquisitions (i.e., T1WI and under-sampled T2WI particularly) to reconstruct the high-quality image (i.e., corresponding to the fully-sampled T2WI). To the best of our knowledge, this is the first work to fuse multi-modal MR acquisitions through deep learning to speed up the reconstruction of a certain target image. Specifically, we present a novel deep learning approach, namely Dense-Unet, to accomplish the reconstruction task. The proposed Dense-Unet requires fewer parameters and less computation, while achieving promising performance. Our results have shown that Dense-Unet can reconstruct a 3D T2WI volume in less than 10 seconds with an under-sampling rate of 8 for the k-space and negligible aliasing artifacts or signal-noise-ratio (SNR) loss. Experiments also demonstrate excellent transferring capability of Dense-Unet when applied to the datasets acquired by different MR scanners. The above results imply great potential of our method in many clinical scenarios.",
keywords = "Deep learning, Dense Block, Fast MR Reconstruction, Multi-Modal Fusion",
author = "Lei Xiang and Yong Chen and Weitang Chang and Yiqiang Zhan and Weili Lin and Qian Wang and Dinggang Shen",
year = "2018",
month = "1",
day = "1",
doi = "10.1109/TBME.2018.2883958",
language = "English",
journal = "IEEE Transactions on Biomedical Engineering",
issn = "0018-9294",
publisher = "IEEE Computer Society",

}

TY - JOUR

T1 - Deep Learning Based Multi-Modal Fusion for Fast MR Reconstruction

AU - Xiang, Lei

AU - Chen, Yong

AU - Chang, Weitang

AU - Zhan, Yiqiang

AU - Lin, Weili

AU - Wang, Qian

AU - Shen, Dinggang

PY - 2018/1/1

Y1 - 2018/1/1

N2 - T1-weighted image (T1WI) and T2-weighted image (T2WI) are the two routinely acquired magnetic resonance (MR) modalities that can provide complementary information for clinical and research usages. However, the relatively long acquisition time makes the acquired image vulnerable to motion artifacts. To speed up the imaging process, various algorithms have been proposed to reconstruct high-quality images from under-sampled k-space data. However, most of the existing algorithms only rely on mono-modality acquisition for the image reconstruction. In this paper, we propose to combine complementary MR acquisitions (i.e., T1WI and under-sampled T2WI particularly) to reconstruct the high-quality image (i.e., corresponding to the fully-sampled T2WI). To the best of our knowledge, this is the first work to fuse multi-modal MR acquisitions through deep learning to speed up the reconstruction of a certain target image. Specifically, we present a novel deep learning approach, namely Dense-Unet, to accomplish the reconstruction task. The proposed Dense-Unet requires fewer parameters and less computation, while achieving promising performance. Our results have shown that Dense-Unet can reconstruct a 3D T2WI volume in less than 10 seconds with an under-sampling rate of 8 for the k-space and negligible aliasing artifacts or signal-noise-ratio (SNR) loss. Experiments also demonstrate excellent transferring capability of Dense-Unet when applied to the datasets acquired by different MR scanners. The above results imply great potential of our method in many clinical scenarios.

KW - Deep learning

KW - Dense Block

KW - Fast MR Reconstruction

KW - Multi-Modal Fusion

UR - http://www.scopus.com/inward/record.url?scp=85057824177&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85057824177&partnerID=8YFLogxK

U2 - 10.1109/TBME.2018.2883958

DO - 10.1109/TBME.2018.2883958

M3 - Article

C2 - 30507491

AN - SCOPUS:85057824177

JO - IEEE Transactions on Biomedical Engineering

JF - IEEE Transactions on Biomedical Engineering

SN - 0018-9294

ER -