TY - GEN
T1 - Convolutional neural network for reconstruction of 7T-like images from 3T MRI using appearance and anatomical features
AU - Bahrami, Khosro
AU - Shi, Feng
AU - Rekik, Islem
AU - Shen, Dinggang
PY - 2016
Y1 - 2016
N2 - The advanced 7 Tesla (7T) Magnetic Resonance Imaging (MRI) scanners provide images with higher-resolution anatomy than 3T MRI scanners, thus facilitating early diagnosis of brain diseases. However, 7T MRI scanners are less accessible than 3T MRI scanners. This motivates us to reconstruct 7T-like images from 3T MRI. We propose a deep Convolutional Neural Network (CNN) architecture, which uses appearance (intensity) and anatomical (brain tissue label) features as input to non-linearly map 3T MRI to 7T MRI. In the training step, we train the CNN by feeding it both the appearance and anatomical features of each 3T patch; it outputs the intensity of the center voxel in the corresponding 7T patch. In the testing step, we apply the trained CNN to map each input 3T patch to a 7T-like image patch. Performance is evaluated on 15 subjects, each with both 3T and 7T MR images. Both visual and numerical results show that our method outperforms the comparison methods.
AB - The advanced 7 Tesla (7T) Magnetic Resonance Imaging (MRI) scanners provide images with higher-resolution anatomy than 3T MRI scanners, thus facilitating early diagnosis of brain diseases. However, 7T MRI scanners are less accessible than 3T MRI scanners. This motivates us to reconstruct 7T-like images from 3T MRI. We propose a deep Convolutional Neural Network (CNN) architecture, which uses appearance (intensity) and anatomical (brain tissue label) features as input to non-linearly map 3T MRI to 7T MRI. In the training step, we train the CNN by feeding it both the appearance and anatomical features of each 3T patch; it outputs the intensity of the center voxel in the corresponding 7T patch. In the testing step, we apply the trained CNN to map each input 3T patch to a 7T-like image patch. Performance is evaluated on 15 subjects, each with both 3T and 7T MR images. Both visual and numerical results show that our method outperforms the comparison methods.
UR - http://www.scopus.com/inward/record.url?scp=84992512809&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84992512809&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-46976-8_5
DO - 10.1007/978-3-319-46976-8_5
M3 - Conference contribution
AN - SCOPUS:84992512809
SN - 9783319469751
VL - 10008 LNCS
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 39
EP - 47
BT - Deep Learning and Data Labeling for Medical Applications - 1st International Workshop, LABELS 2016, and 2nd International Workshop, DLMIA 2016 Held in Conjunction with MICCAI 2016, Proceedings
PB - Springer Verlag
T2 - 1st International Workshop on Large-Scale Annotation of Biomedical Data and Expert Label Synthesis, LABELS 2016, and 2nd International Workshop on Deep Learning in Medical Image Analysis, DLMIA 2016, held in conjunction with the 19th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2016
Y2 - 21 October 2016 through 21 October 2016
ER -