TY - JOUR
T1 - Estimating CT Image from MRI Data Using Structured Random Forest and Auto-Context Model
AU - Alzheimer's Disease Neuroimaging Initiative
AU - Huynh, Tri
AU - Gao, Yaozong
AU - Kang, Jiayin
AU - Wang, Li
AU - Zhang, Pei
AU - Lian, Jun
AU - Shen, Dinggang
N1 - Funding Information:
Part of the data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). This work was partially supported by NIH grants (EB006733, EB008374, EB009634, MH100217, AG041721, AG042599, CA140413). This work was done for the Alzheimer's Disease Neuroimaging Initiative (ADNI).
Publisher Copyright:
© 2015 IEEE.
PY - 2016/1
Y1 - 2016/1
N2 - Computed tomography (CT) imaging is an essential tool in various clinical diagnoses and radiotherapy treatment planning. Since CT image intensities are directly related to positron emission tomography (PET) attenuation coefficients, they are indispensable for attenuation correction (AC) of PET images. However, due to the relatively high dose of radiation exposure in CT scanning, it is advisable to limit the acquisition of CT images. In addition, in the new combined PET and magnetic resonance (MR) imaging scanners, only MR images are available, and these are unfortunately not directly applicable to AC. These issues motivate the development of methods for reliably estimating a CT image from the corresponding MR image of the same subject. In this paper, we propose a learning-based method to tackle this challenging problem. Specifically, we first partition a given MR image into a set of patches. Then, for each patch, we use a structured random forest to directly predict a CT patch as a structured output, with a new ensemble model used to ensure robust prediction. Image features are crafted to achieve multi-level sensitivity, and spatial information is integrated through only rigid-body alignment, avoiding error-prone inter-subject deformable registration. Moreover, we use an auto-context model to iteratively refine the prediction. Finally, we combine all of the predicted CT patches to obtain the final prediction for the given MR image. We demonstrate the efficacy of our method on two datasets: human brain and prostate images. Experimental results show that our method can accurately predict CT images in various scenarios, even for images undergoing large shape variation, and that it outperforms two state-of-the-art methods.
AB - Computed tomography (CT) imaging is an essential tool in various clinical diagnoses and radiotherapy treatment planning. Since CT image intensities are directly related to positron emission tomography (PET) attenuation coefficients, they are indispensable for attenuation correction (AC) of PET images. However, due to the relatively high dose of radiation exposure in CT scanning, it is advisable to limit the acquisition of CT images. In addition, in the new combined PET and magnetic resonance (MR) imaging scanners, only MR images are available, and these are unfortunately not directly applicable to AC. These issues motivate the development of methods for reliably estimating a CT image from the corresponding MR image of the same subject. In this paper, we propose a learning-based method to tackle this challenging problem. Specifically, we first partition a given MR image into a set of patches. Then, for each patch, we use a structured random forest to directly predict a CT patch as a structured output, with a new ensemble model used to ensure robust prediction. Image features are crafted to achieve multi-level sensitivity, and spatial information is integrated through only rigid-body alignment, avoiding error-prone inter-subject deformable registration. Moreover, we use an auto-context model to iteratively refine the prediction. Finally, we combine all of the predicted CT patches to obtain the final prediction for the given MR image. We demonstrate the efficacy of our method on two datasets: human brain and prostate images. Experimental results show that our method can accurately predict CT images in various scenarios, even for images undergoing large shape variation, and that it outperforms two state-of-the-art methods.
UR - http://www.scopus.com/inward/record.url?scp=84959335586&partnerID=8YFLogxK
U2 - 10.1109/TMI.2015.2461533
DO - 10.1109/TMI.2015.2461533
M3 - Article
C2 - 26241970
AN - SCOPUS:84959335586
VL - 35
SP - 174
EP - 183
JO - IEEE Transactions on Medical Imaging
JF - IEEE Transactions on Medical Imaging
SN - 0278-0062
IS - 1
M1 - 7169564
ER -