Computed tomography (CT) is widely used for dose planning in the radiotherapy of prostate cancer. However, CT has low tissue contrast, which makes manual contouring difficult. In contrast, magnetic resonance (MR) imaging provides high tissue contrast and is thus ideal for manual contouring. If an MR image can be registered to the CT image of the same patient, the contouring accuracy on CT could be substantially improved, eventually leading to higher treatment efficacy. In this paper, we propose a learning-based approach for multimodal image registration. First, to bridge the appearance gap between modalities, a structured random forest with an auto-context model is learned to synthesize MRI from CT and vice versa. Then, MRI-to-CT registration is steered in a dual manner, registering images of the same appearance: (1) the synthesized CT to the real CT, and (2) the real MRI to the synthesized MRI. Next, a dual-core deformation fusion framework is developed to iteratively and effectively combine these two registration results. Experiments on pelvic CT and MR images show that the proposed method improves registration performance over existing non-learning-based registration methods.
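To make the dual registration idea concrete, the following is a minimal, purely illustrative sketch. It is not the paper's method: the actual approach uses deformable registration of 3-D volumes and an iterative dual-core fusion, whereas here each "image" is a 1-D signal, the "deformation" is a single integer shift found by exhaustive search, and the two same-modality estimates are fused by simple averaging. All function names (`register`, `dual_fuse`) are hypothetical.

```python
# Toy 1-D illustration of dual registration with synthesized images.
# Assumption: perfect synthesis, so a synthesized image shares the
# appearance of the target modality but the anatomy of its source.

def register(moving, fixed, max_shift=5):
    """Exhaustive-search 1-D 'registration': find the integer shift s
    minimizing mean squared difference over the overlapping samples."""
    best_shift, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(moving[i], fixed[i + s])
                 for i in range(len(moving))
                 if 0 <= i + s < len(fixed)]
        cost = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

def dual_fuse(mri, ct, synth_ct, synth_mri):
    """Fuse the two same-appearance registrations by averaging their
    estimated shifts (a crude stand-in for dual-core fusion)."""
    s1 = register(synth_ct, ct)    # (1) synthesized CT -> real CT
    s2 = register(mri, synth_mri)  # (2) real MRI -> synthesized MRI
    return (s1 + s2) / 2

# Example: an anatomy "bump" shifted by 2 samples between modalities.
ct  = [0, 0, 1, 2, 3, 2, 1, 0, 0, 0]
mri = [1, 2, 3, 2, 1, 0, 0, 0, 0, 0]   # same anatomy, offset by 2
shift = dual_fuse(mri, ct, synth_ct=mri, synth_mri=ct)
print(shift)  # -> 2.0
```

Because both paths register images of the same appearance, a simple intensity-based cost (here, squared differences) suffices; in the true multimodal setting that cost would fail, which is precisely what the synthesis step is designed to avoid.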