Accurate localization of the prostate in CT images plays an important role in image-guided radiation therapy. However, it faces two main challenges: the low contrast of CT images, and the uncertainty introduced by the possible presence of bowel gas. In this paper, a learning-based hierarchical framework is proposed to address these two challenges. The main contributions of the proposed framework are as follows: (1) Anatomical features are extracted from the input images, and the most salient features at distinctive image regions are selected to localize the prostate; regions with salient features that are nevertheless irrelevant to prostate localization are filtered out. (2) An image similarity measure is explicitly defined and learned to enforce consistency between the distance of the learned features and the underlying prostate alignment. (3) An online learning mechanism adaptively integrates both inter-patient and patient-specific information to localize the prostate. Based on the learned image similarity measure, the planning image of the underlying patient is aligned to each new treatment image for segmentation. The proposed method is evaluated on 163 3D prostate CT images from 10 patients, and promising experimental results are obtained.
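To make the localization-by-learned-similarity idea concrete, the following is a minimal 2D sketch, not the paper's method: it assumes hand-crafted patch features (mean intensity and gradient magnitudes) standing in for the learned salient anatomical features, a weighted feature distance standing in for the learned similarity measure, and an exhaustive search standing in for the hierarchical alignment of the planning image to a treatment image. All function names and parameters here are hypothetical.

```python
import numpy as np

def block_features(img, top_left, size):
    """Extract simple stand-in features (mean intensity and mean
    gradient magnitudes) from an image patch. The paper instead
    learns and selects salient anatomical features."""
    r, c = top_left
    h, w = size
    patch = img[r:r + h, c:c + w].astype(float)
    gy, gx = np.gradient(patch)
    return np.array([patch.mean(), np.abs(gx).mean(), np.abs(gy).mean()])

def learned_similarity(f1, f2, weights):
    """Similarity as a negative weighted squared feature distance;
    the weights play the role of the learned similarity measure."""
    return -np.sum(weights * (f1 - f2) ** 2)

def localize(treatment_img, planning_feat, size, weights, step=1):
    """Slide a window over the treatment image and return the
    position whose features best match the planning-image prostate
    features under the (assumed) learned similarity."""
    best, best_pos = -np.inf, None
    H, W = treatment_img.shape
    h, w = size
    for r in range(0, H - h + 1, step):
        for c in range(0, W - w + 1, step):
            feat = block_features(treatment_img, (r, c), size)
            s = learned_similarity(feat, planning_feat, weights)
            if s > best:
                best, best_pos = s, (r, c)
    return best_pos

# Toy usage: a bright 10x10 "prostate" shifted between a synthetic
# planning image and a synthetic treatment image.
planning = np.zeros((40, 40))
planning[10:20, 12:22] = 1.0
treatment = np.zeros((40, 40))
treatment[14:24, 16:26] = 1.0

planning_feat = block_features(planning, (10, 12), (10, 10))
weights = np.ones(3)
pos = localize(treatment, planning_feat, (10, 10), weights)
```

In this toy setting the search recovers the shifted position `(14, 16)`; the actual framework replaces each of these stand-ins with its learned, patient-adaptive counterpart.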