Image segmentation is a crucial step in many computer-aided medical image analysis tasks, e.g., automated radiation therapy. However, low tissue contrast and abundant artifacts in medical images, e.g., CT or MR images, obscure the true boundaries of the target tissues and degrade the precision of boundary localization during segmentation. To precisely locate blurry or missing boundaries, human observers often rely on high-resolution contextual information from neighboring regions. To extract such information and achieve fine-grained segmentation (high accuracy on boundary regions and small-scale targets), we propose a novel hierarchical dilated network. In the hierarchy, to maintain precise location information, we adopt dilated residual convolutional blocks as basic building blocks, which reduce the network's dependence on downsampling for enlarging the receptive field and extracting semantic information. Then, by concatenating the intermediate feature maps of the serially connected dilated residual convolutional blocks, the resulting hierarchical dilated module (HD-module) encourages smoother information flow and better use of both high-level semantic information and low-level textural information. Finally, we integrate several HD-modules at different resolutions in parallel so that the network finely collects information from multiple (more than 12) scales. The integration is performed by a novel late fusion module proposed in this paper. Experimental results on pelvic organ CT image segmentation demonstrate that our algorithm outperforms state-of-the-art deep learning segmentation algorithms, especially in localizing organ boundaries.
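The two core ideas above, dilated convolutions that enlarge the receptive field without downsampling, and the HD-module's concatenation of intermediate feature maps from serially connected residual blocks, can be illustrated with a minimal 1-D NumPy sketch. This is not the authors' implementation; the function names, the single-channel setting, and the choice of dilation rates (1, 2, 4) are illustrative assumptions.

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """'Same'-padded 1-D dilated convolution, single channel (illustrative)."""
    k = len(w)
    pad = dilation * (k - 1) // 2
    xp = np.pad(x, pad)
    y = np.zeros(len(x))
    for i in range(len(x)):
        for j in range(k):
            # Taps are spaced `dilation` apart: receptive field grows
            # without any downsampling of the feature map.
            y[i] += w[j] * xp[i + j * dilation]
    return y

def hd_module(x, weights, dilations):
    """Sketch of an HD-module: serially connected dilated residual blocks,
    with all intermediate feature maps concatenated at the output."""
    feats = []
    h = x
    for w, d in zip(weights, dilations):
        h = np.maximum(dilated_conv1d(h, w, d) + h, 0.0)  # residual + ReLU
        feats.append(h)  # keep every intermediate map, not just the last
    # Hierarchical concatenation exposes both early (low-level, textural)
    # and late (high-level, semantic) features to the next stage.
    return np.concatenate(feats)

x = np.random.randn(32)
w = np.array([0.1, 0.5, 0.1])
out = hd_module(x, [w, w, w], dilations=[1, 2, 4])
# Stacking kernel-3 convolutions with dilations 1, 2, 4 yields a
# receptive field of 1 + 2*(1+2+4) = 15 samples at full resolution.
```

With three blocks, `out` stacks all three 32-sample intermediate maps into a 96-sample vector; in the full network, each HD-module runs at one resolution and a late fusion module combines the parallel branches.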