Interactive segmentation, which extracts a specific foreground object selected by user input, is widely employed in user-interactive applications such as image editing and ground-truth labeling. Because a single user input often yields an unsatisfactory result, most interactive segmentation methods iteratively refine the previous result using additional user interactions. A recently developed convolutional neural network (CNN)-based interactive segmentation method, deep interactive object selection, achieves high segmentation accuracy with fewer user interactions than earlier non-CNN-based approaches. However, its computational efficiency deteriorates because the feature extraction stage is repeated for every user interaction, and it requires graph cut as a post-processing step to refine boundary segments. To address these problems, this paper presents a deep CNN-based interactive segmentation method employing a simple and effective user interaction-based attention module that eliminates the repetitive feature extraction. In addition, we adopt a Cartesian-to-polar coordinate transformation to further improve segmentation performance. Experimental results demonstrate that the proposed interactive segmentation method is superior to conventional methods in terms of both segmentation accuracy and computational efficiency.
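To give a concrete sense of the coordinate transformation mentioned above, the following is a minimal NumPy sketch of resampling an image from Cartesian to polar coordinates about a user-clicked point; the function name, bin counts, and nearest-neighbour sampling are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cartesian_to_polar(image, center, radial_bins=64, angular_bins=64):
    """Resample an H x W image onto a polar grid centred on a user click.

    Each output row corresponds to one radius and each column to one angle,
    so a roughly circular foreground around the click becomes an axis-aligned
    band in polar space. (Illustrative sketch, not the authors' method.)
    """
    h, w = image.shape[:2]
    cy, cx = center
    # Largest radius needed to cover the whole image from the click point.
    max_r = np.hypot(max(cy, h - 1 - cy), max(cx, w - 1 - cx))
    radii = np.linspace(0.0, max_r, radial_bins)
    angles = np.linspace(0.0, 2.0 * np.pi, angular_bins, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    # Nearest-neighbour sampling back onto the Cartesian pixel grid.
    ys = np.clip(np.round(cy + rr * np.sin(aa)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, w - 1)
    return image[ys, xs]

# A bright disc around the click maps to a band of constant rows in polar space.
img = np.zeros((65, 65), dtype=np.float32)
yy, xx = np.mgrid[:65, :65]
img[(yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2] = 1.0
polar = cartesian_to_polar(img, center=(32, 32))
```

In this toy example the disc centred on the click occupies the small-radius rows of `polar` uniformly across all angles, which is the kind of regularity the polar representation is intended to expose.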