Abstract:
Medical image segmentation is a fundamental and challenging task in many computer-aided diagnosis and surgery systems, and has attracted considerable research attention in the computer vision and medical image processing communities. Recently, deep learning based medical image segmentation has been widely investigated and has delivered state-of-the-art performance across different modalities of medical data. In particular, U-Net, which consists of a contracting path for context capture and a symmetric expanding path for precise localization, has become a meta network architecture for medical image segmentation and yields acceptable results even with a moderate amount of training data. This study proposes a novel attention-modulated network built on the baseline U-Net, and explores embedded spatial and channel attention modules that adaptively highlight interdependent channel maps and focus on more discriminative regions by exploiting relevant feature associations. The proposed spatial and channel attention modules can be used in a plug-and-play manner and embedded after any learned feature map to adaptively emphasize discriminative features and suppress irrelevant information. Furthermore, we propose two aggregation approaches for integrating the learned spatial and channel attention into the raw feature maps. Extensive experiments on two benchmark medical image datasets validate that the proposed network architecture achieves superior performance compared to the baseline U-Net and several of its variants.
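To make the plug-and-play idea concrete, the following is a minimal sketch of channel and spatial attention modules applied to a raw feature map. It is written in a PyTorch style under assumptions of our own: the module names (ChannelAttention, SpatialAttention, AttentionBlock), the reduction ratio, and the multiplicative-plus-residual aggregation shown here are illustrative choices, not the exact formulation evaluated in the paper.

```python
# Minimal sketch (assumed PyTorch-style implementation; names and the
# aggregation scheme are illustrative, not the paper's exact design).
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Re-weights feature channels using globally pooled descriptors."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Per-channel attention weights in [0, 1]
        return self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)


class SpatialAttention(nn.Module):
    """Produces a per-pixel attention map from pooled channel statistics."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = x.mean(dim=1, keepdim=True)
        max_map, _ = x.max(dim=1, keepdim=True)
        return self.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))


class AttentionBlock(nn.Module):
    """Plug-and-play block: refines a raw feature map with both attentions."""

    def __init__(self, channels: int):
        super().__init__()
        self.channel_att = ChannelAttention(channels)
        self.spatial_att = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # One possible aggregation: scale the raw features by both attention
        # maps and keep a residual connection to the original input.
        return x + x * self.channel_att(x) * self.spatial_att(x)


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)    # e.g. a U-Net encoder feature map
    refined = AttentionBlock(64)(feats)   # output keeps the input shape
    print(refined.shape)                  # torch.Size([2, 64, 32, 32])
```

Because the block preserves the shape of its input, it can be inserted after any learned feature map in the encoder or decoder without altering the rest of the network.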