
Detailed record

DoubleU-NetPlus

a novel attention and context-guided dual U-Net with multi-scale residual feature fusion network for semantic segmentation of medical images

Article written by: Ahmed, Md. Rayhan; Ferdous Ashrafi, Adnan; Uddin Ahmed, Raihan; Shatabda, Swakkhar; Muzahidul Islam, A. K. M.; Islam, Salekul

Abstract: Accurate segmentation of the region of interest in medical images can provide an essential pathway for devising effective treatment plans for life-threatening diseases. It remains challenging for U-Net and its modern state-of-the-art variants to effectively model the higher-level output feature maps of the convolutional units of the network, mainly due to the varying scales of the regions of interest, intricate contextual environments, ambiguous boundaries, and diverse textures in medical images. In this paper, we exploit multi-contextual features and several attention strategies to increase the network's ability to model discriminative feature representations for more accurate medical image segmentation, and we present a novel dual-stacked U-Net-based architecture named DoubleU-NetPlus. DoubleU-NetPlus incorporates several architectural modifications. In particular, we integrate EfficientNetB7 as the feature encoder module, a newly designed multi-kernel residual convolution module, and an adaptive feature re-calibrating attention-based atrous spatial pyramid pooling module to progressively and precisely accumulate discriminative multi-scale high-level contextual feature maps and emphasize the salient regions. In addition, we introduce a novel triple attention gate module and a hybrid triple attention module to encourage selective modeling of relevant medical image features. Moreover, to mitigate the vanishing-gradient issue while incorporating high-resolution features with deeper spatial details, the standard convolution operation is replaced with attention-guided residual convolution operations, which enables the model to obtain effective and relevant feature maps from a significantly increased network depth. Empirical results confirm that the proposed model achieves superior semantic segmentation performance compared to other state-of-the-art approaches on six publicly available benchmark datasets of diverse modalities. The proposed network achieves Dice scores of 85.17%, 99.34%, 94.30%, 96.40%, 95.76%, and 97.10% on the DRIVE, LUNA, BUSI, CVC-ClinicDB, 2018 DSB, and ISBI 2012 datasets, respectively.
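
To make the architectural description in the abstract more concrete, the following is a minimal, illustrative PyTorch sketch of the general idea behind a multi-kernel residual convolution block: parallel convolutions with different kernel sizes whose outputs are fused and added to a residual shortcut. The class name, kernel sizes, and fusion strategy here are assumptions chosen for illustration and are not taken from the authors' published implementation.

# Illustrative sketch only -- not the authors' DoubleU-NetPlus code.
import torch
import torch.nn as nn

class MultiKernelResidualConv(nn.Module):
    """Parallel convolutions at several kernel sizes, fused and added to a residual shortcut."""

    def __init__(self, in_ch: int, out_ch: int, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # One conv branch per kernel size; padding keeps the spatial dimensions unchanged.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for k in kernel_sizes
        )
        # 1x1 convolution fuses the concatenated multi-scale features.
        self.fuse = nn.Conv2d(out_ch * len(kernel_sizes), out_ch, kernel_size=1, bias=False)
        # Residual shortcut; projects channels when in_ch != out_ch.
        self.shortcut = (
            nn.Identity() if in_ch == out_ch else nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.act(self.fuse(multi_scale) + self.shortcut(x))

# Example: a 64-channel 128x128 feature map keeps its spatial size through the block.
if __name__ == "__main__":
    block = MultiKernelResidualConv(64, 128)
    print(block(torch.randn(1, 64, 128, 128)).shape)  # torch.Size([1, 128, 128, 128])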


Language: English