
Detailed record

Spatial attention-guided deformable fusion network for salient object detection

Article written by: Yang, Aiping; Liu, Yan; Cheng, Simeng; Cao, Jiale; Ji, Zhong; Pang, Yanwei

Abstract: Most salient object detection methods employ a U-shaped architecture as their underlying structure. Although they achieve promising performance, they struggle to detect salient objects with non-rigid shapes and arbitrary sizes. In addition, features are transmitted to the decoder directly, without any discrimination or active selection, leaving prominent features underutilized. To address these issues, we propose a spatial-attention-guided deformable fusion network for salient object detection, which consists of a contour enhancement module (CEM), a spatial-attention-guided deformable fusion module (SADFM), and a gate module (GM). Specifically, the CEM is designed to obtain global features, aiming to reduce the loss of high-level features during transmission. The SADFM uses spatial attention to guide deformable convolution in adaptively aggregating global, high-level, and low-level features. Furthermore, the GM refines the initial fusion features and predicts the salient regions accurately. Experiments on five public datasets verify the effectiveness of our method.
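To make the fusion idea concrete, below is a minimal PyTorch sketch of what a spatial-attention-guided deformable fusion step could look like: a spatial attention map derived from the concatenated features serves as the modulation mask of a deformable convolution that fuses low-level, high-level, and global features. This is not the authors' implementation; the channel sizes, the offset predictor, and the fusion order are illustrative assumptions based only on the abstract.

```python
# Hedged sketch of a spatial-attention-guided deformable fusion step.
# Assumptions (not from the paper): all three feature maps share the same
# channel count and resolution; a sigmoid spatial attention map modulates
# the deformable convolution via its mask input.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class SADFMSketch(nn.Module):
    def __init__(self, channels: int = 64, kernel_size: int = 3):
        super().__init__()
        k = kernel_size
        # Predict 2D sampling offsets (2 values per kernel location).
        self.offset_conv = nn.Conv2d(3 * channels, 2 * k * k, 3, padding=1)
        # Spatial attention: one weight per kernel location, used as the
        # modulation mask of the deformable convolution.
        self.attn_conv = nn.Conv2d(3 * channels, k * k, 3, padding=1)
        self.deform_conv = DeformConv2d(3 * channels, channels, k, padding=k // 2)

    def forward(self, low, high, glob):
        # low, high, glob: (N, C, H, W) low-level, high-level, and global
        # features, assumed already resized to a common resolution.
        x = torch.cat([low, high, glob], dim=1)
        offset = self.offset_conv(x)
        attn = torch.sigmoid(self.attn_conv(x))   # spatial attention in [0, 1]
        return self.deform_conv(x, offset, mask=attn)


if __name__ == "__main__":
    n, c, h, w = 1, 64, 32, 32
    module = SADFMSketch(channels=c)
    out = module(torch.rand(n, c, h, w), torch.rand(n, c, h, w), torch.rand(n, c, h, w))
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```

In this reading, the attention map lets the fusion emphasize informative spatial positions while the learned offsets let the kernel adapt to non-rigid object shapes; the paper's actual module may differ in structure and detail.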


Language: English