The rapid advancement of deep learning has catalyzed progress in salient object detection (SOD), extending its impact to the domain of optical remote sensing images (ORSIs). Despite increasing attention, SOD for ORSIs (ORSI-SOD) remains highly challenging due to the intrinsic complexities of remote sensing scenes. In particular, severe variations in object scale and quantity, cluttered backgrounds, and irregular object morphologies significantly hinder accurate target localization and boundary delineation. To address these challenges, we introduce the Multi-scale Contextual Fusion Network (MCFNet) for ORSI-SOD. MCFNet incorporates a Semantic-Aware Attention Module (SAM), which provides explicit semantic guidance during feature extraction. By producing preliminary semantic masks, SAM enables the network to capture long-range contextual dependencies, thereby improving localization accuracy for salient objects with substantial scale variation and structural complexity. In addition, MCFNet integrates a Contextual Interconnection Module (CIM), which promotes effective fusion of local and global contextual features. By facilitating cross-layer interactions and adopting a multi-scale refinement strategy, CIM enriches texture representations while suppressing background interference, leading to smoother object boundaries and more precise delineation of salient regions. Extensive evaluations on three standard ORSI-SOD benchmark datasets show that MCFNet outperforms existing methods, highlighting its robustness and efficiency in challenging remote sensing scenarios.
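The abstract does not provide implementation details, so the sketch below is only a rough PyTorch illustration of the two ideas it describes: a semantic-aware attention stage that predicts a preliminary mask and uses it to reweight features, and a multi-scale fusion stage that mixes local high-resolution features with upsampled global context. All module names, channel widths, scales, and operations are assumptions for illustration, not the authors' actual MCFNet, SAM, or CIM definitions.

```python
# Illustrative sketch only; not the published MCFNet/SAM/CIM architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticAwareAttention(nn.Module):
    """Predicts a coarse semantic mask and uses it as spatial attention (hypothetical)."""

    def __init__(self, channels: int):
        super().__init__()
        self.mask_head = nn.Conv2d(channels, 1, kernel_size=1)      # coarse mask logits
        self.refine = nn.Conv2d(channels, channels, 3, padding=1)   # feature refinement

    def forward(self, feat):
        mask = torch.sigmoid(self.mask_head(feat))                  # (B, 1, H, W) in [0, 1]
        attended = self.refine(feat) * mask + feat                  # mask-guided reweighting
        return attended, mask


class MultiScaleContextFusion(nn.Module):
    """Fuses local (high-res) and global (low-res) features at several scales (hypothetical)."""

    def __init__(self, channels: int, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList(
            nn.Conv2d(2 * channels, channels, 3, padding=1) for _ in scales
        )
        self.merge = nn.Conv2d(len(scales) * channels, channels, kernel_size=1)

    def forward(self, local_feat, global_feat):
        h, w = local_feat.shape[-2:]
        global_up = F.interpolate(global_feat, size=(h, w), mode="bilinear", align_corners=False)
        fused = torch.cat([local_feat, global_up], dim=1)           # cross-layer interaction
        outs = []
        for scale, conv in zip(self.scales, self.branches):
            x = F.avg_pool2d(fused, scale) if scale > 1 else fused  # coarser contextual view
            x = conv(x)
            if scale > 1:
                x = F.interpolate(x, size=(h, w), mode="bilinear", align_corners=False)
            outs.append(x)
        return self.merge(torch.cat(outs, dim=1))                   # multi-scale refinement


if __name__ == "__main__":
    feats = torch.randn(2, 64, 64, 64)    # shallow backbone features (hypothetical shapes)
    coarse = torch.randn(2, 64, 16, 16)   # deeper, lower-resolution features
    attended, mask = SemanticAwareAttention(64)(feats)
    fused = MultiScaleContextFusion(64)(attended, coarse)
    print(mask.shape, fused.shape)        # torch.Size([2, 1, 64, 64]) torch.Size([2, 64, 64, 64])
```

The design choice illustrated here is the one the abstract emphasizes: semantic guidance is applied before fusion so that long-range context shapes localization, while the fusion stage combines several receptive-field sizes to smooth boundaries and suppress background clutter.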