
Boosting RGB-D salient object detection with adaptively cooperative dynamic fusion network / Zhu, Jinchao; Zhang, Xiaoyu; Fang, Xian; Rahman, MUHAMMAD RAMEEZ UR; Dong, Feng; Li, Yuehua; Yan, Siyu; Tan, Panlong. - In: KNOWLEDGE-BASED SYSTEMS. - ISSN 0950-7051. - 251:(2022), p. 109205. [10.1016/j.knosys.2022.109205]

Boosting RGB-D salient object detection with adaptively cooperative dynamic fusion network

Muhammad Rameez Ur Rahman
2022

Abstract

The appropriate use of RGB and depth data is of great significance for advancing computer vision tasks and robot-environment interactions. However, early and late fusion of the two modalities each have distinct advantages and disadvantages. In addition, because object information is diverse, relying on a single modality in a given scenario can be semantically misleading. Based on these considerations, we propose a transformer-based adaptively cooperative dynamic fusion network (ACDNet) with a dynamic composite structure (DCS) for salient object detection. First, the DCS is designed to flexibly exploit the advantages of feature fusion at different stages. Second, an adaptively cooperative semantic guidance (ACG) scheme is designed to suppress inaccurate features during multilevel multimodal feature fusion. Furthermore, we propose a perceptual aggregation module (PAM) that optimizes the network from the perspectives of spatial and scale perception, strengthening its ability to perceive multiscale objects. Extensive experiments on 8 RGB-D SOD datasets show that the proposed network outperforms 24 state-of-the-art algorithms. (C) 2022 Elsevier B.V. All rights reserved.
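The abstract's keywords mention a gated mechanism for combining the RGB and depth streams. As a rough illustration only (the paper's actual ACDNet/DCS design is not specified here), the sketch below shows a generic elementwise gated fusion of two feature streams; the function name `gated_fusion`, the linear gate parameterization, and all shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    # Numerically plain logistic function; maps gate logits into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(rgb_feat, depth_feat, w, b):
    """Illustrative gated fusion (NOT the paper's module).

    A sigmoid gate, computed from both streams, decides per feature
    how much to trust the RGB stream versus the depth stream:
        fused = g * rgb + (1 - g) * depth,  g in (0, 1)
    so each fused value is a convex combination of the two inputs.
    """
    gate = sigmoid(np.concatenate([rgb_feat, depth_feat], axis=-1) @ w + b)
    return gate * rgb_feat + (1.0 - gate) * depth_feat

# Toy example with random features and untrained gate weights.
rng = np.random.default_rng(0)
C = 8                                   # hypothetical feature width
rgb = rng.standard_normal((4, C))       # 4 spatial positions, C channels
depth = rng.standard_normal((4, C))
w = rng.standard_normal((2 * C, C)) * 0.1
b = np.zeros(C)
fused = gated_fusion(rgb, depth, w, b)  # shape (4, C)
```

Because the gate is a sigmoid, the fused feature always lies elementwise between the RGB and depth values, which is one simple way a network can adaptively suppress the less reliable modality at each position.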
2022
RGB-D salient object detection; Gated mechanism; Dilated convolution; Early fusion and late fusion
01 Journal publication::01a Journal article
Files attached to this product
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1657578
Warning! The displayed data have not been validated by the university.

Citazioni
  • PMC: ND
  • Scopus: 5
  • Web of Science: 2