TY - JOUR
T1 - RGB×D: Learning depth-weighted RGB patches for RGB-D indoor semantic segmentation
AU - Cao, Jinming
AU - Leng, Hanchao
AU - Cohen-Or, Daniel
AU - Lischinski, Dani
AU - Chen, Ying
AU - Tu, Changhe
AU - Li, Yangyan
N1 - Publisher Copyright:
© 2021 Elsevier B.V.
PY - 2021/10/28
Y1 - 2021/10/28
N2 - Significant advances have been made in designing CNNs for RGB semantic segmentation. However, these CNNs are not widely adopted for RGB-D segmentation, due to the asymmetry between the RGB and depth modalities. Instead, dedicated architectures are designed to fuse the two modalities for effective RGB-D segmentation; these often employ complex structures, resulting in much higher computational cost. In this paper, we propose a novel way to learn the fusion of RGB and depth information at an early stage, which enables our method to adopt existing RGB segmentation networks with minimal modification. Our method is simple yet effective, building a bridge between RGB and RGB-D semantic segmentation and avoiding the need to design a far more complex network structure for RGB-D segmentation. The proposed method treats RGB and depth information in an inherently asymmetric manner and, to the best of our knowledge, is the first approach that learns to fuse them multiplicatively for RGB-D segmentation; thus, we call it RGB×D. Extensive experiments and ablation studies on the challenging NYUDv2, SUN RGB-D and Cityscapes semantic segmentation benchmarks show that the proposed RGB×D offers a consistent improvement over several baselines.
KW - Deep learning
KW - Depth information
KW - RGB-D indoor semantic segmentation
UR - http://www.scopus.com/inward/record.url?scp=85113296202&partnerID=8YFLogxK
DO - 10.1016/j.neucom.2021.08.009
M3 - Article
AN - SCOPUS:85113296202
SN - 0925-2312
VL - 462
SP - 568
EP - 580
JO - Neurocomputing
JF - Neurocomputing
ER -