The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume XLIII-B2-2021
https://doi.org/10.5194/isprs-archives-XLIII-B2-2021-427-2021
28 Jun 2021

TOWARDS LEARNING LOW-LIGHT INDOOR SEMANTIC SEGMENTATION WITH ILLUMINATION-INVARIANT FEATURES

N. Zhang, F. Nex, N. Kerle, and G. Vosselman

Keywords: Semantic Segmentation, Dataset, Low-light, Image Decomposition, Deep Learning, Scene Understanding

Abstract. Semantic segmentation models are often affected by illumination changes and fail to predict correct labels. Although indoor semantic segmentation has been studied extensively, little attention has been paid to low-light environments. In this paper, we propose a new framework, LISU, for Low-light Indoor Scene Understanding. We first decompose the low-light images into reflectance and illumination components, and then jointly learn reflectance restoration and semantic segmentation. To train and evaluate the proposed framework, we also propose a new dataset, LLRGBD, which consists of a large synthetic low-light indoor dataset (LLRGBD-synthetic) and a small real dataset (LLRGBD-real). The experimental results show that the illumination-invariant features effectively improve the performance of semantic segmentation. Compared with the baseline model, the proposed LISU framework improves the mIoU by 11.5%. In addition, pre-training on our synthetic dataset increases the mIoU by a further 7.2%. Our datasets and models are available on our project website.
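To make the pipeline described in the abstract concrete, the following is a minimal PyTorch-style sketch of a Retinex-like decomposition into reflectance and illumination, followed by segmentation on the reflectance with a joint reconstruction-plus-segmentation loss. All module names, layer sizes, and loss weights are illustrative assumptions for this sketch, not the authors' actual LISU architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecompNet(nn.Module):
    """Hypothetical Retinex-style decomposition:
    predicts 3-channel reflectance and 1-channel illumination from a low-light image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(32, 4, 3, padding=1)  # 3 reflectance + 1 illumination channels

    def forward(self, x):
        feats = self.head(self.encoder(x))
        reflectance = torch.sigmoid(feats[:, :3])
        illumination = torch.sigmoid(feats[:, 3:4])
        return reflectance, illumination

class SegNet(nn.Module):
    """Hypothetical segmentation branch operating on the (restored) reflectance."""
    def __init__(self, num_classes):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, num_classes, 1),
        )

    def forward(self, reflectance):
        return self.body(reflectance)  # per-pixel class logits

def joint_loss(low, reflectance, illumination, logits, labels):
    """Joint objective: Retinex reconstruction (R * I should re-compose the low-light
    input) plus standard cross-entropy for segmentation. Equal weighting is a placeholder."""
    recon = F.l1_loss(reflectance * illumination, low)
    seg = F.cross_entropy(logits, labels, ignore_index=255)
    return recon + seg

# Usage on a dummy batch
decomp, seg_net = DecompNet(), SegNet(num_classes=14)
low = torch.rand(2, 3, 120, 160)              # low-light RGB input
labels = torch.randint(0, 14, (2, 120, 160))  # per-pixel class labels
R, I = decomp(low)
logits = seg_net(R)
loss = joint_loss(low, R, I, logits, labels)
loss.backward()
```

The key design point reflected here is that the segmentation branch consumes the illumination-invariant reflectance rather than the raw low-light image, and that both tasks are optimized jointly so the decomposition is driven toward features useful for labeling, not only for restoration.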