The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLVI-3/W1-2022, 37–43, 2022
https://doi.org/10.5194/isprs-archives-XLVI-3-W1-2022-37-2022
22 Apr 2022

A HIGH PRECISION VISUAL LOCALIZATION METHOD OPTIMIZED BY MULTI-FEATURES

Y. Deng, S. Tang, W. Wang, X. Li, and R. Guo
  • School of Architecture and Urban Planning, Research Institute for Smart Cities, Shenzhen University & Key Laboratory of Urban Land Resources Monitoring and Simulation, Ministry of Natural Resources, Shenzhen, P.R. China

Keywords: Visual Localization, Image Retrieval, Place Recognition, Indoor Localization, Pose Estimation, Image Matching

Abstract. The demand for indoor localization has increased in fields such as indoor navigation, virtual reality, and emergency response. Traditional hardware-based indoor positioning methods require a large number of deployed devices and incur high maintenance costs. Vision-based localization methods offer a low-cost alternative. Visual localization has two typical pipelines: end-to-end learning and traditional pose estimation based on PnP (Perspective-n-Point). However, the quality of the retrieved images and of the 2D-3D correspondences is vital to the precision and recall of the traditional method. In this paper we partly overcome this drawback by eliminating erroneously retrieved images with multiple features, and we use several retrieved images to collect enough 2D-3D correspondences to improve robustness against erroneous input. We also filter outliers when forming the 2D-3D correspondences, using RANSAC and Lowe's ratio test. As a supplement to indoor visual localization dataset production, we introduce a pipeline that can generate point clouds and mesh models via our integrated RGB-D cameras.
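As a quick illustration of the outlier filtering mentioned above, Lowe's ratio test keeps a descriptor match only when its nearest neighbour is clearly closer than the second-nearest. The sketch below is a minimal NumPy version with illustrative names and synthetic descriptors, not the implementation used in this paper (which, like most pipelines, would pair it with RANSAC-based PnP):

```python
import numpy as np

def lowe_ratio_filter(desc_query, desc_train, ratio=0.8):
    """Return (query_idx, train_idx) pairs passing Lowe's ratio test."""
    matches = []
    for i, d in enumerate(desc_query):
        # Distances from this query descriptor to all train descriptors.
        dists = np.linalg.norm(desc_train - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]  # nearest and second-nearest
        # Accept only if the best match is distinctly better than the runner-up.
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, j1))
    return matches

# Tiny synthetic example: two query descriptors, three train descriptors.
desc_q = np.array([[0.0, 0.0], [5.0, 5.0]])
desc_t = np.array([[0.1, 0.0], [10.0, 10.0], [5.0, 5.1]])
print(lowe_ratio_filter(desc_q, desc_t))  # → [(0, 0), (1, 2)]
```

The surviving matches would then feed a robust pose solver (e.g. RANSAC-wrapped PnP) so that any remaining false correspondences are rejected as geometric outliers.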