The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-2/W13, 1681–1686, 2019
https://doi.org/10.5194/isprs-archives-XLII-2-W13-1681-2019

05 Jun 2019

DEEP LIDAR ODOMETRY

Q. Li1, C. Wang1,2, S. Chen1, X. Li3, C. Wen1, M. Cheng1, and J. Li1,4
  • 1Fujian Key Laboratory of Sensing and Computing for Smart City and the School of Information Science and Engineering, Xiamen University, Xiamen 361005, China
  • 2Fujian Collaborative Innovation Center for Big Data Applications in Governments, Fuzhou 350003, China
  • 3Geometric and Visual Computing Group, Louisiana State University, LA 70808, USA
  • 4GeoSTARS Lab, the Department of Geography and Environmental Management, University of Waterloo, Canada

Keywords: Lidar, Odometry, Motion Estimation, Neural Network, Mask, Point Cloud

Abstract. Most existing lidar odometry estimation strategies follow a standard framework of feature selection followed by pose estimation through feature matching. In this work, we present a novel pipeline called LO-Net for lidar odometry estimation from 3D lidar scanning data using deep convolutional networks. The network is trained in an end-to-end manner and infers 6-DoF poses from the encoded sequential lidar data. Based on the newly designed mask-weighted geometric constraint loss, the network automatically learns effective feature representations for the lidar odometry estimation problem and implicitly exploits the sequential dependencies and dynamics. Experiments on benchmark datasets demonstrate that LO-Net achieves accuracy comparable to geometry-based approaches.
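The abstract's mask-weighted geometric constraint loss can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's exact formulation: it assumes per-point geometric residuals between consecutive scans are weighted by a predicted mask in (0, 1], with a log-barrier regularizer preventing the mask from collapsing to zero everywhere. The function name and parameters are illustrative.

```python
import numpy as np

def mask_weighted_geometric_loss(residuals, mask, reg_weight=1.0):
    """Hypothetical sketch of a mask-weighted geometric constraint loss.

    residuals : (N,) per-point geometric alignment errors between
                consecutive lidar scans (e.g. point-to-plane distances).
    mask      : (N,) predicted per-point weights in (0, 1]; small values
                down-weight dynamic or otherwise unreliable points.
    reg_weight: strength of the regularizer that keeps the mask from
                trivially collapsing to zero.
    """
    eps = 1e-8
    # Mask-weighted residual term: unreliable points contribute less.
    weighted = np.mean(mask * residuals**2)
    # Regularizer: -log(mask) grows as mask values shrink toward zero,
    # so the network cannot silence the geometric term for free.
    regularizer = -np.mean(np.log(mask + eps))
    return weighted + reg_weight * regularizer
```

With a mask of all ones the regularizer vanishes and the loss reduces to the plain mean squared residual; lowering the mask on high-residual points trades geometric error against the regularization penalty.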