Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XL-5, 479-485, 2014
https://doi.org/10.5194/isprsarchives-XL-5-479-2014
© Author(s) 2014. This work is distributed under
the Creative Commons Attribution 3.0 License.
 
06 Jun 2014
Kinect Fusion improvement using depth camera calibration
D. Pagliari1, F. Menna2, R. Roncella3, F. Remondino2, and L. Pinto1
1 DICA-sez. Geodesia e Geomatica, Politecnico di Milano, Milan, Italy
2 3D Optical Metrology (3DOM) unit, Bruno Kessler Foundation (FBK), Trento, Italy
3 DICATeA, Parma University, Parma, Italy
Keywords: Calibration, Depth Map, Kinect, 3D Modelling, Fusion Libraries

Abstract. 3D modelling of scenes, gesture recognition and motion tracking are fields in rapid and continuous development, driven by the growing demand for interactivity in the video-game and e-entertainment market. Starting from the idea of a sensor that allows users to play without holding any remote controller, Microsoft created the Kinect device. The Kinect has always attracted researchers in different fields, from robotics to Computer Vision (CV) and biomedical engineering, as well as third-party communities that have released several Software Development Kit (SDK) versions for the Kinect in order to use it not only as a gaming device but also as a measurement system. The Microsoft Kinect Fusion control libraries (first released in March 2013) allow using the device as a 3D scanner, producing meshed polygonal models of a static scene simply by moving the Kinect around it. A drawback of this sensor is the geometric quality of the delivered data and its low repeatability. For this reason, the authors carried out an investigation to evaluate the accuracy and repeatability of the depth measurements delivered by the Kinect. The paper presents a thorough calibration analysis of the Kinect imaging sensor, with the aim of establishing the accuracy and precision of the delivered information: a straightforward calibration of the depth sensor is presented, and the 3D data are then corrected accordingly. By integrating the depth correction algorithm and correcting the interior and exterior orientation parameters of the IR camera, the Fusion libraries are corrected and new reconstruction software is created to produce more accurate models.
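The depth-correction step described in the abstract can be sketched as follows. This is a minimal illustration only: the linear error model and its coefficients are assumptions for the sake of the example, not the calibration model or values from the paper, which would be estimated by comparing raw Kinect depths against reference distances.

```python
import numpy as np

# Hypothetical linear depth-error model: corrected = A_MM + B * measured.
# Both coefficients are placeholders, not the paper's calibration results.
A_MM = -5.0   # additive bias in millimetres (assumed)
B = 1.002     # multiplicative scale factor (assumed)

def correct_depth(depth_map_mm: np.ndarray) -> np.ndarray:
    """Apply the assumed linear correction to a raw depth map (mm).

    Zero-valued pixels mark invalid measurements (no depth return)
    and are left untouched.
    """
    corrected = A_MM + B * depth_map_mm
    return np.where(depth_map_mm > 0, corrected, 0.0)

# Example: a tiny 2x2 depth map with one invalid (zero) pixel.
raw = np.array([[1000.0, 2000.0],
                [0.0,    1500.0]])
fixed = correct_depth(raw)
```

In practice such a correction would be applied to every incoming depth frame before it is handed to the Fusion reconstruction pipeline, so that the volumetric integration accumulates calibrated rather than raw distances.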
Conference paper (PDF, 1177 KB)


Citation: Pagliari, D., Menna, F., Roncella, R., Remondino, F., and Pinto, L.: Kinect Fusion improvement using depth camera calibration, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XL-5, 479-485, https://doi.org/10.5194/isprsarchives-XL-5-479-2014, 2014.