Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XL-4/W5, 55-60, 2015
© Author(s) 2015. This work is distributed under
the Creative Commons Attribution 3.0 License.
11 May 2015
Video-based point cloud generation using multiple action cameras
T. Teo, Dept. of Civil Engineering, National Chiao Tung University, Hsinchu, 30010, Taiwan
Keywords: Action cameras, Image, Video, Point clouds

Abstract. Due to the development of action cameras, the use of video technology for collecting geo-spatial data is becoming an important trend. The objective of this study is to compare the image mode and video mode of multiple action cameras for 3D point cloud generation. Frame images are acquired from discrete camera stations, while videos are taken along continuous trajectories. The proposed method includes five major parts: (1) camera calibration, (2) video conversion and alignment, (3) orientation modelling, (4) dense matching, and (5) evaluation. As action cameras usually have a large FOV in wide viewing mode, camera calibration plays an important role in removing the effect of lens distortion before image matching. Once the cameras have been calibrated, the author uses these action cameras to take videos in an indoor environment. The videos are then converted into multiple frame images based on their frame rates. To overcome the time synchronization issue between videos from different viewpoints, an additional timer app is used to determine the time-shift factor between cameras during time alignment. A structure from motion (SfM) technique is utilized to obtain the image orientations. Then, a semi-global matching (SGM) algorithm is adopted to obtain dense 3D point clouds. The preliminary results indicate that the 3D points from 4K video are similar to those from 12 MP images, but the data acquisition performance of 4K video is more efficient than that of 12 MP digital images.
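The time-alignment step described above reduces to simple arithmetic: a time shift measured with the timer app is converted into a frame-index offset using the video frame rate, after which frames from the two cameras can be paired. A minimal sketch, with hypothetical function names not taken from the paper:

```python
def frame_offset(time_shift_s: float, fps: float) -> int:
    """Convert a measured time shift between two cameras (e.g. read off
    a timer app visible in both videos) into a frame-index offset."""
    return round(time_shift_s * fps)

def aligned_pairs(n_frames_a: int, n_frames_b: int, offset: int) -> list:
    """Pair frame indices of camera A with camera B, assuming camera B
    started recording `offset` frames later than camera A."""
    return [(i, i - offset) for i in range(n_frames_a) if 0 <= i - offset < n_frames_b]

# Example: a 0.5 s shift at 30 fps means camera B lags by 15 frames.
off = frame_offset(0.5, 30.0)          # 15
pairs = aligned_pairs(100, 100, off)   # first pair is (15, 0)
```

The paired frames can then be fed jointly into the SfM and dense-matching stages; the exact pairing logic used in the paper is not specified, so this is only an illustration of the principle.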

Citation: Teo, T.: Video-based point cloud generation using multiple action cameras, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XL-4/W5, 55-60, 2015.
