Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XL-4/W5, 103-106, 2015
© Author(s) 2015. This work is distributed
under the Creative Commons Attribution 3.0 License.
11 May 2015
S. Chhatkuli, T. Satoh, and K. Tachibana
PASCO CORPORATION, Research & Development HQ, 2-8-10 Higashiyama, Meguro-ku, Tokyo, Japan
Keywords: 3D model, 3D TIN, Data fusion, Point cloud

Abstract. The aim of this paper is to introduce a novel technique for integrating two different data sets, i.e. a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create 3D city models. Aerial imagery produces decent overall 3D city models and is generally well suited to modelling building roofs and non-complex terrain. However, 3D models generated automatically from aerial imagery typically lack accuracy for roads under bridges, details under tree canopies, isolated trees, etc. In many cases they also suffer from undulating road surfaces, non-conforming building shapes, and the loss of minute details such as street furniture. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, details under bridges, and the like. However, laser-scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, rooftops, and so forth. Our proposed approach to integrating multi-sensor data compensates for each data set's weaknesses and helps create a very detailed 3D model with better accuracy. Moreover, additional details such as isolated trees and street furniture, which were missing in the original 3D model derived from aerial imagery, could also be integrated into the final model automatically. During the process, noise in the laser-scanned data, for example people and vehicles on the road, was also removed automatically. Hence, even though the two data sets were acquired at different times, the integrated data set, i.e. the final 3D model, was generally noise free and without unnecessary details.
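The abstract does not describe the integration algorithm itself, but the core idea, letting one data set fill in regions the other misses, can be illustrated with a minimal, purely hypothetical sketch: keep all points from a base point set and add only those supplementary points that lie in gaps of the base coverage. The function name, the `gap_radius` threshold, and the nearest-neighbour criterion are illustrative assumptions, not the authors' method.

```python
import numpy as np
from scipy.spatial import cKDTree

def fuse_point_sets(base, supplement, gap_radius=0.5):
    """Illustrative gap-filling fusion (not the paper's algorithm).

    Keeps every point of `base` (e.g. an aerial-imagery-derived model,
    sampled as points) and adds only those `supplement` points (e.g.
    mobile laser scan) whose nearest base point is farther than
    `gap_radius` metres, i.e. points covering regions the base misses.
    """
    tree = cKDTree(base)                      # spatial index on base points
    dist, _ = tree.query(supplement, k=1)     # distance to nearest base point
    return np.vstack([base, supplement[dist > gap_radius]])

# Example: one base point at the origin; the nearby supplement point is
# redundant and dropped, the distant one fills a coverage gap and is kept.
base = np.zeros((1, 3))
supplement = np.array([[0.1, 0.0, 0.0], [5.0, 0.0, 0.0]])
fused = fuse_point_sets(base, supplement, gap_radius=0.5)
```

In this toy case `fused` contains two points: the base point plus the distant supplement point. A real pipeline would of course need co-registration of the two data sets and transient-object (people, vehicles) filtering before such a merge.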

Citation: Chhatkuli, S., Satoh, T., and Tachibana, K.: MULTI SENSOR DATA INTEGRATION FOR AN ACCURATE 3D MODEL GENERATION, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XL-4/W5, 103-106, doi:10.5194/isprsarchives-XL-4-W5-103-2015, 2015.
