Volume XLII-3
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-3, 2257-2261, 2018
https://doi.org/10.5194/isprs-archives-XLII-3-2257-2018
© Author(s) 2018. This work is distributed under
the Creative Commons Attribution 4.0 License.

30 Apr 2018

EXTRACTING 3D SEMANTIC INFORMATION FROM VIDEO SURVEILLANCE SYSTEM USING DEEP LEARNING

J. S. Zhang1, J. Cao1, B. Mao1, and D. Q. Shen2
  • 1Nanjing University of Finance & Economics, College of Information Engineering, Collaborative Innovation Center for Modern Grain Circulation and Safety, Jiangsu Key Laboratory of Modern Logistics, Nanjing, 210023, China
  • 2Nanjing University of Science & Technology, Nanjing, 210094, China

Keywords: 3-D space, Camera calibration, Target recognition, Target tracking

Abstract. At present, intelligent video analysis technology is widely used in many fields. Object tracking is an important part of intelligent video surveillance, but traditional target tracking based on the pixel coordinate system of the image still suffers from some unavoidable problems: pixel-based tracking cannot reflect the real position of targets, and it is difficult to track objects across scenes. Based on an analysis of Zhengyou Zhang's camera calibration method, this paper presents a target tracking method that works in the target's spatial coordinate system after converting the 2-D pixel coordinates of the target into 3-D coordinates. The experimental results show that our method can recover the real position changes of targets well and can also accurately obtain the trajectory of a target in space.
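The 2-D to 3-D conversion described in the abstract can be illustrated with a short sketch. Assuming the camera intrinsics and extrinsics have been obtained with Zhang's calibration method (e.g., via OpenCV's calibrateCamera) and that tracked targets move on a flat ground plane (Z = 0), a pixel coordinate can be back-projected to a world coordinate through the homography induced by that plane. The function and parameter values below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: back-project a pixel onto the ground plane (Z = 0),
# assuming intrinsics K and extrinsics (R, t) estimated with Zhang's method.
import numpy as np

def pixel_to_ground(u, v, K, R, t):
    """Map an image pixel (u, v) to world coordinates (X, Y) on the Z = 0 plane.

    K : 3x3 intrinsic matrix, R : 3x3 rotation, t : length-3 translation,
    all expressed in the world frame used during calibration.
    """
    # Plane-induced homography: s * [u, v, 1]^T = K [r1 r2 t] [X, Y, 1]^T
    H = K @ np.column_stack((R[:, 0], R[:, 1], np.asarray(t).reshape(3)))
    w = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return w[0] / w[2], w[1] / w[2]   # (X, Y) in world units, e.g. metres

# Example with placeholder calibration values (illustration only)
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
print(pixel_to_ground(700, 400, K, R, t))
```

Applying this mapping to the pixel position of a tracked target in each frame yields a trajectory in world coordinates rather than image coordinates, which is the kind of spatial trajectory the paper aims to recover.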