LEARNING THE 3D POSE OF VEHICLES FROM 2D VEHICLE PATCHES
Keywords: Pose Estimation, Deep Learning, Trajectory Extraction, Surveillance Video Analysis, Trajectory Analysis
Abstract. Estimating vehicle poses is crucial for generating precise movement trajectories from (surveillance) camera data. Additionally, for real-time applications this task has to be solved efficiently. In this paper we introduce a deep convolutional neural network for the pose estimation of vehicles from image patches. For a given 2D image patch our approach estimates the 2D image coordinates of the vehicle's center ground point (cx, cy) and the orientation of the vehicle, represented by the elevation angle (e) of the camera with respect to the vehicle's center ground point and the azimuth rotation (a) of the vehicle with respect to the camera. Training an accurate model requires a large and diverse training dataset, and collecting and labeling such a large amount of data is very time-consuming and expensive. Due to the lack of a sufficient amount of real training data, we furthermore show that rendered 3D vehicle models with artificially generated textures are nearly adequate for training.
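The four-parameter pose representation described above can be sketched as a decoding step that maps raw network outputs to bounded quantities. The function below is a hypothetical illustration (the paper does not specify the output activations): it assumes the network emits four unconstrained reals per patch, squashes the center ground point into the patch extent with a sigmoid, bounds the elevation to [0, π/2], and wraps the azimuth into (−π, π].

```python
import numpy as np

def decode_pose(raw, patch_w, patch_h):
    """Map 4 raw network outputs to (cx, cy, elevation, azimuth).

    Hypothetical decoding, assuming the network head emits four
    unconstrained real values for each input patch.
    """
    # Sigmoid keeps the center ground point inside the patch.
    cx = patch_w / (1.0 + np.exp(-raw[0]))
    cy = patch_h / (1.0 + np.exp(-raw[1]))
    # Elevation of the camera w.r.t. the center ground point: [0, pi/2].
    e = (np.pi / 2.0) / (1.0 + np.exp(-raw[2]))
    # Azimuth rotation of the vehicle w.r.t. the camera, wrapped to (-pi, pi].
    a = np.arctan2(np.sin(raw[3]), np.cos(raw[3]))
    return cx, cy, e, a

# Example: zero logits land at the patch center with mid-range elevation.
cx, cy, e, a = decode_pose(np.zeros(4), patch_w=64, patch_h=64)
```

A sine/cosine (or wrapped-angle) treatment of the azimuth avoids the discontinuity at ±180°, which is why the sketch wraps it rather than regressing the angle directly.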