Volume XLII-2/W12
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-2/W12, 53-59, 2019
https://doi.org/10.5194/isprs-archives-XLII-2-W12-53-2019
© Author(s) 2019. This work is distributed under
the Creative Commons Attribution 4.0 License.

09 May 2019

THE STUDY OF ACTIVATION FUNCTIONS IN DEEP LEARNING FOR PEDESTRIAN DETECTION AND TRACKING

M. N. Favorskaya and V. V. Andreev
Reshetnev Siberian State University of Science and Technology, Institute of Informatics and Telecommunications, 31, Krasnoyarsky Rabochy ave., Krasnoyarsk, 660037, Russian Federation

Keywords: Deep Learning, Activation Function, Pedestrian Detection, Feature Extraction, Pedestrian Tracking

Abstract. Pedestrian detection and tracking remains a prominent research topic due to its paramount importance in video surveillance, human-machine interaction, and tracking analysis. At present, pedestrian detection is still an open problem because of the many challenges of image representation in outdoor and indoor scenes. In recent years, deep learning, in particular Convolutional Neural Networks (CNNs), has become the state of the art in terms of accuracy in many computer vision tasks, while the unsupervised learning of CNNs remains an open issue. In this paper, we study the problem of feature extraction using a special activation function. Most CNNs share the same architectural pattern, in which each convolutional layer is followed by a nonlinear activation layer. The Rectified Linear Unit (ReLU) is the most widely used activation function, serving as a fast alternative to the sigmoid function. We propose a bounded randomized leaky ReLU in which the slope of the linear part corresponding to the highest input values is tuned during the learning stage, and this linear part can be directed not only upward but also downward by means of a variable bias for its starting point. The bounded randomized leaky ReLU was tested on the Caltech Pedestrian Dataset with promising results.
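
The exact formulation of the bounded randomized leaky ReLU is given in the body of the paper rather than in this abstract. The Python/NumPy snippet below is only a minimal sketch of the idea described above, assuming a hypothetical leaky slope alpha for negative inputs, an upper bound beyond which a separate linear segment begins, a tunable (possibly negative) slope beta for that segment, and a variable bias for its starting point; none of these names or values are taken from the paper itself.

import numpy as np

def bounded_randomized_leaky_relu(x, alpha=0.01, beta=0.1, bias=0.0, upper=6.0):
    # Sketch only (assumed parameterization, not the authors' exact formula):
    # alpha : slope of the leaky part for negative inputs
    # beta  : tunable slope of the linear part for the highest inputs
    #         (may be negative, directing that part downward)
    # bias  : variable bias defining the starting point of that linear part
    # upper : bound above which the extra linear segment takes over
    y = np.where(x < 0.0, alpha * x, x)                              # leaky ReLU below the bound
    y = np.where(x > upper, upper + bias + beta * (x - upper), y)    # tunable upper segment
    return y

# During training, beta could be drawn randomly and then tuned, e.g.
# beta = np.random.uniform(-0.2, 0.2)
x = np.linspace(-3.0, 10.0, 5)
print(bounded_randomized_leaky_relu(x, beta=-0.05, bias=0.1))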