Visible and Thermal Camera-based Jaywalking Estimation using a Hierarchical Deep Learning Framework

Vijay John, Simon Thompson, Annamalai Lakshmanan, Seiichi Mita

Abstract: Jaywalking is an abnormal pedestrian behavior that significantly increases the risk of road accidents. Owing to this risk, autonomous driving applications should robustly detect jaywalking pedestrians. However, robustly estimating jaywalking is not trivial, especially with visible camera-based estimation. In this work, a two-step hierarchical deep learning formulation using visible and thermal cameras is proposed to address these challenges. The two steps comprise a deep learning-based scene classifier and two scene-specific semantic segmentation frameworks. The scene classifier classifies the visible-thermal image into legal pedestrian crossing and illegal pedestrian crossing scenes. The scene-specific segmentation frameworks then estimate normal and jaywalking pedestrians, with each framework trained individually on legal or illegal crossing scenes, respectively. The proposed framework is validated on the FLIR public dataset and compared with baseline algorithms. The experimental results show that the proposed hierarchical strategy achieves better accuracy than the baseline algorithms while operating in real time.
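The inference flow of the two-step hierarchy can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a fused 4-channel input (RGB visible image stacked with a single-channel thermal image), and the SceneClassifier and SegmentationNet modules below are small placeholder networks standing in for the actual scene classifier and scene-specific segmentation models.

# Minimal sketch of the two-step hierarchical inference described in the
# abstract. All network architectures and the 4-channel fusion scheme are
# assumptions for illustration, not the paper's actual models.
import torch
import torch.nn as nn

class SceneClassifier(nn.Module):
    # Classifies a fused visible-thermal image as a legal (0) or illegal (1) crossing scene.
    def __init__(self, in_ch=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, 2)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class SegmentationNet(nn.Module):
    # Per-pixel segmentation into background, normal pedestrian, and jaywalking pedestrian.
    def __init__(self, in_ch=4, num_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),
        )

    def forward(self, x):
        return self.net(x)

def hierarchical_inference(fused, classifier, seg_legal, seg_illegal):
    # Step 1: classify the scene; Step 2: run the matching scene-specific segmenter.
    scene = classifier(fused).argmax(dim=1).item()   # 0 = legal, 1 = illegal crossing scene
    segmenter = seg_illegal if scene == 1 else seg_legal
    return scene, segmenter(fused).argmax(dim=1)     # per-pixel class map

if __name__ == "__main__":
    fused = torch.rand(1, 4, 256, 320)               # RGB + thermal, stacked channel-wise
    scene, mask = hierarchical_inference(
        fused, SceneClassifier(), SegmentationNet(), SegmentationNet()
    )
    print(scene, mask.shape)

Routing the fused image through one of two scene-specific segmenters, rather than a single shared segmenter, reflects the paper's premise that training each segmentation network only on its own scene type (legal or illegal crossing) simplifies the pedestrian-versus-jaywalker decision.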