Driver Attention Monitoring using Facial Features
How can we assess the quality of human driving using AI? Driver inattention is one of the leading causes of vehicle crashes and incidents worldwide. Driver inattention encompasses driver fatigue, which leads to drowsiness, and driver distraction, such as cellphone use or rubbernecking, all of which result in a loss of situational awareness. Hitherto, techniques presented to monitor driver attention have evaluated factors such as fatigue and distraction independently. However, to develop a robust driver attention monitoring system, all the factors affecting a driver's attention need to be analyzed holistically. In this thesis, we present two novel approaches for driver attention analysis on the road: one using driver video alone, and one fusing driver and road video.
In the first approach, we propose a driver attention rating system that leverages the front camera of a windshield-mounted smartphone to monitor driver attention by combining several features. We derive a driver attention rating by fusing spatio-temporal features based on the driver's state and behavior, such as head pose, eye gaze, eye closure, yawns, and cellphone use. We present several architectures for feature aggregation, including AutoRate and Attention-based AutoRate.

We perform an extensive evaluation of the feature aggregation networks on real-world driving data, as well as on data collected in controlled, static vehicle settings with 30 drivers in a large city. We compare the proposed method's automatically-generated rating with the scores given by 5 human annotators, using the kappa coefficient as an evaluation metric to compute the inter-rater agreement between the generated rating and the ratings provided by the human annotators. We observe that Attention-based AutoRate outperforms the other proposed feature aggregation designs by 10%. Further, we use the learned temporal and spatial attention to visualize the key frame and the key action, which justifies the model's predicted rating. Finally, to provide driver-specific results, we fine-tune the Attention-based AutoRate model on individual driver data to deliver a personalized driving experience.
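The inter-rater agreement metric mentioned above, Cohen's kappa, measures how often two raters agree beyond what chance alone would produce. A minimal sketch of the unweighted form, with the ratings and the 1-5 attention scale being purely illustrative (the thesis does not specify its exact kappa variant or scale):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items where the raters match.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical ratings on a 1-5 attention scale for 8 driving clips.
model_rating = [5, 4, 4, 3, 5, 2, 4, 3]
human_rating = [5, 4, 3, 3, 5, 2, 4, 4]
print(round(cohen_kappa(model_rating, human_rating), 3))  # → 0.652
```

A kappa of 1 indicates perfect agreement, 0 indicates agreement no better than chance; for ordinal ratings like these, a weighted variant that penalizes large disagreements more heavily is also common.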
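The temporal attention that surfaces a key frame can be pictured as a learned weighted pooling over per-frame features: frames that score highly dominate the clip representation, and the largest weight marks the key frame. A minimal NumPy sketch with a single linear scorer; all names, dimensions, and the scoring function are hypothetical, not the thesis architecture:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(frame_features, w):
    """Pool per-frame features with learned attention weights.

    frame_features: (T, D) array of per-frame feature vectors
    w: (D,) hypothetical learned scoring vector
    Returns the pooled (D,) clip vector and the (T,) attention weights;
    the argmax of the weights identifies the key frame for the rating.
    """
    scores = frame_features @ w        # (T,) one relevance score per frame
    alpha = softmax(scores)            # attention distribution over frames
    pooled = alpha @ frame_features    # (D,) attention-weighted average
    return pooled, alpha

rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 8))   # e.g. 16 frames, 8-dim features
w = rng.standard_normal(8)
clip_vec, alpha = attention_pool(feats, w)
print(clip_vec.shape, int(alpha.argmax()))  # clip vector and key-frame index
```

In a trained model the scorer would be learned jointly with the rating head, so the weights concentrate on the frames most responsible for the predicted rating, which is what makes the key-frame visualization interpretable.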
Year of completion: June 2020
Advisor: Prof. C.V. Jawahar