DGAZE Dataset for driver gaze mapping on road

Isha Dua   Thrupthi John   Riya Gupta   C.V. Jawahar

Mercedes Benz   IIIT Hyderabad   IIIT Hyderabad   IIIT Hyderabad

IROS 2020


[Figure: road view]

Dataset Overview

DGAZE is a new dataset for mapping a driver's gaze onto the road. Existing driver-gaze datasets are collected with eye-tracking hardware, which is expensive and cumbersome and thus unsuited for use at test time. Our dataset is therefore designed so that no costly equipment is required during testing: because the data is collected using mobile phones, models trained on it need only a dashboard-mounted mobile phone at deployment. We collected the data in a lab setting with a video of a road projected in front of the driver. We overcome the lack of eye trackers by annotating points on the road video and asking the drivers to look at them. For more details, please refer to our paper.
[ paper ]
We collected road videos using mobile phones mounted on the dashboards of cars driven in the city. We combined these videos into a single 18-minute video with a good mix of road, lighting, and traffic conditions. The road images have varied illumination, as they were captured from morning to evening in real cars on actual roads. For each frame, we annotated a single object belonging to one of seven classes: car, bus, motorbike, pedestrian, auto-rickshaw, traffic signal, and sign board. We also marked the center of each bounding box to serve as the ground truth for the point-detection challenge. We annotated objects that typically draw a driver's attention, such as relevant signage, pedestrians, and intercepting vehicles.
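Deriving the point ground truth from a box annotation can be sketched as follows. This is only an illustration of the center-of-bounding-box convention described above; the `(x, y, w, h)` tuple format is an assumption, not the actual DGAZE file schema.

```python
def bbox_center(x, y, w, h):
    """Return the center of a bounding box (top-left corner x, y;
    width w; height h), used as the point ground truth.
    NOTE: the tuple layout is illustrative, not the DGAZE schema."""
    return (x + w / 2.0, y + h / 2.0)

# Example: a 100x50 box with its top-left corner at (10, 20)
print(bbox_center(10, 20, 100, 50))  # (60.0, 45.0)
```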

Use Cases

The task of driver eye-gaze estimation can be solved as point-wise or object-wise prediction, and we provide annotations supporting both. Both types of eye-gaze prediction are useful. Predicting the object the driver is looking at is useful for higher-level ADAS: this may be done by generating object candidates with an object detector and using the eye gaze to pick which candidate is being observed. Object prediction can determine, for example, whether a driver is focusing on a pedestrian or noticed a signboard. Point-wise predictions are much more fine-grained and more useful for nearby objects, as they show which part of the object is being focused on. They can be used to determine the saccade patterns of the eyes or to create a heatmap of the driver's attention, and they can even be converted into object-wise predictions. Our dataset allows both types of analyses to be conducted.
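The point-to-object conversion mentioned above can be sketched as a simple geometric assignment: given a predicted gaze point and a set of detected boxes, pick the box containing the point (preferring the smallest when boxes overlap), falling back to the nearest box center. This is a minimal sketch under assumed data formats, not the method used in the paper.

```python
import math

def gaze_to_object(gaze, boxes):
    """Map a predicted gaze point to an object candidate.

    gaze:  (x, y) gaze point in image coordinates.
    boxes: list of (label, x, y, w, h) candidates, e.g. from an
           off-the-shelf object detector (format is illustrative).
    Returns the label of the box containing the point; if none
    contains it, the label of the box with the nearest center.
    """
    containing = [b for b in boxes
                  if b[1] <= gaze[0] <= b[1] + b[3]
                  and b[2] <= gaze[1] <= b[2] + b[4]]
    if containing:
        # Prefer the smallest containing box: the most specific object.
        return min(containing, key=lambda b: b[3] * b[4])[0]

    def center_dist(b):
        cx, cy = b[1] + b[3] / 2.0, b[2] + b[4] / 2.0
        return math.hypot(gaze[0] - cx, gaze[1] - cy)

    return min(boxes, key=center_dist)[0]

boxes = [("car", 0, 0, 50, 50), ("pedestrian", 100, 100, 20, 40)]
print(gaze_to_object((110, 120), boxes))  # pedestrian
print(gaze_to_object((10, 10), boxes))    # car
```

A real system would likely weight candidates by gaze uncertainty rather than using a hard nearest-box rule, but the same point-in-box logic underlies both.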


All documents and papers that report research results obtained using the DGAZE dataset should cite the below paper:
Citation: Isha Dua, Thrupthi Ann John, Riya Gupta and C. V. Jawahar. DGAZE: Driver Gaze Mapping on Road. In IROS 2020.


@inproceedings{dua2020dgaze,
  title={DGAZE: Driver Gaze Mapping on Road},
  author={Dua, Isha and John, Thrupthi Ann and Gupta, Riya and Jawahar, CV},
  booktitle={2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2020},
  year={2020}
}

Download Dataset

The DGAZE dataset is available free of charge for research purposes only. Please use the link below to download the dataset (password-protected zip file). Send a mail to Thrupthi Ann John and Isha Dua (contact details below) with the duly filled dataset agreement form, and we will send you the password.
Disclaimer: While every effort has been made to ensure accuracy, the DGAZE dataset owners cannot accept responsibility for errors or omissions.
[Download Dataset Agreement ] [Download Dataset] [Download README]


We would like to thank Akshay Uttama Nambi and Venkat Padmanabhan from Microsoft Research for providing us with resources to collect the DGAZE dataset. This work is partly supported by DST through the IMPRINT program. Thrupthi Ann John is supported by a Visvesvaraya Ph.D. fellowship.


For any queries about the dataset, please contact the authors below:
Isha Dua
Thrupthi Ann John