
Towards Safer and Understandable Driver Intention Predictions

 

Mukilan Karuppasamy1, Shankar Gangisetty1, Shyam Nandan Rai2, Carlo Masone2, C V Jawahar1

1IIIT Hyderabad, India and 2Politecnico di Torino, Italy

 

[Paper] [Code & Dataset] 

 

 

Illustration of an AD scenario for the DIP task. An AD system may intend to take a left turn while encountering a parked or slow-moving vehicle at the turn. Existing DIP models, lacking HCI understanding, might fail to anticipate the obstacle, leading to a potential collision. In contrast, an interpretable model can assess the situation through explainable interactions, adjust its manoeuvre, and safely navigate the turn. Towards this, we propose the VCBM model for DIP, incorporating one or more ego-vehicle explanations to enhance decision-making transparency.

Abstract

Autonomous driving (AD) systems are becoming increasingly capable of handling complex tasks, largely due to recent advances in deep learning and AI. As the interactions between autonomous systems and humans grow, the interpretability of driving system decision-making processes becomes crucial for safe driving. Successful human-machine interaction requires understanding the underlying representations of the environment and the driving task, which remains a significant challenge in deep learning-based systems. To address this, we introduce interpretability to the task of predicting maneuvers before they occur, i.e., driver intention prediction (DIP), which plays a critical role in AD systems and driver safety. To foster research in interpretable DIP, we curate the explainable Driving Action Anticipation Dataset (DAAD-X), a new multimodal, ego-centric video dataset that provides hierarchical, high-level textual explanations as causal reasoning for the driver’s decisions. These explanations are derived from both the driver’s eye-gaze and the ego-vehicle’s perspective. Next, we propose the Video Concept Bottleneck Model (VCBM), a framework that generates spatio-temporally coherent explanations inherently, without relying on post-hoc techniques. Finally, through extensive evaluations of the proposed VCBM on the DAAD-X dataset, we demonstrate that transformer-based models exhibit greater interpretability than conventional CNN-based models. Additionally, we introduce a multilabel t-SNE visualization technique to illustrate the disentanglement and causal correlation among multiple explanations.
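As a concrete illustration of the multilabel t-SNE visualization mentioned above, the sketch below embeds concept features with t-SNE and plots each clip once per explanation label it carries, so clips with several explanations appear in several colours. The file names, array shapes, and binary label encoding are assumptions for illustration only, not the released code.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Assumed inputs: concept embeddings (N, D) and a binary multilabel
# matrix (N, num_explanations) marking which explanations hold per clip.
feats = np.load("concept_features.npy")        # hypothetical file
labels = np.load("explanation_labels.npy")     # hypothetical file

xy = TSNE(n_components=2, perplexity=30, init="pca",
          random_state=0).fit_transform(feats)

# One scatter layer per explanation: a multi-explanation clip is drawn
# once for every explanation it carries.
fig, ax = plt.subplots(figsize=(6, 6))
for c in range(labels.shape[1]):
    mask = labels[:, c] == 1
    ax.scatter(xy[mask, 0], xy[mask, 1], s=8, alpha=0.6, label=f"explanation {c}")
ax.legend(markerscale=2, fontsize=7)
ax.set_title("Multilabel t-SNE of concept embeddings (illustrative)")
plt.show()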

The DAAD-X Dataset

Table. Comparison of datasets for driver intention prediction. Our dataset is a subset of the DAAD dataset and includes additional categories of explanations for multi-modal videos, encompassing both in-cabin (Aria eye-gaze) and out-cabin (ego-vehicle) perspectives.

 

DAAD-X Data Annotation

 

data distribution ego gaze explanations 

Figure. Driving video annotation statistics of DAAD-X dataset. Illustrating the distribution of (left) ego-vehicle explanations and (right) eye-gaze explanations across different maneuver actions. Detailed explanation categories are provided in the supplementary material. Better viewed in zoomed-in mode for clarity.

 

Video Concept Bottleneck Model (VCBM)

 

method overview 

Figure. Overall architecture of the proposed VCBM. The dual video encoder first generates spatio-temporal features (tubelet embeddings) for the paired ego-vehicle and gaze input sequences. These tubelet embeddings are concatenated along the channel dimension and fed into the proposed learnable token merging block, which produces K cluster centers based on composite distances. The clusters are then passed to the localised concept bottleneck to disentangle and predict the maneuver label together with one or more explanations that justify the maneuver decision.
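For readers who prefer code, here is a minimal PyTorch sketch of the data flow described in the caption: two encoders produce tubelet embeddings, which are concatenated along the channel dimension, merged into K tokens, and passed through a concept bottleneck that predicts explanation logits and, from them alone, the maneuver. The linear stand-ins for the video encoders, the average-pool stand-in for learnable token merging, and all dimensions are assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class VCBMSketch(nn.Module):
    """Illustrative data flow only: dual encoders -> token merging ->
    concept bottleneck -> explanation and maneuver heads."""
    def __init__(self, dim=768, k_tokens=8, n_concepts=20, n_maneuvers=5):
        super().__init__()
        self.ego_enc = nn.Linear(dim, dim)    # stand-in for a video transformer
        self.gaze_enc = nn.Linear(dim, dim)   # stand-in for a video transformer
        self.merge = nn.AdaptiveAvgPool1d(k_tokens)  # stand-in for learnable token merging
        self.to_concepts = nn.Linear(2 * dim, n_concepts)      # bottleneck: explanation logits
        self.to_maneuver = nn.Linear(n_concepts, n_maneuvers)  # decision from concepts only

    def forward(self, ego_tubelets, gaze_tubelets):
        # ego_tubelets, gaze_tubelets: (B, N, dim) tubelet embeddings
        z = torch.cat([self.ego_enc(ego_tubelets),
                       self.gaze_enc(gaze_tubelets)], dim=-1)   # (B, N, 2*dim)
        z = self.merge(z.transpose(1, 2)).transpose(1, 2)       # (B, K, 2*dim)
        concept_logits = self.to_concepts(z).mean(dim=1)        # (B, n_concepts)
        maneuver_logits = self.to_maneuver(torch.sigmoid(concept_logits))
        return maneuver_logits, concept_logits

model = VCBMSketch()
m_logits, c_logits = model(torch.randn(2, 16, 768), torch.randn(2, 16, 768))

The defining property of a concept bottleneck is visible in the last two lines of forward: the maneuver head sees only the explanation logits, so every decision is attributable to the predicted concepts.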

 

Results

Table. Evaluation of baselines on the DAAD-X dataset with (wB) and without (woB) the bottleneck. Here, LTM indicates Learnable Token Merging.

daadx5

Table. Gaze modality input variants. Using gaze-cropped regions works better for the DIP task than the usual approach of overlaying the gaze on the frame.

daadx6

gradcam 

Figure. GradCAM visualization of the proposed method. At t = 1, the activations are scattered, but as time progresses to t = T, the CAM gradually refines and localises on important objects. This mirrors how human decision-making evolves over time.
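The temporal CAMs described above can be produced with a standard hook-based Grad-CAM routine over a chosen spatio-temporal feature layer. The sketch below assumes a model that returns class logits and a feature map of shape (1, C, T, H, W); it is a generic routine, not the authors' exact script.

import torch

def gradcam_3d(model, feat_layer, video, target_class):
    """Minimal Grad-CAM sketch for a video model (assumed interface)."""
    feats, grads = [], []
    h1 = feat_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = feat_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    logits = model(video)                                # (1, num_classes)
    logits[0, target_class].backward()
    h1.remove(); h2.remove()
    fmap, grad = feats[0], grads[0]                      # (1, C, T, H, W)
    weights = grad.mean(dim=(2, 3, 4), keepdim=True)     # channel-wise weights
    cam = torch.relu((weights * fmap).sum(dim=1))        # (1, T, H, W)
    cam = cam / (cam.amax(dim=(1, 2, 3), keepdim=True) + 1e-8)
    return cam   # one heatmap per time step; upsample to input size for overlay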

 

Citation

@inproceedings{vcbm2025daadx,
  author    = {Mukilan Karuppasamy and Shankar Gangisetty and Shyam Nandan Rai and Carlo Masone and C. V. Jawahar},
  title     = {Towards Safer and Understandable Driver Intention Prediction},
  booktitle = {ICCV},
  year      = {2025},
}

 

Acknowledgements

This work is supported by iHub-Data and Mobility at IIIT Hyderabad.

Pedestrian Intention and Trajectory Prediction in Unstructured Traffic Using IDD-PeD

 

Ruthvik Bokkasam1, Shankar Gangisetty1, A. H. Abdul Hafez2,

C V Jawahar1

1IIIT Hyderabad, India and 2King Faisal University, Saudi Arabia

[Paper Link] [ Code & Dataset ] 

 iddped1

Fig. 1: Illustration of pedestrian intention and trajectory prediction encountering various challenges within our unstructured traffic IDD-PeD dataset. The challenges include occlusions, signalized types, vehicle-pedestrian interactions, and illumination changes. Intent labels: C (crossing, shown with trajectory) and NC (not crossing).

Abstract

With the rapid advancements in autonomous driving, accurately predicting pedestrian behavior has become essential for ensuring safety in complex and unpredictable traffic conditions. The growing interest in this challenge highlights the need for comprehensive datasets that capture unstructured environments, enabling the development of more robust prediction models to enhance pedestrian safety and vehicle navigation. In this paper, we introduce an Indian driving pedestrian dataset designed to address the complexities of modeling pedestrian behavior in unstructured environments, such as illumination changes, occlusion of pedestrians, unsignalized scene types, and vehicle-pedestrian interactions. The dataset provides comprehensive high-level and detailed low-level annotations focused on pedestrians requiring the ego-vehicle’s attention. Evaluation of state-of-the-art intention prediction methods on our dataset shows a significant performance drop of up to 15%, while trajectory prediction methods underperform with an increase of up to 1208 MSE, relative to standard pedestrian datasets. Additionally, we present an exhaustive quantitative and qualitative analysis of intention and trajectory baselines. We believe that our dataset will open new challenges for the pedestrian behavior research community to build robust models.

 

The IDD-PeD dataset

Table 1: Comparison of datasets for pedestrian behavior understanding. On-board diagnostics (OBD) provides ego-vehicle speed, acceleration, and GPS information. Group annotation represents the number of pedestrians moving together; in our dataset, about 1,800 move individually while the rest move in groups of 2 or more. Interaction annotation refers to a label between the ego-vehicle and a pedestrian where both influence each other’s movements and decisions. ✓ and ✗ indicate the presence or absence of annotated data.

 

Fig. 2: Annotation instances and data statistics of IDD-PeD. Distribution of (a) frame-level ego-vehicle speeds, (b) pedestrian at signalized types such as crosswalk (C), signal (S), crosswalk and signal (CS), and absence of crosswalk and signal (NA), (c) pedestrians with track lengths at day and night, (d) frame-level different behavior analysis and traffic objects annotation, and (e) pedestrian occlusions.

Results

Pedestrian Intention Prediction (PIP) Baselines

Table 2: Evaluation of PIP baselines on JAAD, PIE, and our datasets.

Pedestrian Trajectory Prediction (PTP) Baselines

Table 3: Evaluation of PTP baselines on JAAD, PIE and our datasets. We report MSE results at 1.5s. “-” indicates no results as PIETraj needs explicit ego-vehicle speeds.

iddped6

Fig.: Qualitative evaluation of the best and worst PTP models on our dataset. Red: SGNet, Blue: PIETraj, Green: Ground truth, White: Observation period. To better illustrate and highlight key factors in PIP and PTP methods, a qualitative analysis will be provided in the supplementary video.
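Table 3 above reports bounding-box MSE at a 1.5 s prediction horizon. A minimal version of that metric, assuming 30 fps tracks and (x1, y1, x2, y2) pixel boxes (both assumptions, not confirmed by the paper page), looks like this:

import numpy as np

def mse_at_horizon(pred, gt, horizon_s=1.5, fps=30):
    """MSE between predicted and ground-truth bounding boxes up to a horizon.

    pred, gt: arrays of shape (N, T, 4) with (x1, y1, x2, y2) per frame.
    """
    steps = int(horizon_s * fps)
    diff = pred[:, :steps] - gt[:, :steps]
    return float(np.mean(diff ** 2))

# Toy usage with random tracks, just to show the call signature.
pred = np.random.rand(8, 60, 4) * 100
gt = np.random.rand(8, 60, 4) * 100
print(mse_at_horizon(pred, gt))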

 

Citation

@inproceedings{idd2025ped,
  author    = {Ruthvik Bokkasam and Shankar Gangisetty and A. H. Abdul Hafez and C. V. Jawahar},
  title     = {Pedestrian Intention and Trajectory Prediction in Unstructured Traffic Using IDD-PeD},
  booktitle = {ICRA},
  publisher = {IEEE},
  year      = {2025},
}

 

Acknowledgements

This work is supported by iHub-Data and Mobility at IIIT Hyderabad.

Visual Place Recognition in Unstructured Driving Environments

Utkarsh Rai1, Shankar Gangisetty1, A. H. Abdul Hafez1,

Anbumani Subramanian1, C V Jawahar1

1IIIT Hyderabad, India

 

[ Paper ]     [ Code ]     [ Dataset ] 

001

Fig. 1: Illustration of visual place recognition encountering various challenges across the three routes within our unstructured driving VPR dataset. The challenges include occlusions, traffic density changes, viewpoint changes, and variations in illumination.

Abstract

The problem of determining geolocation through visual inputs, known as Visual Place Recognition (VPR), has attracted significant attention in recent years owing to its potential applications in autonomous self-driving systems. The rising interest in these applications poses unique challenges, particularly the necessity for datasets encompassing unstructured environmental conditions to facilitate the development of robust VPR methods. In this paper, we address the VPR challenges by proposing an Indian driving VPR dataset that caters to the semantic diversity of unstructured driving environments, such as occlusions due to dynamic environments, variations in traffic density, viewpoint variability, and variability in lighting conditions. In unstructured driving environments, GPS signals are unreliable, often preventing the vehicle from accurately determining its location. To address this challenge, we develop an interactive image-to-image tagging annotation tool to annotate large datasets with ground-truth annotations for VPR training. Evaluation of state-of-the-art methods on our dataset shows a significant performance drop of up to 15% relative to a large number of standard VPR datasets. We also provide an exhaustive quantitative and qualitative experimental analysis of frontal-view, multi-view, and sequence-matching methods. We believe that our dataset will open new challenges for the VPR research community to build robust models. The dataset, code, and tool will be released on acceptance.

The IDD-VPR dataset

Data Capture and Collection

Table 1: Comparison of datasets for visual place recognition. Total length is the coverage multiplied by the number of times each route was traversed. Time span is from the first recording of a route to the last recording.

002

003

Fig. 2: Data collection map for the three routes. The map displays the actual routes (in blue color) taken and superimposed with maximum GPS drift due to signal loss (dashed lines in red color). This GPS inconsistency required manual correction.

Data Annotation

During data capture, ensuring consistent and error-free GPS readings across all three route traversals was challenging, as shown in Fig. 2. Through our image-to-image tagging annotation process, we ensured that each location was tagged with the appropriate GPS readings, maintaining a mean error of less than 10 meters. We developed an image-to-image matching annotation tool, presented in Fig. 3.
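The reported mean tagging error of under 10 meters can be checked with a standard haversine distance between the annotated and reference GPS fixes. The sketch below is a generic check, not the annotation tool itself, and the coordinate pairs shown are hypothetical.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Mean tagging error over (annotated, reference) GPS pairs.
pairs = [((17.4452, 78.3498), (17.4453, 78.3499))]   # hypothetical fixes
errors = [haversine_m(a[0], a[1], b[0], b[1]) for a, b in pairs]
print(sum(errors) / len(errors), "m")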

 

009

 

 Fig. 3: Image-to-image annotation tool for (query, reference) pair matching by the annotators with GPS tagging.

004

Fig. 4: Data capture span. Left: based on months and Right: diversity of samples encompasses different weather conditions, including overcast (Sep’23, Oct’23), winter (Dec’23, Jan’24), and spring (Feb’24).

 

 

 

Results

Frontal-View Place Recognition

Table 2: Evaluation of baselines on Frontal-View datasets, including IDD-VPR. We report overall recall@1, split by backbone, with a descriptor dimension of 4096-D.

006
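Recall@1 in the tables can be computed by retrieving the nearest reference descriptor for each query and checking whether that reference lies within a localization threshold of the query's ground-truth position. The 25 m threshold, metric (x, y) coordinates, and array names in the sketch below are assumptions for illustration, not the evaluation protocol verbatim.

import numpy as np

def recall_at_1(q_desc, r_desc, q_xy, r_xy, thresh_m=25.0):
    """Fraction of queries whose nearest reference (by L2 on descriptors)
    lies within thresh_m metres of the query's ground-truth position.

    q_desc: (Nq, D), r_desc: (Nr, D); q_xy, r_xy: metric (x, y) coordinates.
    """
    d = np.linalg.norm(q_desc[:, None, :] - r_desc[None, :, :], axis=-1)
    nn = d.argmin(axis=1)                          # index of top-1 reference
    dist = np.linalg.norm(q_xy - r_xy[nn], axis=1)
    return float((dist <= thresh_m).mean())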

 

Multi-View Place Recognition

Table 3: Evaluation of baselines on Multi-View datasets, including IDD-VPR. We report overall recall@1, split by backbone, with a descriptor dimension of 4096-D.

007

 

008

Fig. 5: Qualitative comparison of baselines on our dataset. The first column comprises query images of unstructured driving environmental challenges, while the subsequent columns showcase the retrieved images for each of the methods. Green: true positive; Red: false positive. 

Citation

@inproceedings{idd2024vpr,
  author       = {Utkarsh Rai and Shankar Gangisetty and A. H. Abdul Hafez and Anbumani Subramanian and
                  C. V. Jawahar},
  title        = {Visual Place Recognition in Unstructured Driving Environments},
  booktitle    = {IROS},
  pages        = {10724--10731},
  publisher    = {IEEE},
  year         = {2024}
}

Acknowledgements

This work is supported by iHub-Data and Mobility at IIIT Hyderabad.

Early Anticipation of Driving Maneuvers

Abdul Wasi1, Shankar Gangisetty1, Shyam Nandan Rai2, C V Jawahar1

1IIIT Hyderabad, India and 2Politecnico di Torino, Italy

[Paper]       [Code]     [Dataset]

 

daad1

Fig.: Overview of DAAD dataset for ADM task. Left: Shows previous datasets containing maneuver videos from their initiation to their execution (DM), whereas our DAAD dataset features longer video sequences providing prior context (BM), which proves beneficial for early maneuver anticipation. Right: Illustrates the multi-view and multi-modality (Gaze through the egocentric view) in DAAD for ADM.

Abstract

Prior works have addressed the problem of driver intention prediction (DIP) to identify maneuvers after their onset. On the other hand, early anticipation is equally important in scenarios that demand a preemptive response before a maneuver begins. However, no prior work addresses driver action anticipation before the onset of the maneuver, limiting the ability of advanced driver assistance systems (ADAS) to anticipate maneuvers early. In this work, we introduce Anticipating Driving Maneuvers (ADM), a new task that enables driver action anticipation before the onset of the maneuver. To initiate research in this new task, we curate the Driving Action Anticipation Dataset, DAAD, which is multi-view (in- and out-cabin views in dense and heterogeneous scenarios) and multimodal (egocentric view and gaze information). The dataset captures sequences both before the initiation and during the execution of a maneuver. During dataset collection, we also ensure wide diversity in traffic scenarios, weather and illumination, and driveway conditions. Next, we propose a strong baseline based on a transformer architecture to effectively model multiple views and modalities over longer video lengths. We benchmark the existing DIP methods on DAAD and related datasets. Finally, we perform an ablation study showing the effectiveness of multiple views and modalities in maneuver anticipation.
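To make the multi-view, multimodal modeling concrete, here is a generic PyTorch sketch that adds a learnable view embedding to per-view clip tokens and fuses them with a small transformer encoder before classifying the maneuver. This is an illustrative baseline pattern only, not the proposed M2MVT, and all module names and sizes are assumptions.

import torch
import torch.nn as nn

class MultiViewFusionSketch(nn.Module):
    """Generic multi-view fusion: per-view clip tokens are tagged with a
    view embedding, concatenated, and mixed by a transformer encoder."""
    def __init__(self, dim=512, n_views=6, n_classes=5):
        super().__init__()
        self.view_embed = nn.Parameter(torch.randn(n_views, 1, dim) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.cls = nn.Linear(dim, n_classes)

    def forward(self, view_tokens):
        # view_tokens: list of n_views tensors, each (B, T, dim)
        tokens = [t + e for t, e in zip(view_tokens, self.view_embed)]
        x = torch.cat(tokens, dim=1)              # (B, n_views*T, dim)
        x = self.encoder(x)
        return self.cls(x.mean(dim=1))            # maneuver logits

model = MultiViewFusionSketch()
views = [torch.randn(2, 8, 512) for _ in range(6)]
logits = model(views)                             # (2, 5)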

The DAAD dataset

 

daad2

 Fig.: Data samples. DAAD in comparison to Brain4Cars, VIENA2 and HDD datasets. DAAD exhibits great diversity in various driving conditions (traffic density, day/night, weather, type of routes) across different driving maneuvers. 

Results

 

daad3

Fig.: Effect of time-to-maneuver. (a) Accuracy over time for different driving datasets on CEMFormer (with ViT encoder). We conducted three separate experiments for the DAAD dataset. (i) DAAD-DM: Training and testing only on the maneuver sequences (DM). (ii) DAAD-Full: Training and testing on the whole video. (iii) DAAD-BM: Training on a portion of the video captured before the onset of maneuver (BM) and testing on the whole video; (b) Accuracy over time for our dataset on ViT, MViT, MViTv2 encoders, and the proposed method (M2MVT). Here, t is the time of onset of the maneuver.
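The curves in the figure report accuracy as a function of time-to-maneuver. A minimal way to compute such a curve is to bucket each prediction by how many seconds before maneuver onset it was made and take per-bucket accuracy; the bin edges and field names below are assumptions for illustration.

import numpy as np

def accuracy_vs_tta(pred, gt, tta, bin_edges=(0, 1, 2, 3, 4, 5)):
    """Accuracy bucketed by time-to-maneuver (seconds before onset).

    pred, gt: (N,) predicted / true maneuver labels per clip.
    tta: (N,) seconds between prediction time and maneuver onset.
    """
    pred, gt, tta = map(np.asarray, (pred, gt, tta))
    out = {}
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        m = (tta >= lo) & (tta < hi)
        out[f"{lo}-{hi}s"] = float((pred[m] == gt[m]).mean()) if m.any() else None
    return out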

Dataset

  • Driver View
  • Front View
  • Gaze View
  • Left View
  • Rear View
  • Right View

Citation

@inproceedings{adm2024daad,
  author       = {Abdul Wasi and Shankar Gangisetty and Shyam Nandan Rai and C. V. Jawahar},
  title        = {Early Anticipation of Driving Maneuvers},
  booktitle    = {ECCV (70)},
  series       = {Lecture Notes in Computer Science},
  volume       = {15128},
  pages        = {152--169},
  publisher    = {Springer},
  year         = {2024}
}

 

Acknowledgements

This work is supported by iHub-Data and Mobility at IIIT Hyderabad.

 

 

