
Enhancing Driving Visibility via Semantic-Guided Knowledge Distillation Framework for Adverse Weather Removal

 

Hanvitha Saraswathi Mukkamala, Shankar Gangisetty, Ananya Kulkarni,

Veera Ganesh Yalla, C V Jawahar

IIIT Hyderabad

 

[Paper] [Code]

 


weatherremoval1

 Figure 1. Illustration of our framework on an ADAS vehicle in a rainy driving scenario for Adverse Weather Removal.

Abstract

Adverse weather such as rain, haze, and low light severely degrades visual perception in Advanced Driver Assistance Systems (ADAS) and autonomous driving, leading to degraded scene understanding and increased safety risks. We propose a unified, semantic-guided knowledge distillation restoration framework that addresses multi-weather removal while preserving semantics. Our method employs a semantic-guided dual-decoder architecture trained via two-stage multi-teacher knowledge distillation, transferring expertise from multiple high-capacity models into a lightweight student model. Segmentation-aware contrastive learning further aligns low-level restoration with high-level semantic structure, enabling robust detection of roads, vehicles, and pedestrians under challenging conditions. Trained on a mix of synthetic and real-world data with segmentation-guided feature refinement, our framework generalises effectively to real-world unseen environments. Extensive experiments on multiple benchmarks show competitive or superior performance to state-of-the-art methods, with real-time inference suitable for edge deployment. This makes our approach well-suited for safety-critical perception in autonomous and semi-autonomous systems operating in adverse outdoor environments.

 Methodology

Our method builds a unified lightweight student model for adverse weather removal by distilling knowledge from multiple weather-specific teacher networks. The key idea is to combine image restoration with semantic guidance, so the model not only improves visibility but also preserves important scene structures such as roads, vehicles, and pedestrians.

weatherremoval2

 

 Figure 2. Overview of our semantic-guided knowledge distillation framework. Collaborative distillation from multiple weather-specific teachers (rain, haze) to a unified student proceeds in two stages: (1) Knowledge Collation, where the student is softly aligned with teacher reconstructions and segmentation priors, and (2) Knowledge Examination, where ground-truth consistency is enforced with hard constraints. Segmentation maps provide region-aware supervision, emphasizing critical structures like roads and vehicles.

At inference time, only the lightweight student model is used. This makes the framework efficient, memory-friendly, and suitable for real-time deployment without requiring any teacher model during testing.
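To make the two-stage supervision in Figure 2 concrete, the following is a minimal PyTorch-style sketch under our own assumptions: the student, the weather-specific teachers, and the segmentation network are hypothetical stand-in modules, and the region weighting is a toy proxy for the segmentation-guided supervision, not the released training code.

import torch

def distillation_step(student, teachers, seg_model, degraded, clean, stage):
    """One training step of the two-stage multi-teacher distillation (sketch)."""
    restored = student(degraded)

    # Region-aware weights from segmentation priors: up-weight pixels that
    # fall on safety-critical classes (e.g., road, vehicle, pedestrian).
    with torch.no_grad():
        classes = seg_model(clean).argmax(dim=1, keepdim=True)  # (B, 1, H, W)
        weight = (classes <= 2).float() + 1.0                   # 2.0 on critical regions

    if stage == "collation":
        # Stage 1 (Knowledge Collation): soft alignment of the student's
        # reconstruction with every weather-specific teacher's output.
        loss = 0.0
        for teacher in teachers.values():
            with torch.no_grad():
                target = teacher(degraded)
            loss = loss + (weight * (restored - target).abs()).mean()
        loss = loss / len(teachers)
    else:
        # Stage 2 (Knowledge Examination): hard ground-truth consistency.
        loss = (weight * (restored - clean).abs()).mean()
    return loss

At test time only student(degraded) is evaluated, which is what keeps the deployed model lightweight.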

 

QUANTITATIVE RESULTS

  Our method achieves strong performance across both synthetic and real-world adverse weather datasets, showing consistent improvements in restoration quality while preserving semantic structure. 

weatherremoval3

 Table 1: Quantitative evaluation of image dehazing performance on synthetic and real datasets.

 

weatherremoval4

 Table 2: Quantitative evaluation of image deraining performance on synthetic and real datasets.

 

weatherremoval5

 Table 3: Quantitative comparison on real-world derained and dehazed images from IDD-AW using NIQE and BRISQUE.

 

QUALITATIVE RESULTS

weatherremoval6

 Figure 3: Visual comparison of the proposed and existing methods from real datasets (Raindrop, SPA, O-HAZE) for multi-weather restoration. The image can be zoomed in for improved visualization.

 

weatherremoval7

 Figure 4: Visual comparison of the proposed and existing methods from synthetic datasets (Outdoor-Rain, RESIDE) for multi-weather restoration. The image can be zoomed in for improved visualization.

 
weatherremoval8

 Figure 5: Qualitative comparison of real-world rain and haze images with low-light conditions from the IDD-AW dataset.

 

Citation


@inproceedings{Enhancing2025icvgip,
author = {Hanvitha Saraswathi Mukkamala and Shankar Gangisetty and Ananya Kulkarni and Veera Ganesh Yalla and C. V. Jawahar},
title = {Enhancing Driving Visibility via Semantic-Guided Knowledge Distillation Framework for Adverse Weather Removal},
booktitle = {},
series = {},
volume = {},
pages = {},
publisher = {},
year = {2025},
}

 

Acknowledgements

This work is supported by iHub-Data and Mobility at IIIT Hyderabad.

 

 

Sketchtopia: A Dataset and Foundational Agents for Benchmarking Asynchronous Multimodal Communication with Iconic Feedback

 

Mohd Hozaifa Khan, Ravi Kiran Sarvadevabhatla

IIIT Hyderabad | CVIT

The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2025

 

[Paper] [CVPR Paper] [Dataset (WIP)] [Demo]

Audio Overview

Listen to a brief overview of our research

Download Audio

Abstract

 

We introduce Sketchtopia, a large-scale dataset and AI framework designed to explore goal-driven, multimodal communication through asynchronous interactions in a Pictionary-inspired setup. Sketchtopia captures natural human interactions, including freehand sketches, open-ended guesses, and iconic feedback gestures, showcasing the complex dynamics of cooperative communication under constraints. It features over 20K gameplay sessions from 916 players, capturing 263K sketches, 10K erases, 56K guesses and 19.4K iconic feedbacks.

We introduce multimodal foundational agents with capabilities for generative sketching, guess generation and asynchronous communication. Our dataset also includes 800 human-agent sessions for benchmarking the agents. We introduce novel metrics to characterize collaborative success, responsiveness to feedback and inter-agent asynchronous communication. Sketchtopia pushes the boundaries of multimodal AI, establishing a new benchmark for studying asynchronous, goal-oriented interactions between humans and AI agents.
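To ground the asynchronous, multimodal structure described above, here is a minimal, hypothetical Python sketch of how a gameplay session could be represented as a time-stamped event stream; the field names are illustrative and are not the released dataset schema.

from dataclasses import dataclass, field
from typing import List, Literal

@dataclass
class Event:
    t: float                                # seconds since the session started
    actor: Literal["drawer", "guesser"]
    kind: Literal["stroke", "erase", "guess", "feedback"]
    payload: object                         # stroke points, guess text, or an iconic emoji

@dataclass
class Session:
    target_word: str
    events: List[Event] = field(default_factory=list)

    def solved(self) -> bool:
        return any(e.kind == "guess" and str(e.payload).lower() == self.target_word.lower()
                   for e in self.events)

# A guess and iconic feedback arrive while the drawer is still sketching,
# which is exactly the asynchronous behaviour the benchmark targets.
s = Session("bicycle", [
    Event(1.2, "drawer", "stroke", [(10, 20), (40, 25)]),
    Event(2.0, "guesser", "guess", "wheel"),
    Event(2.4, "drawer", "feedback", "👍"),
    Event(5.1, "guesser", "guess", "bicycle"),
])
print(s.solved())  # True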

 

Key Contributions

Rich Dataset

Large-scale, multimodal data capturing real-world asynchronous sketching dynamics.

Foundational Agents

DRAWBOT & GUESSBOT designed for asynchronous interaction.

New Metrics

Metrics like AAO, FRS, MATS for evaluation.

 

Dataset Highlights: Multimodal & Asynchronous

20K+

Sessions

Rich collection capturing diverse human Pictionary gameplay.

263K+

Sketches

Massive corpus of iterative freehand drawings for visual communication.

56K+

Open-ended Guesses

Natural language guesses reflecting understanding of visual cues.

19K+

Iconic Feedback

Non-verbal cues (👍👎❓) guiding the collaborative process asynchronously.

916

Players

Data from a diverse participant group ensuring robust analysis.

800

Human-Agent Sessions

Valuable data from humans interacting with our agents.

 

Distilling What and Why: Enhancing Driver Intention Prediction with MLLMs

Sainithin Artham, Avijit Dasgupta, Shankar Gangisetty, C V Jawahar

 Methodology

 

distil2

 

 Figure. Our proposed framework for the DIP task. DriveXplain generates natural language explanations alongside maneuvers and Explanation Distillation distills these explanations into a single MLLM to enhance DIP performance at inference.

 

 

 

Key Highlights:

 

  • New Task: Understanding why a driver makes a decision is just as important as predicting what they’ll do next. 
  • DriveXplain Model: We introduce a zero-shot framework that enhances MLLMs for ADAS by embedding driving-specific context directly into their reasoning.
  • Knowledge Distillation: To enable real-time, deployable solutions, we distill reasoning and decision-making capabilities from large MLLMs into smaller, efficient models, paving the way for explainable driving intelligence (a minimal loss sketch follows below).
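Below is a minimal, hypothetical PyTorch-style sketch of token-level explanation distillation under our own assumptions; the shapes and the maneuver/explanation weighting are illustrative, not the paper's exact objective.

import torch.nn.functional as F

def explanation_distillation_loss(student_logits, teacher_ids, maneuver_mask, alpha=0.5):
    """student_logits: (B, T, V) student scores over the teacher-written sequence.
    teacher_ids: (B, T) tokens of the teacher's maneuver label plus explanation.
    maneuver_mask: (B, T) float mask marking the maneuver tokens."""
    ce = F.cross_entropy(
        student_logits.reshape(-1, student_logits.size(-1)),
        teacher_ids.reshape(-1),
        reduction="none",
    ).reshape(teacher_ids.shape)

    # Weight the "what" (maneuver tokens) and the "why" (explanation tokens) separately.
    maneuver_loss = (ce * maneuver_mask).sum() / maneuver_mask.sum().clamp(min=1)
    explain_loss = (ce * (1 - maneuver_mask)).sum() / (1 - maneuver_mask).sum().clamp(min=1)
    return alpha * maneuver_loss + (1 - alpha) * explain_loss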

 

 Results

 


 

 Table: DIP benchmark results. Performance comparison of driving-specific VLMs, general VLMs, action anticipation models, and our framework (DriveXplain, ED). Accuracy (Acc.) and F1 (%) on the Brain4Cars, AIDE, and DAAD datasets. Finetune indicates whether the model was fine-tuned (✓) or evaluated in a zero-shot (✗) setting. Bold and underline indicate the best and second-best results.

 

 distil4

 

 Figure: Qualitative comparison of the proposed framework, zero-shot Qwen2.5-VL, and Dolphins across the Brain4Cars, AIDE, and DAAD datasets. We show manoeuvre prediction (what) and explanation (why), with attention heatmaps highlighting key regions.

 

 

 

Citation

@inproceedings{vcbm2025daadx,
author = {Sainithin Artham and Avijit Dasgupta and Shankar Gangisetty and C. V. Jawahar},
title = {Distilling What and Why: Enhancing Driver Intention Prediction with MLLMs},
booktitle = {},
series = {},
volume = {},
pages = {},
publisher = {},
year = {2025},
}

 

Acknowledgements

This work is supported by iHub-Data and Mobility at IIIT Hyderabad.

 

 

 

IndicDLP: A Foundational Dataset for Multi-Lingual and Multi-Domain Document Layout Parsing

ICDAR 2025 (Oral) — 🏆 Best Student Paper Runner-Up Award 

Oikantik Nath1, Sahithi Kukkala2, Mitesh Khapra1, Ravi Kiran Sarvadevabhatla2

IIIT Madras1, IIIT Hyderabad2

[Paper]       [Code]     [Dataset]

 

Indic1

 

Indic2

 

Indic3

 

Indic4

 

 

Samples from the IndicDLP dataset highlighting its diversity across document formats, domains, languages, and temporal span. For easier visual differentiation, segmentation masks are used instead of bounding boxes to highlight regions.

 

dataset1

 

 

 

The above figure illustrates the contributions of 12 languages (left) and 12 document domains (right) in the IndicDLP dataset. The distribution is fairly balanced across both categories, with no single language or domain overwhelmingly dominating the dataset. This ensures a diverse and well-represented collection.

 

dataset2

Comparison of modern document layout parsing datasets.

 

Citation

Please cite our paper if you find this dataset or work useful:

@inproceedings{10.1007/978-3-032-04614-7_2,
  author       = {Oikantik Nath and Sahithi Kukkala and Mitesh Khapra and Ravi Kiran Sarvadevabhatla},
  editor       = {Xu-Cheng Yin and Dimosthenis Karatzas and Daniel Lopresti},
  title        = {IndicDLP: A Foundational Dataset for Multi-lingual and Multi-domain Document Layout Parsing},
  booktitle    = {Document Analysis and Recognition -- ICDAR 2025},
  year         = {2026},
  publisher    = {Springer Nature Switzerland},
  address      = {Cham},
  pages        = {23--39},
  isbn         = {978-3-032-04614-7},
  abstract     = {Document layout analysis is essential for downstream tasks such as information retrieval,
extraction, OCR, and digitisation. However, existing large-scale datasets like PubLayNet and DocBank lack
fine-grained region labels and multilingual diversity, making them insufficient for representing complex document
layouts. Human-annotated datasets such as M6Doc and D4LA offer richer labels and greater domain diversity,
but are too small to train robust models and lack adequate multilingual coverage. This gap is especially
pronounced for Indic documents, which encompass diverse scripts yet remain underrepresented in current datasets,
further limiting progress in this space. To address these shortcomings, we introduce IndicDLP, a large-scale
foundational document layout dataset spanning 11 representative Indic languages alongside English and 12 common
document domains. Additionally, we curate UED-mini, a dataset derived from DocLayNet and M6Doc, to enhance
pretraining and provide a solid foundation for Indic layout models. Our experiments demonstrate that fine-tuning
existing English models on IndicDLP significantly boosts performance, validating its effectiveness. Moreover,
models trained on IndicDLP generalise well beyond Indic layouts, making it a valuable resource for document
digitisation. This work bridges gaps in scale, diversity, and annotation granularity, driving inclusive and
efficient document understanding.}
}

Acknowledgments

Assamese
Yuvaraj - Superchecker
Rondeep Bordoloi - Reviewer

Ajit Kumar Sarma - Annotator
Anjali Steephan - Annotator
Madhutrishna Chetia - Annotator
Riya Chutia - Annotator
Ruh Ullah Khan - Annotator

Bengali
Praneeth Reddy - Superchecker
Rondeep Bordoloi - Reviewer

Gargi Mukherjee Kolley - Annotator
Madhumita Pal - Annotator
Priyanjana Banerjee - Annotator
Soupat Biswas - Annotator
Sushmita Pal - Annotator

English
Hemavardhini R - Superchecker
Yuvaraj - Superchecker
Ragavan S - Reviewer

Ghiridharan M G - Annotator
Munish Mangla - Annotator
Rubeena - Annotator
Vidhya J G - Annotator


Gujarati
Praneeth Reddy - Superchecker
Kaniz Fatema - Reviewer

Bhargav Bhatt - Annotator
Kinjal Joshi - Annotator
Naman Mehta - Annotator
Parth B - Annotator
Parthiv Makwana - Annotator
Shreya Parmar - Annotator
Vama Soni - Annotator

Hindi
Hemavardhini R - Superchecker
Puru Koli - Reviewer

Adiba Khan - Annotator
Anima Chetry - Annotator
Arati Giri - Annotator
Ashish Kumar Jha - Annotator
Bhakti Rai - Annotator
Furtengi Sherpa - Annotator
Keshav Prasad Sapkota - Annotator
Nilesh lagade - Annotator
Rushaid Abbas - Annotator

 

Kannada
Hemavardhini R - Superchecker
Ragavan S - Reviewer
Ramya - Reviewer
Sreejanani Sanke - Reviewer

Charulatha S - Annotator
Nandini Vijay - Annotator
Rajeshwari Lakkannavar - Annotator
Suma Girish - Annotator
Vidya Kulkarni - Annotator
Virat Kumar Pandey - Annotator


Malayalam
Neha Bandekar - Superchecker
Ramya - Reviewer
Swetha - Reviewer

ABHINAV P M - Annotator
Amal I C - Annotator
Nadha rashada S V - Annotator
SANJAY.R - Annotator
Sreelekshmi S - Annotator

Marathi
Neha Bandekar - Superchecker
Nikita Digraskar - Reviewer

Manjunath Renake - Annotator
Nitin Paranjape - Annotator
Sachin Deepak Londhe - Annotator
Tejas Vishnupant Akhare - Annotator

Odia
Neha Bandekar - Superchecker
Harihara Barik - Reviewer

Lalatendu Bidyadhar Das - Annotator
Rajat Kumar patra - Annotator
Satyabrat Badajena - Annotator
Sradhanjali Pradhan - Annotator


Punjabi
Yuvaraj - Superchecker
Saranpal Singh - Reviewer

HarvinderSingh GurmeetSingh Ragi - Annotator
Inderpreet - Annotator
Jaydeep Singh Shahu - Annotator
Lovepreet Singh - Annotator
Niharika Khanna - Annotator
Sukhpreet Kaur - Annotator

Tamil
Hemavardhini R - Superchecker
Swetha - Reviewer

Bensha Joyson - Annotator
N. Gana Priyan - Annotator
N.Indupriya - Annotator

Telugu
Praneeth Reddy - Superchecker
Sreejanani Sanke - Reviewer

Deepika Senapathi - Annotator
Ediga Sivakumar Goud - Annotator
Naresh Nune - Annotator
Vakkapati Divyasri - Annotator
Vani Bhaskar - Annotator


 

 

We would like to acknowledge the support from the Indian Institute of Technology Madras, India, and the International Institute of Information Technology, Hyderabad, India.

 

 

MOTOR: A Multimodal Dataset for Two-Wheeler Rider Behavior Understanding

 

Varun Paturkar, Shankar Gangisetty, C V Jawahar

IIIT Hyderabad

 

[Paper] [Code] [Dataset] [Project Webpage]

 


Motor IMG

 Fig: Comparison of traffic contexts and accident statistics across the Global North and South. Top row: four-wheelers dominate in the USA, while two-wheelers dominate in India. Bottom row: distribution of vehicles (two-wheeler vs. four-wheeler) and fatal accidents across the Global North and South.

Abstract

Two-wheelers account for a disproportionately high share of road fatalities in the Global South. Research on rider behavior, however, lags far behind four-wheelers, where multimodal datasets have driven major advances in Advanced Driver Assistance Systems (ADAS). To address this gap, we present the MOtorized TwO-wheeler Rider (MOTOR) dataset, the first large-scale, multi-view, multimodal resource dedicated to two-wheelers in dense, unstructured traffic. MOTOR comprises 2,500 sequences (25+ hours) collected from 16 riders and integrates synchronized front, rear, and helmet videos, rider eye-gaze from wearable trackers, on-road audio, and telemetry (GPS, accelerometer, gyroscope). Rich annotations capture traffic context, rider state, 12 riding maneuvers spanning conventional and unconventional behaviors, and legality labels (Legal, Illegal, Unspecified). We benchmark rider behavior recognition and maneuver legality classification using state-of-the-art video action recognition backbones (CNN and Transformer-based), extended with multimodal fusion, and find that combining RGB, gaze, and telemetry consistently yields the best performance. MOTOR thus provides a unique foundation for advancing safety-critical understanding of two-wheeler riding. It offers the research community a benchmark to develop and evaluate models for behavior analysis, legality-aware prediction, and intelligent transportation systems. Dataset and code will be made publicly available.

 

 The MOTOR Dataset

 Table: Comparison of 4-wheeler and 2-wheeler behavior datasets. Our dataset is unique as it contains multi-modal, multi-view videos from ego-vehicle and helmet, eye gaze, as well as annotated conventional and unconventional behaviors, and legality-related riding scenarios. Note: CRB indicates conventional riding behaviors, and UCRB means unconventional riding behaviors.

Motor IMG2

 

Motor IMG3

 Fig: Data samples helmet-view. (a) Ego-rider weaves through dense, slow traffic, overtaking multiple vehicles across lanes. (b) Rider squeezes through a narrow gap between a bus and a car, narrowly avoiding the bus. (c) Rider rides in the wrong lane against dense oncoming traffic, disrupting flow. (d) Rider turns head fully toward a roadside building, diverting gaze from the road amid fast-moving traffic.

 

 Results

 Table: Rider Behavior Classification. Comparison of CNN and Transformer-based baselines on the MOTOR dataset across different modality combinations.

Motor IMG4

 

 Table: Rider Legality Classification. CNN and Transformer-based baselines on the MOTOR dataset across different modality combinations.

Motor IMG5
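As a concrete illustration of the modality combinations compared in the tables above, here is a minimal, hypothetical late-fusion baseline sketch; the feature dimensions, the telemetry layout (latitude/longitude plus 3-axis accelerometer and gyroscope readings), and the GRU encoders are our own assumptions rather than the paper's exact architecture.

import torch
import torch.nn as nn

class LateFusionRiderClassifier(nn.Module):
    """Pool per-modality features (RGB video, gaze, telemetry) and concatenate
    them before a shared head over the 12 riding maneuvers (sketch only)."""

    def __init__(self, video_dim=768, gaze_dim=64, telemetry_dim=64, num_classes=12):
        super().__init__()
        self.gaze_enc = nn.GRU(2, gaze_dim, batch_first=True)       # (x, y) gaze points
        self.tel_enc = nn.GRU(8, telemetry_dim, batch_first=True)   # GPS + accel + gyro
        self.head = nn.Sequential(
            nn.Linear(video_dim + gaze_dim + telemetry_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, video_feat, gaze_seq, telemetry_seq):
        # video_feat: (B, video_dim) from a frozen CNN/Transformer video backbone;
        # gaze_seq: (B, Tg, 2); telemetry_seq: (B, Tt, 8).
        _, g = self.gaze_enc(gaze_seq)
        _, t = self.tel_enc(telemetry_seq)
        fused = torch.cat([video_feat, g[-1], t[-1]], dim=-1)
        return self.head(fused)

Dropping a modality (for example, passing zeros for telemetry) gives the single- and dual-modality baselines that the tables compare against.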

 

Citation

@inproceedings{motor2026icra,
author = {Varun Paturkar and Shankar Gangisetty and C. V. Jawahar},
title = {MOTOR: A Multimodal Dataset for Two-Wheeler Rider Behaviour Understanding},
booktitle = {},
series = {},
volume = {},
pages = {},
publisher = {},
year = {2026},
}

 

Acknowledgements

This work is supported by iHub-Data and Mobility at IIIT Hyderabad.

 

 

PedestrianQA: A Benchmark for Vision-Language Models on Pedestrian Intention and Trajectory Prediction

 

Naman Mishra, Shankar Gangisetty, C V Jawahar

IIIT Hyderabad

 

[Paper] [Code] [Dataset]

 


pedesIMG

 Fig: An illustration of an unstructured-traffic scenario where a pedestrian stands in the middle of the road in front of the ego-vehicle, attempting to cross the road. Unlike prior approaches that provide only predictions, our method predicts the intention and trajectory and generates supporting rationales.

Abstract

Pedestrian intention and trajectory prediction are critical for the safe deployment of autonomous driving systems, directly influencing navigation decisions in complex traffic environments. Recent advances in large vision–language models offer a powerful new paradigm for these tasks by combining high-capacity visual understanding with flexible natural language reasoning. In this work, we introduce PedestrianQA, a large-scale video-based dataset that formulates pedestrian intention and trajectory prediction as question–answering tasks augmented with structured rationales. PedestrianQA expresses richly annotated pedestrian sequences in natural language, enabling VLMs to learn from visual dynamics, contextual cues, and interactions among traffic agents while generating concise explanations of their predictions, without needing specialized architectures tailored for each task. Empirical evaluations across PIE, JAAD, TITAN, and IDD-PeD show that finetuning state-of-the-art VLMs on PedestrianQA significantly improves intention classification, trajectory forecasting accuracy, and the quality of explanatory rationales, demonstrating the strong potential of VLMs as a unified and explainable framework for safety-critical pedestrian behavior modeling. The dataset and model will be made publicly available.

 

 PedestrianQA Dataset

 

pedesIMG

 Fig: Data generation pipeline: We first aggregate all human-annotated ground-truth annotations from the constituent datasets into a unified metadata schema. We use generated VLM captions to enrich motion semantics, using carefully designed pedestrian-motion prompts that target fine-grained cues. These captions are validated for format and appended to the metadata. We then construct a single instruction package containing: (i) a system prompt, (ii) task definitions for PIP and PTP, (iii) step-by-step guidance for producing structured, fine-grained rationales, (iv) a small set of in-context exemplars, (v) a compliance checklist for high-quality rationale generation, and (vi) the sequence-level metadata tables. This package is provided to the claude-sonnet-4-20250514 LLM API to generate triplets of questions, answers, and rationales.
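A minimal sketch of this triplet-generation step, assuming the Anthropic Python SDK and illustrative prompt fields; the exact prompts, schema, and post-processing of the released pipeline are not reproduced here.

import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def generate_qa_triplets(system_prompt, task_defs, rationale_guide,
                         exemplars, checklist, metadata_table):
    # Assemble the single instruction package sent for each pedestrian sequence.
    package = "\n\n".join([
        task_defs,        # definitions of the PIP and PTP tasks
        rationale_guide,  # step-by-step guidance for structured rationales
        exemplars,        # a few in-context examples
        checklist,        # compliance checklist for rationale quality
        "Metadata:\n" + json.dumps(metadata_table, indent=2),
        "Return a JSON list of {question, answer, rationale} objects.",
    ])
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=2048,
        system=system_prompt,
        messages=[{"role": "user", "content": package}],
    )
    # Assumes the model complies and returns pure JSON.
    return json.loads(response.content[0].text)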

 

 Results

pedesIMG

 

pedesIMG

 

pedesIMG

 

 Table: Rationale evaluation on the combined dataset, with Claude-Sonnet-4. Average scores (0–100) for Spatial Reasoning (SR), Temporal Reasoning (TR), Mathematical Reasoning (MR), Ego-Vehicle Reasoning (EVR), Scene-Context Reasoning (SCR), Final Destination Prediction (FDP), and Conclusion (C). ✓ indicates finetuned models, ✗ zero-shot. Bold shows best score per column; underline marks the second-best. Dolphins generates only a brief conclusion and does not generate category-specific rationales.   

pedesIMG 

Citation

@inproceedings{pedqa2026icra,
author = {Naman Mishra and Shankar Gangisetty and C. V. Jawahar},
title = {PedestrianQA: A Benchmark for Vision-Language Models on Pedestrian Intention and Trajectory Prediction},
booktitle = {},
series = {},
volume = {},
pages = {},
publisher = {},
year = {2026},
}

 

Acknowledgements

This work is supported by iHub-Data and Mobility at IIIT Hyderabad.

 

 

More Articles …

  1. DriveSafe: A Framework for Risk Detection and Safety Suggestions in Driving Scenarios
  2. Distilling What and Why: Enhancing Driver Intention Prediction with MLLM
  3. TexTAR – Textual Attribute Recognition in Multi-domain and Multi-lingual Document Images
  4. Towards Scalable Sign Production: Leveraging Co-Articulated Gloss Dictionary for Fluid Sign Synthesis