Reading Between the Lanes: Text VideoQA on the Road

George Tom, Minesh Mathew, Sergi Garcia, Dimosthenis Karatzas, C.V. Jawahar

Center for Visual Information Technology (CVIT), IIIT Hyderabad
Computer Vision Center (CVC), UAB, Spain
AllRead Machine Learning Technologies

ICDAR, 2023

[Paper] [Dataset]


Abstract


Text and signs around roads provide crucial information for drivers and are vital for safe navigation and situational awareness. Scene text recognition in motion is a challenging problem, as textual cues typically appear only for a short time span and must be detected early, from a distance. Systems that exploit such information to assist the driver should not only extract and incorporate visual and textual cues from the video stream but also reason over time. To address this, we introduce RoadTextVQA, a new dataset for the task of video question answering (VideoQA) in the context of driver assistance. RoadTextVQA consists of 3,222 driving videos collected from multiple countries, annotated with 10,500 questions, all based on text or road signs present in the driving videos. We assess the performance of state-of-the-art video question answering models on RoadTextVQA, highlighting the significant room for improvement in this domain and the usefulness of the dataset in advancing research on in-vehicle support systems and text-aware multimodal question answering.

