
Deep Feature Embedding for Accurate Recognition and Retrieval of Handwritten Text


Abstract

We propose a deep convolutional feature representation that achieves superior performance for word spotting and recognition on handwritten images. We focus on: (i) enhancing the discriminative ability of the convolutional features using a reduced feature representation that can scale to large datasets, and (ii) enabling query-by-string by learning a common subspace for images and text using the embedded attribute framework. We present our results on popular datasets such as the IAM corpus and historical document collections from the Bentham and George Washington pages. On the challenging IAM dataset, we achieve a state-of-the-art mAP of 91.58% for word spotting using textual queries and a mean word error rate of 6.69% for the word recognition task.

 

Motivation

[Figure: motivation for deep feature embedding of handwritten words]


Major Contributions

  • Improving the discriminative ability of deep CNN features using HWNet [1], achieving state-of-the-art results for handwritten word spotting and recognition on the IAM, George Washington and Bentham pages.
  • Learning a reduced feature representation that can scale to large datasets.
  • Enabling query-by-string by learning a common subspace for images and text using the embedded attribute framework [2] (a sketch of this embedding follows).
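
The embedded attribute framework referred to above is that of Almazán et al. [2], where a word image and a text string are both mapped into a common attribute space built from a pyramidal histogram of characters (PHOC). Below is a minimal sketch of the textual side of that embedding, assuming a lowercase Latin alphabet and only two pyramid levels (the published representation also uses deeper levels, digits and bigrams):

```python
# Minimal PHOC-style attribute vector for a text string (after [2]).
# Assumptions: lowercase a-z alphabet, pyramid levels 1 and 2 only.
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def phoc(word, levels=(1, 2)):
    """Binary pyramidal histogram of characters for `word`."""
    word = word.lower()
    n = len(word)
    vec = []
    for level in levels:
        for region in range(level):
            lo, hi = region / level, (region + 1) / level
            present = set()
            for i, ch in enumerate(word):
                c_lo, c_hi = i / n, (i + 1) / n   # character span in [0, 1]
                overlap = min(hi, c_hi) - max(lo, c_lo)
                # a character belongs to a region if at least half of
                # its span falls inside it (the usual PHOC convention)
                if overlap / (c_hi - c_lo) >= 0.5:
                    present.add(ch)
            vec.extend(1 if ch in present else 0 for ch in ALPHABET)
    return vec

print(len(phoc("retrieval")))  # 26 * (1 + 2) = 78 attributes
```

A word image is regressed onto the same attribute space by the CNN, so a textual query and a word image can be compared directly, which is what enables query-by-string.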

Qualitative Results

[Figure: qualitative word spotting and recognition results]


Related Publications

  • Praveen Krishnan, Kartik Dutta and C. V. Jawahar - Deep Feature Embedding for Accurate Recognition and Retrieval of Handwritten Text, 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), Shenzhen, China, 2016. [PDF]


Codes

Link

 

Models

Pre-trained CNN Models

Pre-trained Attribute Models


References

[1] Praveen Krishnan and C. V. Jawahar, "Matching Handwritten Document Images", ECCV, 2016.

[2] J. Almazán, A. Gordo, A. Fornés, and E. Valveny, "Word Spotting and Recognition with Embedded Attributes", PAMI, 2014.

[3] Praveen Krishnan and C. V. Jawahar, "Generating Synthetic Data for Text Recognition", arXiv:1608.04224, 2016.

 

Associated People

  • Praveen Krishnan *
  • Kartik Dutta *
  • C.V. Jawahar

* Equal Contribution

Information Retrieval from Large Document Image Collections


Motivation

 


 

This work focuses on the challenging problem of information retrieval from large document image collections. We propose to develop algorithms and approaches that scale to large datasets, using and extending ideas from machine learning (ML), information retrieval (IR) and computer vision (CV). Our results are expected to impact the way retrieval is carried out over document images (documents that have textual content but are stored as images). Effective retrieval systems built over textual content (often crawled from the web) have changed the way we look at multimedia collections. Since we work on images, traditional IR solutions are not directly applicable. A popular approach is to recognize the images (e.g., with an OCR) and build a textual representation. However, recognizers can be brittle and produce noisy output in many practical settings (e.g., historic documents, handwritten documents, Indian language documents). We design representations that scale seamlessly to millions of document images starting from a small corpus of annotated data.


Word Image Retrieval using Bag of Visual Words

 


 

 

In this work, we present a Bag of Visual Words (BoVW) based approach to retrieve similar word images from a large database efficiently and accurately. We show that a text retrieval system can be adapted to build a word image retrieval solution, which is what makes the method scalable. We demonstrate the method on more than one million word images with sub-second retrieval time, validate it on four Indian languages, and report a mean average precision of more than 0.75. To compensate for the lack of spatial structure in the BoVW representation, we re-rank the retrieved list.
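
As an illustration of how the text-retrieval machinery carries over, the sketch below builds a visual vocabulary with k-means, turns each word image's local descriptors into a tf-idf weighted histogram, and ranks by cosine similarity. It assumes descriptors are already extracted per image; the vocabulary size and weighting are illustrative, not the paper's exact settings.

```python
# Sketch of BoVW word-image retrieval using a text-retrieval pipeline.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.metrics.pairwise import cosine_similarity

def build_index(descriptors_per_image, vocab_size=1000):
    """descriptors_per_image: list of (n_i, d) arrays of local descriptors."""
    kmeans = KMeans(n_clusters=vocab_size, n_init=4)
    kmeans.fit(np.vstack(descriptors_per_image))
    # histogram of visual words per image: the "document-term" matrix
    hists = np.array([np.bincount(kmeans.predict(d), minlength=vocab_size)
                      for d in descriptors_per_image])
    tfidf = TfidfTransformer().fit(hists)
    return kmeans, tfidf, tfidf.transform(hists)

def search(kmeans, tfidf, index, query_desc, top_k=10):
    hist = np.bincount(kmeans.predict(query_desc),
                       minlength=kmeans.n_clusters)
    q = tfidf.transform(hist.reshape(1, -1))
    scores = cosine_similarity(q, index).ravel()
    return np.argsort(-scores)[:top_k]   # ids of best-matching word images
```

In practice a sparse inverted index over the visual words, rather than the dense similarity above, is what keeps lookup sub-second at the million-image scale; the retrieved list is then re-ranked to restore spatial structure.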

 

Key Features

  • Language-independent system: demonstrated on five different languages.
  • Scalable to huge datasets: demonstrated on one million images.
  • Handles noisy document images: demonstrated on datasets for which commercial OCRs fail.

Related Publications

  • Ravi Sekhar and C. V. Jawahar - Word Image Retrieval Using Bag of Visual Words, Proceedings of the 10th IAPR International Workshop on Document Analysis Systems, 27-29 March 2012, pp. 297-301, Queensland, Australia. ISBN 978-1-4673-0868-7. [PDF] [Poster] [bibtex]

 


 

Content Level Access to Digital Library of India Pages

 

In this work, we propose a framework for content-level access to the scanned pages of the Digital Library of India (DLI). We propose a search scheme that fuses noisy OCR output with holistic visual features for content-level access to the DLI pages. Visual content is captured using the Bag of Visual Words (BoVW) approach. The fusion scheme improves over the individual methods in terms of mean average precision (mAP) and mean precision at 10 (mPrec@10). We exploit the fact that OCR has high precision while BoVW has high recall; a sketch of such a fusion follows.
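
The sketch below shows one simple form such a fusion could take: a convex combination of per-page scores from the two systems. The paper's exact fusion scheme may differ, and the weight `alpha` is purely illustrative.

```python
# Hedged sketch of fusing OCR-based and BoVW-based retrieval scores.
def fuse(ocr_scores, bovw_scores, alpha=0.6):
    """ocr_scores, bovw_scores: dicts mapping page id -> relevance score.

    A larger alpha leans on the high-precision OCR channel; the BoVW
    channel contributes the pages that noisy recognition misses."""
    pages = set(ocr_scores) | set(bovw_scores)
    fused = {p: alpha * ocr_scores.get(p, 0.0)
                + (1 - alpha) * bovw_scores.get(p, 0.0) for p in pages}
    return sorted(pages, key=fused.get, reverse=True)
```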

Digital Library of India

The Digital Library of India (DLI) has emerged as one of the largest collections of document images in Indian scripts. DLI, as a part of the Million Book Project (MBP), has contributed to free access to knowledge for billions of people. In addition, it has helped to digitally archive rare and precious books in many Indian languages. All these digital contents are stored as scanned images of printed documents. A major challenge presently faced by the DLI is the lack of content-level access to the individual pages.

 


 


Associated People

  • Mallikarjun B R
  • Visesh Chari
  • C. V. Jawahar
  • Akshay Asthana

 

Matching Handwritten Document Images


Abstract

We address the problem of predicting similarity between a pair of handwritten document images written by different individuals. This has applications in matching and mining image collections containing handwritten content. A similarity score is computed by detecting patterns of text re-use between document images, irrespective of minor variations in word morphology, word ordering, layout and paraphrasing of the content. Our method does not depend on an accurate segmentation of words and lines. We formulate the document matching problem as a structured comparison of the word distributions across two document images. To match two word images, we propose a convolutional neural network based feature descriptor. The performance of this representation surpasses the state of the art on handwritten word spotting. Finally, we demonstrate the applicability of our method on the practical problem of matching handwritten assignments.


Problem Statement

In this work, we compute a similarity score by detecting patterns of text re-use across documents written by different individuals, irrespective of minor variations in word forms, word ordering, layout or paraphrasing of the content. A simplified sketch of such a document-level similarity follows.
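
As a simplified illustration (not the paper's exact MODS formulation), the sketch below scores a pair of pages by matching each word descriptor on one page to its nearest neighbour on the other and averaging the matched cosine similarities; extracting the word descriptors (e.g., with the HWNet CNN) is assumed to happen elsewhere.

```python
# Simplified document-level similarity from word-image descriptors.
import numpy as np

def document_similarity(feats_a, feats_b):
    """feats_a: (n, d) word descriptors of page A; feats_b: (m, d) of page B."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    sims = a @ b.T                       # (n, m) cosine similarities
    # best match in B for every word of A, symmetrized with the reverse
    return 0.5 * (sims.max(axis=1).mean() + sims.max(axis=0).mean())
```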

 


 

 

Major Contributions

  • To address the lack of data for training on handwritten word images, we build a synthetic handwritten dataset of 9 million word images, referred to as the IIIT-HWS dataset.
  • We report a 56% error reduction in the word spotting task on the challenging IAM dataset and pages from the George Washington collection.
  • We also propose a normalized feature representation for word images that is invariant to the different inflectional endings and suffixes present in words.
  • We demonstrate two immediate applications: (i) searching handwritten text in instructional videos, and (ii) comparing handwritten assignments.

IIIT-HWS dataset

 


Generating synthetic images is the art of emulating the natural image-generation process as closely as possible. In this work, we exploit such a framework for data generation in the handwritten domain: we render synthetic data using open-source fonts and incorporate data augmentation schemes (a minimal rendering sketch is given below). As part of this work, we release a 9M synthetic handwritten word image corpus, which could be useful for training deep network architectures and advancing performance on handwritten word spotting and recognition tasks.
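
The snippet below is a minimal rendering sketch, assuming Pillow and a placeholder path to any open-source handwriting-style font; the shear jitter stands in for the fuller augmentation scheme used to build the corpus.

```python
# Sketch of rendering one synthetic handwritten word image.
import random
from PIL import Image, ImageDraw, ImageFont

def render_word(word, font_path="fonts/handwriting.ttf"):
    # font_path is a placeholder for any open-source handwriting font
    font = ImageFont.truetype(font_path, size=random.randint(32, 48))
    img = Image.new("L", (256, 64), color=255)
    ImageDraw.Draw(img).text((8, 8), word, font=font, fill=0)
    # mild shear to vary the writing slant (illustrative augmentation)
    shear = random.uniform(-0.3, 0.3)
    return img.transform(img.size, Image.AFFINE,
                         (1, shear, 0, 0, 1, 0), fillcolor=255)
```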

Download links:

Description             Download link   File size
Readme file             Readme          4.0 KB
IIIT-HWS image corpus   IIIT-HWS        32 GB
Ground truth files      GroundTruth     229 MB

 

 

Please cite the paper below if you use the dataset.

     

HWNet Architecture

[Figure: HWNet CNN architecture]

Measure of Document Similarity (MODS)

[Figure: MODS document matching pipeline]

Datasets and Codes

Please contact the author.


Related Publications

  • Praveen Krishnan and C. V. Jawahar - Matching Handwritten Document Images, The 14th European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 2016. [PDF]

     


Contact

  • Praveen Krishnan

     

Currency Recognition on Mobile Phones



Abstract

In this paper, we present an application for recognizing currency bills using computer vision techniques that can run on a low-end smartphone. The application runs entirely on the device, without the need for any remote server, and is intended for robust, practical use by the visually impaired. Though we use the paper bills of the Indian National Rupee as a working example, our method is generic and scalable to multiple domains, including those beyond currency bills. Our solution uses a visual Bag of Words (BoW) based method for recognition. To enable robust recognition in cluttered environments, we first segment the bill from the background using an algorithm based on iterative graph cuts; a sketch of this segmentation step is given below. We formulate the recognition problem as an instance retrieval task, an example of fine-grained instance retrieval that can run on mobile devices. We evaluate the performance on a set of images captured in diverse natural environments, and report an accuracy of 96.7% on 2584 images.
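
For the segmentation step, OpenCV's grabCut is a standard implementation of iterative graph cuts; the sketch below uses it with an illustrative rectangle initialization (the app derives its initialization differently):

```python
# Sketch of graph-cut bill segmentation with OpenCV's GrabCut.
import cv2
import numpy as np

def segment_bill(bgr):
    h, w = bgr.shape[:2]
    mask = np.zeros((h, w), np.uint8)
    bgd = np.zeros((1, 65), np.float64)   # background GMM state
    fgd = np.zeros((1, 65), np.float64)   # foreground GMM state
    # assume the bill roughly fills the central 90% of the frame
    rect = (int(0.05 * w), int(0.05 * h), int(0.9 * w), int(0.9 * h))
    cv2.grabCut(bgr, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    fg = ((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)).astype(np.uint8)
    return bgr * fg[:, :, None]           # background zeroed out
```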


Downloads and links

  • Paper [PDF]   Poster [PDF]
  • Code
  • Dataset for Indian National Rupee (1.2 GB)
  • Newspaper Coverage #1   Newspaper Coverage #2
  • Android App: SeeTheMoney v1.0

Using the App:

  1. Install the app using the APK.
  2. On first use, wait for some time while the app copies files.



Related Publications

  • Suriya Singh, Shushman Choudhury, Kumar Vishal and C. V. Jawahar - Currency Recognition on Mobile Phones, Proceedings of the 22nd International Conference on Pattern Recognition (ICPR), 24-28 August 2014, Stockholm, Sweden. [PDF]

Panoramic Stereo Videos Using a Single Camera


Abstract

We present a practical solution for generating 360° stereo panoramic videos using a single camera. Current approaches either use a moving camera that captures multiple images of a scene, which are then stitched together to form the final panorama, or use multiple synchronized cameras. A moving camera limits the solution to static scenes, while multi-camera solutions require dedicated calibrated setups. Our approach improves upon the existing solutions in two significant ways: it solves the problem using a single camera, thus minimizing the calibration problem and providing the ability to convert any digital camera into a panoramic stereo capture device; and it captures all the light rays required for stereo panoramas in a single frame using a compact, custom-designed mirror, thus making the design practical to manufacture and easier to use. We analyze several properties of the design and present panoramic stereo and depth estimation results.


Primary Challenges

  • To capture all the light rays corresponding to both eyes' views without causing blind spots or occlusions in the created panoramas.
  • To design an optical system that is not bulky, is easy to calibrate and use, and is simple to manufacture.
  • To capture 360° stereo panoramas using a single digital camera for an immersive human experience.
  • To perceive depth correctly from the generated stereo panoramas.

Major Contributions

  • We propose a custom-designed mirror surface, which we call the "coffee-filter mirror", for generating 360° stereo panoramas. Our optical system has the following advantages over other stereo panoramic devices: simplicity of data acquisition, ease of calibration and post-processing, and adaptability to various applications.
  • We have optimized the surface equations of the mirror and calibrated it to avoid visual mis-perceptions in 3D, such as virtual parallax or misalignments.
  • Our design is easy to manufacture, and its size can be scaled up or down according to the application; the resolution of the created panoramas improves with the sensor.
  • While designed with human viewing in mind, the stereo pairs can also be used for depth estimation (see the sketch after this list).
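
As a hedged sketch of that last point, the snippet below estimates disparity from a rectified stereo pair using OpenCV's semi-global matcher; the paper's own depth-estimation pipeline is not detailed here, so the parameters are illustrative.

```python
# Sketch: disparity from a rectified stereo panorama pair.
import cv2

def disparity(left_gray, right_gray):
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                 blockSize=5)
    # OpenCV returns fixed-point disparities scaled by 16
    return sgbm.compute(left_gray, right_gray).astype("float32") / 16.0
```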

Datasets

We used POV-Ray, a freely available ray tracer that accurately simulates imaging by tracing rays through a given scene. We used the 3D scene datasets listed below to demonstrate how the proposed mirror creates stereo panoramas.

The datasets used for the simulation can be downloaded from the following links:

  • Childhood
  • The Average office
  • Patio
  • Domino

Please mail us at {…, …}@research.iiit.ac.in for any queries.


Results

[Figures: red-cyan anaglyph panoramas of the Office, Patio and TRAVIESO scenes, obtained with the proposed setup on POV-Ray datasets]

360° stereo view of the Patio scene captured using the coffee-filter mirror. The scene can be viewed using any HMD. More videos will be added soon.

     

[Figure: comparison of the depth map reconstructed with the proposed setup (b) against the ground-truth depth map (a)]


     

Please visit PanoStereo for more videos and results.


Related Publications

  • Rajat Aggarwal*, Amrisha Vohra*, Anoop M. Namboodiri - Panoramic Stereo Videos Using a Single Camera, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 26 June - 1 July 2016. [PDF]


Associated People

  • Rajat Aggarwal
  • Amrisha Vohra
  • Anoop M. Namboodiri

* Equal Contribution

     
