
Canonical Saliency Maps: Decoding Deep Face Models


Thrupthi Ann John[1], Vineeth N Balasubramanian[2] and C.V. Jawahar[1]

IIIT Hyderabad[1] IIT Hyderabad[2]

[ Code ]   | [ Demo Video ]


Abstract

As Deep Neural Network (DNN) models for face processing tasks approach human-like performance, their deployment in critical applications such as law enforcement and access control has seen an upswing, where any failure may have far-reaching consequences. We need methods to build trust in deployed systems by making their workings as transparent as possible. Existing visualization algorithms are designed for object recognition and do not give insightful results when applied to the face domain. In this work, we present 'Canonical Saliency Maps', a new method which highlights relevant facial areas by projecting saliency maps onto a canonical face model. We present two kinds of Canonical Saliency Maps: image-level maps and model-level maps. Image-level maps highlight the facial features responsible for the decision made by a deep face model on a given image, thus helping to understand how the DNN arrived at its prediction. Model-level maps reveal what the entire DNN model focuses on for each task, and can thus be used to detect biases in the model. Our qualitative and quantitative results show the usefulness of the proposed canonical saliency maps, which can be used with any deep face model regardless of its architecture.
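The projection idea behind the two map types can be sketched as follows. This is an illustrative toy, not the authors' implementation: the grid size, landmark-based binning, and all names are assumptions. An image-level map re-bins saliency values from image coordinates onto a fixed canonical face grid using facial landmark correspondences; a model-level map aggregates image-level maps over many images.

```python
import numpy as np

CANON_SIZE = 64  # resolution of the canonical face grid (assumed)

def image_level_map(saliency, landmarks_img, landmarks_canon):
    """Project an image-space saliency map onto the canonical face.

    saliency        : (H, W) saliency map for one face image
    landmarks_img   : (N, 2) landmark positions (x, y) in image coordinates
    landmarks_canon : (N, 2) corresponding positions on the canonical face
    """
    canon = np.zeros((CANON_SIZE, CANON_SIZE))
    counts = np.zeros_like(canon)
    for (xi, yi), (xc, yc) in zip(landmarks_img, landmarks_canon):
        canon[int(yc), int(xc)] += saliency[int(yi), int(xi)]
        counts[int(yc), int(xc)] += 1
    return canon / np.maximum(counts, 1)  # average where landmarks coincide

def model_level_map(image_maps):
    """Aggregate image-level maps into one model-level map."""
    return np.mean(image_maps, axis=0)

# Toy example: one image whose saliency peaks at a single landmark.
sal = np.zeros((100, 100)); sal[50, 50] = 1.0
lm_img = np.array([[50, 50]]); lm_canon = np.array([[32, 32]])
m1 = image_level_map(sal, lm_img, lm_canon)
model_map = model_level_map([m1, m1])
```

Because every image is projected onto the same canonical grid, maps from different images (and different model architectures) become directly comparable.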


Demo


Related Publications

Canonical Saliency Maps: Decoding Deep Face Models
Thrupthi Ann John, Vineeth N Balasubramanian and C. V. Jawahar
Canonical Saliency Maps: Decoding Deep Face Models, IEEE Transactions on Biometrics, Behavior, and Identity Science, 2021, Volume 3, Issue 4. [ PDF ] , [ BibTeX ]

Contact

For any queries about the work, please contact the authors below:

  1. Thrupthi Ann John - thrupthi [dot] ann [at] research [dot] iiit [dot] ac [dot] in

3DHumans

High-Fidelity 3D Scans of People in Diverse Clothing Styles

 


 

About

The 3DHumans dataset provides around 180 meshes of people with diverse body shapes in a variety of garment styles and sizes. We cover a wide range of clothing, from loose robed garments such as the saree (a typical South-Asian dress) to relatively tight-fitting clothing such as shirts and trousers. Along with high-quality geometry (meshes) and texture maps, we also provide registered SMPL parameters. The faces of the subjects are blurred and smoothed to preserve privacy. You can watch the demo video here.

 

Quality

The dataset was collected using an Artec Eva handheld structured-light scanner. The scanner has a 3D point accuracy of up to 0.1 mm and a 3D resolution of 0.5 mm, enabling the capture of high-frequency geometric details along with high-resolution texture maps. The subjects were scanned in a studio environment with controlled lighting and uniform illumination.


Download Sample

Please click here to download a sample from the full dataset.

 

Request Full Dataset

To get access to the dataset, please fill in and sign the agreement document and send it via email to manager.rnd[AT]iiit.ac.in and asharma[AT]iiit.ac.in with the subject line "Requesting access to 3DHumans (IIITH) dataset". Once your request is accepted, you will receive an expiring, password-protected link from which you can download the dataset. If you find our dataset useful, please cite our technical paper as given below.

 

Technical Paper

The 3DHumans dataset was first introduced in our technical paper: SHARP: Shape-Aware Reconstruction of People in Loose Clothing (IJCV, 2022)

 

Citation

If you use our dataset, kindly cite the corresponding technical paper as follows:

@article{Jinka2022,
  doi = {10.1007/s11263-022-01736-z},
  url = {https://doi.org/10.1007/s11263-022-01736-z},
  year = {2022},
  month = dec,
  publisher = {Springer Science and Business Media {LLC}},
  author = {Sai Sagar Jinka and Astitva Srivastava and Chandradeep Pokhariya and Avinash Sharma and P. J. Narayanan},
  title = {SHARP: Shape-Aware Reconstruction of People in Loose Clothing},
  journal = {International Journal of Computer Vision}
}

Acknowledgements

Dataset collection was financially supported by a DST grant (DST/ICPS/IHDS/2018) and partially facilitated with manpower support from IHub, IIIT Hyderabad.

Classroom Slide Narration System


Jobin K.V., Ajoy Mondal, and C.V. Jawahar

IIIT Hyderabad       {jobin.kv@research., ajoy.mondal@, and jawahar@}iiit.ac.in

[ Code ]   | [ Demo Video ]   | [ Dataset ]


The architecture of the proposed CSSNet for classroom slide segmentation. The network consists of three modules: (i) an attention module (upper dotted region), (ii) a multi-scale feature extraction module (lower region), and (iii) a feature concatenation module.

Abstract

Slide presentations are an effective and efficient tool used by the teaching community for classroom communication. However, this teaching model can be challenging for blind and visually impaired (VI) students, who require personal human assistance to understand the presented slides. This shortcoming motivates us to design a Classroom Slide Narration System (CSNS) that generates audio descriptions corresponding to the slide content. We pose this problem as an image-to-markup language generation task. The initial step is to extract logical regions such as title, text, equation, figure, and table from the slide image. In classroom slide images, the logical regions are distributed according to their location within the image. To exploit the location of logical regions for slide image segmentation, we propose a new architecture, the Classroom Slide Segmentation Network (CSSN), whose unique attributes distinguish it from most other semantic segmentation networks. Publicly available benchmark datasets such as WiSe and SPaSe are used to validate the performance of our segmentation architecture, and we obtain a 9.54% improvement in segmentation accuracy on the WiSe dataset. We extract content (information) from the slide using four well-established modules: optical character recognition (OCR), figure classification, equation description, and table structure recognition. With this information, we build the Classroom Slide Narration System (CSNS) to help VI students understand slide content. Users gave better feedback on the quality of the output of the proposed CSNS than on that of existing systems such as Facebook's Automatic Alt-Text (AAT) and Tesseract.
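The three-module structure described above (attention, multi-scale feature extraction, feature concatenation) can be sketched in miniature. This is only an illustrative toy on a single-channel feature map: the pooling scales, the softmax attention form, and all function names are assumptions, whereas the real CSSNet is a trained deep network.

```python
import numpy as np

def attention_module(feat):
    """Spatial attention: re-weight features by a softmax over locations."""
    w = np.exp(feat - feat.max())
    return feat * (w / w.sum())

def multi_scale_module(feat, scales=(1, 2, 4)):
    """Average-pool the map at several scales, then upsample back."""
    h, w = feat.shape
    outs = []
    for s in scales:
        pooled = feat.reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        outs.append(np.kron(pooled, np.ones((s, s))))  # nearest-neighbor upsample
    return outs

def concat_module(att, pyramid):
    """Stack the attention output and multi-scale maps along a channel axis."""
    return np.stack([att] + pyramid, axis=0)

feat = np.random.rand(8, 8)                 # stand-in for a backbone feature map
fused = concat_module(attention_module(feat), multi_scale_module(feat))
# fused: 1 (attention) + 3 (scales) channels, each 8x8
```

The multi-scale branch is what lets the network use the position of logical regions: coarse pooled channels encode roughly where on the slide a region sits, while the attention channel keeps fine detail.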

Paper

  • Paper
    Classroom Slide Narration System

    Jobin K.V., Ajoy Mondal, and Jawahar C.V.
Classroom Slide Narration System, CVIP, 2021.
    [PDF ] | [BibTeX]

    Updated Soon

 

Contact

  1. Jobin K.V.
  2. Ajoy Mondal
  3. Jawahar C.V.

Handwritten Text Retrieval from Unlabeled Collections


Santhoshini Gongidi and C.V. Jawahar

[ Paper ]   | [ Demo ]

Abstract

Handwritten documents from communities like cultural heritage, judiciary, and modern journals remain largely unexplored even today. To a great extent, this is due to the lack of retrieval tools for such unlabeled document collections. In this work, we consider such collections and present a simple, robust retrieval framework for easy information access. We achieve retrieval on unlabeled novel collections through invariant features learnt for handwritten text. These feature representations enable zero-shot retrieval for novel queries on unexplored collections. We improve the framework further by supporting search via text and exemplar queries. Four new collections written in English, Malayalam, and Bengali are used to evaluate our text retrieval framework. These collections comprise 2957 handwritten pages and over 300K words. We report promising results on these collections, despite the zero-shot constraint and huge collection size. Our framework allows the addition of new collections without any need for specific finetuning or labeling. Finally, we also present a demonstration of the retrieval framework.
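The retrieval step described above can be sketched as embedding-based nearest-neighbor search: word images and queries are mapped into a shared feature space, and collection items are ranked by cosine similarity to the query embedding. The embeddings below are random stand-ins; in the actual framework they come from a network trained to be invariant across handwriting styles, which is what enables zero-shot retrieval on unseen collections.

```python
import numpy as np

def cosine_rank(query_emb, collection_embs):
    """Return collection indices sorted by cosine similarity to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    c = collection_embs / np.linalg.norm(collection_embs, axis=1, keepdims=True)
    sims = c @ q
    return np.argsort(-sims)  # best match first

rng = np.random.default_rng(0)
collection = rng.normal(size=(300, 128))              # embeddings of 300 word images
query = collection[42] + 0.01 * rng.normal(size=128)  # near-duplicate exemplar query
ranking = cosine_rank(query, collection)
```

Because only embeddings are compared, new collections can be indexed and searched without any collection-specific fine-tuning or labeling, exactly as the framework requires.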


Demo link: HW-Search


Teaser Video:


Related Publications

Santhoshini Gongidi, C V Jawahar, Handwritten Text Retrieval from Unlabeled Collections, CVIP 2021

Contact

For any queries about the work, please contact the authors below:

  1. Santhoshini Gongidi

Audio-Visual Speech Super-Resolution


Rudrabha Mukhopadhyay*, Sindhu B Hegde* , Vinay Namboodiri and C.V. Jawahar

IIIT Hyderabad       Univ. of Bath

BMVC, 2021 (Oral)

[ Code ]   | [ Demo Video ]


We present an audio-visual model for super-resolving very low-resolution speech inputs (e.g., 1 kHz) at large scale factors. In contrast to existing audio-only speech super-resolution approaches, our method benefits from the visual stream: either the real visual stream (if available) or a visual stream generated by our pseudo-visual network.

Abstract

In this paper, we present an audio-visual model to perform speech super-resolution at large scale factors (8x and 16x). Previous works attempted to solve this problem using only the audio modality as input and were thus limited to low scale factors of 2x and 4x. In contrast, we propose to incorporate both visual and auditory signals to super-resolve speech with sampling rates as low as 1 kHz. In such challenging situations, the visual features assist in learning the content and improve the quality of the generated speech. Further, we demonstrate the applicability of our approach to arbitrary speech signals where the visual stream is not accessible: our "pseudo-visual network" synthesizes the visual stream solely from the low-resolution speech input. Extensive experiments and the demo video illustrate our method's remarkable results and its benefits over state-of-the-art audio-only speech super-resolution approaches.
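To make the scale factors concrete: super-resolving 1 kHz speech to a standard 16 kHz rate is a 16x task. The snippet below only illustrates this sampling-rate arithmetic with the trivial linear-interpolation baseline that learned models are typically compared against; it is not the audio-visual method itself, and the rates and names are assumptions.

```python
import numpy as np

LOW_SR, HIGH_SR = 1_000, 16_000
scale = HIGH_SR // LOW_SR        # 16x, the paper's hardest setting

def linear_upsample(x, factor):
    """Naive baseline: linear interpolation up to the target sampling rate."""
    n = len(x)
    t_low = np.arange(n)
    t_high = np.arange(n * factor) / factor
    return np.interp(t_high, t_low, x)

# One second of a 100 Hz tone sampled at 1 kHz, upsampled to 16 kHz.
low = np.sin(2 * np.pi * 100 * np.arange(LOW_SR) / LOW_SR)
high = linear_upsample(low, scale)
```

Interpolation can only smooth the existing samples; the high-frequency content above the low input's Nyquist limit (500 Hz here) is genuinely missing, which is why the model must hallucinate it, aided by the visual stream.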

Paper

  • Paper
    Audio-Visual Speech Super-Resolution

    Rudrabha Mukhopadhyay*, Sindhu B Hegde*, Vinay Namboodiri and C.V. Jawahar
    Audio-Visual Speech Super-Resolution, BMVC, 2021 (Oral).
    [PDF ] | [BibTeX]

    Updated Soon

Demo

--- COMING SOON ---


Contact

  1. Rudrabha Mukhopadhyay
  2. Sindhu Hegde

More Articles …

  1. MeronymNet: A Hierarchical Model for Unified and Controllable Multi-Category Object Generation
  2. Automated Tree Generation Using Grammar & Particle System
  3. Towards Boosting the Accuracy of Non-Latin Scene Text Recognition
  4. Transfer Learning for Scene Text Recognition in Indian Languages