Topological Mapping for Manhattan-like Repetitive Environments


Abstract

We showcase a topological mapping framework for a challenging indoor warehouse setting. At the most abstract level, the warehouse is represented as a Topological Graph where the nodes of the graph represent a particular warehouse topological construct (e.g. rackspace, corridor) and the edges denote the existence of a path between two neighbouring nodes or topologies. At the intermediate level, the map is represented as a Manhattan Graph where the nodes and edges are characterized by Manhattan properties, and as a Pose Graph at the lower-most level of detail. The topological constructs are learned via a Deep Convolutional Network while the relational properties between topological instances are learnt via a Siamese-style Neural Network. In the paper, we show that maintaining abstractions such as the Topological Graph and Manhattan Graph helps in recovering an accurate Pose Graph starting from a highly erroneous and unoptimized Pose Graph. We show how this is achieved by embedding topological and Manhattan relations as well as Manhattan Graph aided loop closure relations as constraints in the backend Pose Graph optimization framework. The recovery of a near ground-truth Pose Graph on real-world indoor warehouse scenes vindicates the efficacy of the proposed framework.

 

Introduction

IMG
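The backend idea described in the abstract, snapping relative orientations to the nearest Manhattan direction and using the snapped values as constraints in pose-graph optimization, can be sketched as follows. This is a minimal heading-only illustration in NumPy; the function names and the simple gradient-descent solver are our own simplifications for exposition, not the paper's actual implementation.

```python
import numpy as np

def snap_to_manhattan(theta):
    """Snap a relative rotation (radians) to the nearest multiple of 90 deg."""
    return np.round(theta / (np.pi / 2)) * (np.pi / 2)

def optimize_headings(initial_headings, manhattan_edges, iters=50, lr=0.1):
    """Toy optimization: refine absolute headings so that the relative
    heading of each Manhattan-constrained edge matches its snapped value.
    initial_headings: unoptimized absolute heading per node (from odometry).
    manhattan_edges: list of (i, j, measured_relative_heading)."""
    theta = np.array(initial_headings, dtype=float)
    for _ in range(iters):
        grad = np.zeros_like(theta)
        for i, j, rel in manhattan_edges:
            target = snap_to_manhattan(rel)      # Manhattan constraint
            err = (theta[j] - theta[i]) - target
            grad[j] += err
            grad[i] -= err
        theta -= lr * grad
    return theta
```

In a full pipeline these snapped relative rotations would enter the pose-graph optimizer as additional edge constraints alongside odometry and loop closures.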

Qualitative Results:

1) RTABMAP SLAM
IMG
Fig. a shows the registered map generated by RTABMAP SLAM. Fig. b shows the RTABMAP trajectory with topological labels. Fig. c compares the RTABMAP trajectory with the ground-truth trajectory. Fig. d compares the trajectory generated using our topological SLAM pipeline with the ground truth.
2) RTABMAP as Visual Odometry pipeline
IMG
Fig. a shows the trajectory obtained using RTABMAP with loop closure turned off; wheel odometry is used as the odometry source. Fig. b compares the RTABMAP trajectory with the ground truth. Fig. c compares the trajectory obtained using our topological SLAM pipeline with the ground truth.

Code:

Our pipeline consists of three parts; each sub-folder in this repo contains the code for one of them:

  • Topological categorization using a convolutional neural network classifier -> Topological Classifier
  • Predicting loop closure constraints using Multi-Layer Perceptron -> Instance Comparator
  • Graph construction and pose graph optimization using obtained Manhattan and Loop Closure Constraints -> Pose Graph Optimizer

How to use each part is explained in the corresponding sub-folder.
Please see the GitHub Project Page.
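As a rough illustration of the Instance Comparator's role, a Siamese-style comparison applies the same learned transform to both inputs and thresholds a similarity score to decide whether two topological instances match (a loop-closure candidate). The sketch below uses a toy shared projection and cosine similarity; all names, shapes, and the threshold are hypothetical and not the repository's actual API.

```python
import numpy as np

def embed(descriptor, W):
    """Shared projection: the 'Siamese' part is that both inputs
    pass through the same weights W."""
    return np.tanh(W @ descriptor)

def same_instance(desc_a, desc_b, W, threshold=0.9):
    """Hypothetical comparator: cosine similarity of shared embeddings.
    Returns True when the two instances likely match."""
    ea, eb = embed(desc_a, W), embed(desc_b, W)
    cos = ea @ eb / (np.linalg.norm(ea) * np.linalg.norm(eb) + 1e-9)
    return cos > threshold
```

Pairs accepted by such a comparator would then feed loop-closure constraints into the pose-graph optimizer in the third stage.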


Bibtex

If you find our work useful in your research, please consider citing:

@inproceedings{puligilla2020topo,
author = {Puligilla, Sai Shubodh and Tourani, Satyajit and Vaidya, Tushar and Singh Parihar, Udit and Sarvadevabhatla, Ravi Kiran and Krishna, Madhava},
title = {Topological Mapping for Manhattan-Like Repetitive Environments},
booktitle = {ICRA},
year = {2020},
}

A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild


Prajwal Renukanand*   Rudrabha Mukhopadhyay*   Vinay Namboodiri   C.V. Jawahar

IIIT Hyderabad       Univ. of Bath

[Code]   [Interactive Demo]   [Demo Video]   [ReSyncED]

IMG

We propose a novel approach that achieves significantly more accurate lip-synchronization (A) in dynamic, unconstrained talking face videos. In contrast, the corresponding lip shapes generated by the current best model (B) are out of sync with the spoken utterances (shown at the bottom).

Abstract

In this work, we investigate the problem of lip-syncing a talking face video of an arbitrary identity to match a target speech segment. Current works excel at producing accurate lip movements on a static image or on videos of specific people seen during the training phase. However, they fail to accurately morph the actual lip movements of arbitrary identities in dynamic, unconstrained talking face videos, resulting in significant parts of the video being out-of-sync with the newly chosen audio. We identify key reasons pertaining to this and hence resolve them by learning from a powerful lip-sync discriminator. Next, we propose new, rigorous evaluation benchmarks and metrics to specifically measure the accuracy of lip synchronization in unconstrained videos. Extensive quantitative and human evaluations on our challenging benchmarks show that the lip-sync accuracy of the videos generated using our Wav2Lip model is almost as good as real synced videos. We clearly demonstrate the substantial impact of our Wav2Lip model in our publicly available demo video. We also open-source our code, models, and evaluation benchmarks to promote future research efforts in this space.


Paper

  • Paper
    A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild

    Prajwal Renukanand*, Rudrabha Mukhopadhyay*, Vinay Namboodiri and C.V. Jawahar
    A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild, ACM Multimedia, 2020.
    [PDF]

    @misc{prajwal2020lip,
    title={A Lip Sync Expert Is All You Need for Speech to Lip Generation In The Wild},
    author={K R Prajwal and Rudrabha Mukhopadhyay and Vinay Namboodiri and C V Jawahar},
    year={2020},
    eprint={2008.10010},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
    }

Live Demo

Please click here for the live demo: https://www.youtube.com/embed/0fXaDCZNOJc


Architecture

IMG

Architecture for generating accurate lip movements from speech

Our approach generates accurate lip-sync by learning from an "already well-trained lip-sync expert". Unlike previous works that employ only a reconstruction loss or train a discriminator in a GAN setup, we use a pre-trained discriminator that is already quite accurate at detecting lip-sync errors. We show that fine-tuning it further on the noisy generated faces hampers the discriminator's ability to measure lip-sync, thus also affecting the generated lip shapes.
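The training signal from such a frozen expert can be sketched as follows: the expert maps the generated face window and the target audio to embeddings whose cosine similarity yields a sync probability, and the generator is penalised with a binary cross-entropy toward the "in sync" label. This NumPy toy illustrates the idea only; the real SyncNet-style expert, its architecture, and its tensor shapes are not reproduced here.

```python
import numpy as np

def sync_probability(video_emb, audio_emb):
    """Cosine similarity mapped to (0, 1): probability that the audio
    and the lip-movement window are in sync (illustrative shapes)."""
    cos = video_emb @ audio_emb / (
        np.linalg.norm(video_emb) * np.linalg.norm(audio_emb) + 1e-9)
    return (cos + 1.0) / 2.0

def expert_sync_loss(video_embs, audio_embs):
    """Binary cross-entropy against the 'in sync' label for each
    generated-window/audio pair; the expert's weights stay frozen,
    so only the generator receives this gradient."""
    probs = np.array([sync_probability(v, a)
                      for v, a in zip(video_embs, audio_embs)])
    return float(-np.mean(np.log(probs + 1e-9)))
```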


Ethical Use

To ensure fair use, we strongly require that any result created using our algorithm must unambiguously present itself as synthetic and state that it was generated using the Wav2Lip model. In addition to the strong positive applications of this work, our intention in completely open-sourcing it is to simultaneously encourage efforts in detecting manipulated video content and its misuse. We believe that Wav2Lip can enable several positive applications and also encourage productive discussions and research efforts regarding the fair use of synthetic content.


Contact

  1. Prajwal K R
  2. Rudrabha Mukhopadhyay

Learning Individual Speaking Styles for Accurate Lip to Speech Synthesis


Prajwal Renukanand*   Rudrabha Mukhopadhyay*   Vinay Namboodiri   C.V. Jawahar

IIIT Hyderabad       IIT Kanpur

CVPR 2020

[Code]   [Data]

Please click here to watch our video on YouTube.

IMG

In this work, we propose a sequence-to-sequence architecture for accurate speech generation from silent lip videos in unconstrained settings for the first time. The text in the bubble is manually transcribed and is shown for presentation purposes.

Abstract

Humans involuntarily tend to infer parts of the conversation from lip movements when the speech is absent or corrupted by external noise. In this work, we explore the task of lip to speech synthesis, i.e., learning to generate natural speech given only the lip movements of a speaker. Acknowledging the importance of contextual and speaker-specific cues for accurate lip reading, we take a different path from existing works. We focus on learning accurate lip sequences to speech mappings for individual speakers in unconstrained, large vocabulary settings. To this end, we collect and release a large-scale benchmark dataset, the first of its kind, specifically to train and evaluate the single-speaker lip to speech task in natural settings. We propose an approach to achieve accurate, natural lip to speech synthesis in such unconstrained scenarios for the first time. Extensive evaluation using quantitative, qualitative metrics and human evaluation shows that our method is almost twice as intelligible as previous works in this space.


Paper

  • Paper
    Learning Individual Speaking Styles for Accurate Lip to Speech Synthesis

    Prajwal Renukanand*, Rudrabha Mukhopadhyay*, Vinay Namboodiri and C.V. Jawahar
    Learning Individual Speaking Styles for Accurate Lip to Speech Synthesis, CVPR, 2020.
    [PDF]

    @InProceedings{Prajwal_2020_CVPR,
    author = {Prajwal, K R and Mukhopadhyay, Rudrabha and Namboodiri, Vinay P. and Jawahar, C.V.},
    title = {Learning Individual Speaking Styles for Accurate Lip to Speech Synthesis},
    booktitle = {The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2020}
    }

Live Demo

Learning Individual Speaking Styles for Accurate Lip to Speech Synthesis (CVPR, 2020): Please click here to watch on YouTube.


Dataset

IMG

Our dataset contains lectures and chess commentary as of now.

We introduce a new benchmark dataset for unconstrained lip to speech synthesis that is tailored towards exploring the following line of thought: How accurately can we infer an individual's speech style and content from his/her lip movements? To create this dataset, we collect a total of about 175 hours of talking face videos across 6 speakers. Our dataset is far more unconstrained and natural than older datasets like the GRID corpus and the TIMIT dataset. All the corpora are compared in the table given below.

IMG

Comparison of our dataset with other datasets that have been used earlier for video-to-speech generation

To access the dataset, please click this link or the link given near the top of the page. We release the YouTube IDs of the videos used. In case the videos are no longer available on YouTube, please contact us for an alternate link.

Architecture

IMG

Architecture for generating speech from lip movements

Our network consists of a spatio-temporal encoder and an attention-based decoder. The spatio-temporal encoder takes T frames as input and passes them through a 3D CNN. We feed the output of the 3D CNN encoder to an attention-based speech decoder that generates mel-spectrograms following the seq-to-seq paradigm. For more information about our model and the different design choices we make, please go through our paper.
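The attention step of such a decoder can be sketched as follows: at each output step, the decoder state scores the T encoder outputs, and a softmax over the scores yields a context vector used to predict the next mel-spectrogram frame. This is a minimal NumPy sketch with illustrative shapes, not our actual network code.

```python
import numpy as np

def attention_step(decoder_state, encoder_outputs):
    """One step of dot-product attention over T encoder outputs.
    decoder_state: (D,) current decoder hidden state.
    encoder_outputs: (T, D) outputs of the spatio-temporal encoder.
    Returns the context vector (D,) and the attention weights (T,)."""
    scores = encoder_outputs @ decoder_state      # (T,) alignment scores
    weights = np.exp(scores - scores.max())       # numerically stable softmax
    weights /= weights.sum()
    context = weights @ encoder_outputs           # (D,) weighted sum
    return context, weights
```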

Contact

  1. Prajwal K R
  2. Rudrabha Mukhopadhyay

Fused Text Recogniser and Deep Embeddings Improve Word Recognition and Retrieval


Siddhant Bansal   Praveen Krishnan   C.V.Jawahar  

DAS 2020

IMG

IMG

Recognition and retrieval of textual content from large document collections has been a powerful use case for the document image analysis community. Often the word is the basic unit for recognition as well as retrieval. Systems that rely only on the text recogniser's (OCR) output are not robust enough in many situations, especially when the word recognition rates are poor, as in the case of historic documents or digital libraries. An alternative has been word-spotting based methods that retrieve/match words based on a holistic representation of the word. In this paper, we fuse the noisy output of a text recogniser with a deep embedding representation derived from the entire word. We use average and max fusion for improving the ranked results in the case of retrieval. We validate our methods on a collection of Hindi documents. We improve the word recognition rate by 1.4% and retrieval by 11.13% in mAP.
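The fusion step can be illustrated as follows: a text-side score from the recogniser (here a stand-in, 1 minus normalized edit distance) is combined with the deep embedding similarity by average or max fusion before re-ranking the retrieval results. This is a sketch only; the scoring functions are illustrative, not the paper's exact formulation.

```python
import numpy as np

def normalized_text_score(query, word):
    """Hypothetical recogniser-side score: 1 - normalized edit distance
    between the query string and the OCR output for a word image."""
    m, n = len(query), len(word)
    d = np.zeros((m + 1, n + 1))
    d[:, 0] = np.arange(m + 1)
    d[0, :] = np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i, j] = min(d[i - 1, j] + 1,          # deletion
                          d[i, j - 1] + 1,          # insertion
                          d[i - 1, j - 1] + (query[i - 1] != word[j - 1]))
    return 1.0 - d[m, n] / max(m, n, 1)

def fuse(text_score, emb_score, mode="average"):
    """Average or max fusion of the recogniser score and the deep
    embedding similarity, used to re-rank retrieved words."""
    if mode == "average":
        return (text_score + emb_score) / 2.0
    return max(text_score, emb_score)
```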


Paper

  • Paper
    Fused Text Recogniser and Deep Embeddings Improve Word Recognition and Retrieval

    Siddhant Bansal, Praveen Krishnan and C.V. Jawahar
    DAS, 2020



    [Paper]       [Code]       [Demo]      


    Word Recognition in a nutshell

    IMG

    Word Retrieval in a nutshell

    IMG

    Results

    Word Recognition

    IMG

    Word Retrieval

    IMG

IndicSpeech: Text-to-Speech Corpus for Indian Languages

 

  [Dataset]
IMG

Word clouds of the collected corpus for 3 languages

Abstract

India is a country where several tens of languages are spoken by over a billion-strong population. Text-to-speech systems for such languages will thus be extremely beneficial for wide-spread content creation and accessibility. Despite this, the current TTS systems for even the most popular Indian languages fall short of the contemporary state-of-the-art systems for English, Chinese, etc. We believe that one of the major reasons for this is the lack of large, publicly available text-to-speech corpora in these languages that are suitable for training neural text-to-speech systems. To mitigate this, we release a large text-to-speech corpus for 3 major Indian languages, namely Hindi, Malayalam and Bengali. In this work, we also train a state-of-the-art TTS system for each of these languages and report their performances.


Paper

  • IndicSpeech: Text-to-Speech Corpus for Indian Languages

    Nimisha Srivastava, Rudrabha Mukhopadhyay*, Prajwal K R*, C.V. Jawahar
    IndicSpeech: Text-to-Speech Corpus for Indian Languages, LREC, 2020.
    [PDF]

    @inproceedings{srivastava-etal-2020-indicspeech,
    title = "{I}ndic{S}peech: Text-to-Speech Corpus for {I}ndian Languages",
    author = "Srivastava, Nimisha and
    Mukhopadhyay, Rudrabha and
    K R, Prajwal and
    Jawahar, C V",
    booktitle = "Proceedings of The 12th Language Resources and Evaluation Conference",
    month = may,
    year = "2020",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://www.aclweb.org/anthology/2020.lrec-1.789",
    pages = "6417--6422",
    abstract = "India is a country where several tens of languages are spoken by over a billion strong population. Text-to-speech systems for such languages will thus be extremely beneficial for wide-spread content creation and accessibility. Despite this, the current TTS systems for even the most popular Indian languages fall short of the contemporary state-of-the-art systems for English, Chinese, etc. We believe that one of the major reasons for this is the lack of large, publicly available text-to-speech corpora in these languages that are suitable for training neural text-to-speech systems. To mitigate this, we release a 24 hour text-to-speech corpus for 3 major Indian languages namely Hindi, Malayalam and Bengali. In this work, we also train a state-of-the-art TTS system for each of these languages and report their performances. The collected corpus, code, and trained models are made publicly available.",
    language = "English",
    ISBN = "979-10-95546-34-4",
    }

Live Demo

Please click here for the demo: https://bhaasha.iiit.ac.in/indic-tts/


Contact

  1. Prajwal K R
  2. Rudrabha Mukhopadhyay