
Optimization for and by Machine Learning


Pritish Mohapatra

Abstract

In machine learning, tasks such as making predictions with a model and learning model parameters can often be formulated as optimization problems. The feasibility of using a machine learning model depends on the efficiency with which the corresponding optimization problems can be solved. As such, the area of machine learning throws up many challenges and interesting problems for research in the field of optimization. While in some cases it is possible to directly apply off-the-shelf optimization methods to problems in machine learning, in many other cases it becomes necessary to develop optimization algorithms that are tailor-made for specific problems. On the other hand, developing optimization algorithms for specific problem domains can itself be helped by machine learning techniques. Learning optimization algorithms from data can relieve the tedious effort required to develop optimization methods for new problem domains. The challenge here is to appropriately parameterize the space of algorithms for different optimization problems. In this context, we explore the interplay between the areas of optimization and machine learning and make contributions to specific problems of interest that lie in the overlap of these fields.
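As a minimal illustration of the "optimization for machine learning" direction discussed in the abstract, the Python sketch below fits a linear model by gradient descent on a squared loss. It is a generic textbook example with assumed data and step size, not one of the tailor-made algorithms contributed by the thesis.

    import numpy as np

    # Hypothetical data: learning w amounts to minimizing a squared loss,
    # i.e. an unconstrained optimization problem over the model parameters.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))          # features
    w_true = np.array([1.5, -2.0, 0.5])
    y = X @ w_true + 0.1 * rng.normal(size=100)

    w = np.zeros(3)                        # parameters to learn
    lr = 0.05                              # step size (assumed)
    for _ in range(500):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad                            # gradient-descent update

    print("estimated parameters:", np.round(w, 2))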

 

Year of completion:  December 2021
 Advisor: C. V. Jawahar

Related Publications

  • Pritish Mohapatra, C. V. Jawahar and M. Pawan Kumar - Learning to Round for Discrete Labeling Problems, Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), 09-11 April 2018, Playa Blanca, Lanzarote. [PDF]

  • Pritish Mohapatra, Michal Rolínek, C. V. Jawahar, Vladimir Kolmogorov and M. Pawan Kumar - Efficient Optimization for Rank-based Loss Functions, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 18-22 June 2018, Salt Lake City, Utah. [PDF]

  • Pritish Mohapatra, Puneet Kumar Dokania, C. V. Jawahar and M. Pawan Kumar - Partial Linearization based Optimization for Multi-class SVM, Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 2016. [PDF]

  • Aseem Behl, Pritish Mohapatra, C. V. Jawahar and M. Pawan Kumar - Optimizing Average Precision using Weakly Supervised Data, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2015. [PDF]

  • Mohak Sukhwani, Suriya Singh, Anirudh Goyal, Aseem Behl, Pritish Mohapatra, Brijendra Kumar Bharti and C. V. Jawahar - Monocular Vision based Road Marking Recognition for Driver Assistance and Safety, Proceedings of the IEEE Conference on Vehicular Electronics and Safety, 16-17 Dec 2014, Hyderabad, India. [PDF]

  • Pritish Mohapatra, C. V. Jawahar and M. Pawan Kumar - Efficient Optimization for Average Precision SVM, Proceedings of Advances in Neural Information Processing Systems (NIPS), 08-13 Dec 2014, Quebec, Canada. [PDF]


Downloads

thesis

 ppt

Learning Representations for Word Images


Praveen Krishnan

Abstract

Reading and writing documents is one of the primary skills with which we gather and communicate information. With the emergence of artificial intelligence (AI), researchers are in constant pursuit of intelligent algorithms that can bring our physical and digital worlds closer to each other. One such important domain is document image analysis, where we delve into the problem of understanding content from scanned document image collections. Considering “words” as the basic unit in understanding a document, in this thesis we address the problem of finding the best possible representation for word images. Representation learning is a key line of investigation for any AI problem. The primary goal of this thesis is to learn efficient representations for word images that encode their content. An ideal representation should be invariant to multiple fonts and handwriting styles, and less sensitive to noise and distortions. In the past, representations have been handcrafted, specific to modalities (printed, handwritten), and sensitive to the complexities of handwriting in multi-writer scenarios. In this work, we choose the paradigm of learning from data using deep neural networks, taking inspiration from the fact that, given large amounts of annotated data, modern deep neural networks can inherently learn better representations. We also relax the need for large annotated datasets by heavily capitalizing on synthetically generated images, and we introduce the novel problem of learning a semantic representation for word images, which encodes the semantics of the word and reduces the vocabulary gap that exists between the query and the retrieved results.

The first contribution of this thesis is a simple technique to generate large amounts of synthetic data, useful for pre-training deep neural networks. This led to the creation of the IIIT-HWS dataset, which is now widely used in the document community. The other major contributions are: (a) the design of a deep convolutional architecture (named HWNet) for learning an efficient holistic representation for word images, (b) a joint embedding scheme to project word images and textual strings onto a common subspace, and (c) a novel form of word image representation which respects the word form along with its semantic meaning. The learned representations are evaluated on the tasks of word spotting and word recognition. We report state-of-the-art performance on popular datasets of modern and historical, handwritten and printed document images, while keeping the representation compact. Finally, to validate the proposed representations, we present some interesting use cases such as (i) finding the similarity between a pair of handwritten document images, (ii) searching for keywords in online lecture videos, and (iii) building a word retrieval system for Indic scripts.
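To make the word-spotting setting concrete, the Python sketch below ranks word images by cosine similarity between embeddings. The embeddings and their dimensions are random placeholders standing in for the output of a learned representation such as HWNet; the function name cosine_retrieve is an illustrative assumption, not code from the thesis.

    import numpy as np

    def cosine_retrieve(query_emb, gallery_embs, top_k=5):
        """Rank gallery word images by cosine similarity to the query embedding."""
        q = query_emb / np.linalg.norm(query_emb)
        g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
        scores = g @ q
        order = np.argsort(-scores)[:top_k]
        return order, scores[order]

    # Placeholder embeddings: in practice these would come from a learned
    # representation over word images rather than random vectors.
    rng = np.random.default_rng(1)
    gallery = rng.normal(size=(1000, 128))              # 1000 word images, 128-d embeddings
    query = gallery[42] + 0.05 * rng.normal(size=128)   # a noisy copy of item 42

    idx, sims = cosine_retrieve(query, gallery)
    print("top matches:", idx, "similarities:", np.round(sims, 3))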

 

Year of completion:  November 2020
 Advisor: C. V. Jawahar

Related Publications

  • Siddhant Bansal, Praveen Krishnan and C. V. Jawahar - Improving Word Recognition using Multiple Hypotheses and Deep Embeddings, The 25th International Conference on Pattern Recognition (ICPR 2021), Milano. [PDF]

  • Kartik Dutta, Praveen Krishnan, Minesh Mathew and C. V. Jawahar - Improving CNN-RNN Hybrid Networks for Handwriting Recognition, The 16th International Conference on Frontiers in Handwriting Recognition, Niagara Falls, USA. [PDF]

  • Kartik Dutta, Praveen Krishnan, Minesh Mathew and C. V. Jawahar - Towards Spotting and Recognition of Handwritten Words in Indic Scripts, The 16th International Conference on Frontiers in Handwriting Recognition, Niagara Falls, USA. [PDF]

  • Kartik Dutta, Praveen Krishnan, Minesh Mathew and C. V. Jawahar - Localizing and Recognizing Text in Lecture Videos, The 16th International Conference on Frontiers in Handwriting Recognition, Niagara Falls, USA. [PDF]

  • Vijay Rowtula, Praveen Krishnan and C. V. Jawahar - POS Tagging and Named Entity Recognition on Handwritten Documents, ICON, 2018. [PDF]

  • Praveen Krishnan, Kartik Dutta and C. V. Jawahar - Word Spotting and Recognition using Deep Embedding, Proceedings of the 13th IAPR International Workshop on Document Analysis Systems, 24-27 April 2018, Vienna, Austria. [PDF]

  • Kartik Dutta, Praveen Krishnan, Minesh Mathew and C. V. Jawahar - Offline Handwriting Recognition on Devanagari using a new Benchmark Dataset, Proceedings of the 13th IAPR International Workshop on Document Analysis Systems, 24-27 April 2018, Vienna, Austria. [PDF]

  • Kartik Dutta, Praveen Krishnan, Minesh Mathew and C. V. Jawahar - Towards Accurate Handwritten Word Recognition for Hindi and Bangla, National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG), 2017. [PDF]

  • Praveen Krishnan and C. V. Jawahar - Matching Handwritten Document Images, The 14th European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 2016. [PDF]

  • Praveen Krishnan, Kartik Dutta and C. V. Jawahar - Deep Feature Embedding for Accurate Recognition and Retrieval of Handwritten Text, 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), Shenzhen, China, 2016. [PDF]

  • Anshuman Majumdar, Praveen Krishnan and C. V. Jawahar - Visual Aesthetic Analysis for Handwritten Document Images, 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), Shenzhen, China, 2016. [PDF]

  • Praveen Krishnan, Naveen Sankaran, Ajeet Kumar Singh and C. V. Jawahar - Towards a Robust OCR System for Indic Scripts, Proceedings of the 11th IAPR International Workshop on Document Analysis Systems, 7-10 April 2014, Tours-Loire Valley, France. [PDF]

  • Praveen Krishnan and C. V. Jawahar - Bringing Semantics in Word Image Retrieval, Proceedings of the 12th International Conference on Document Analysis and Recognition, 25-28 Aug 2013, Washington DC, USA. [PDF]

  • Praveen Krishnan, Ravi Sekhar and C. V. Jawahar - Content Level Access to Digital Library of India Pages, Proceedings of the 8th Indian Conference on Vision, Graphics and Image Processing, 16-19 Dec 2012, Bombay, India. [PDF]


Downloads

thesis

Geometry-aware methods for efficient and accurate 3D reconstruction


Rajvi Shah

Abstract

Advancements in 3D sensing and reconstruction have made it possible to model large-scale environments from monocular images using structure from motion (SfM) and simultaneous localization and mapping (SLAM) algorithms. SfM- and SLAM-based 3D reconstruction has applications in the digital archival and modeling of real-world objects and environments, visual localization for geo-tagging and information retrieval, and mapping and navigation for robotics and autonomous driving. In this thesis, we address problems in the area of large-scale structure from motion for 3D reconstruction and localization, and we introduce new methods for improving the efficiency and accuracy of the state-of-the-art SfM pipeline. A large-scale SfM pipeline deals with large unorganized collections of images pertaining to a particular geographical site. These image collections are formed either by retrieving relevant images from the Internet using textual queries, or by capturing them for the specific purpose of 3D modeling, mapping, and navigation. Internet image collections tend to be noisier and present more challenges for reconstruction than datasets captured with the specific intention of reconstruction. In this thesis, we propose methods that help organize these large, unstructured, and noisy image collections into a structure that is useful for SfM methods: a match-graph (or view-graph). We first propose a geometry-aware two-stage approach for pairwise image matching that is both more efficient and yields higher-quality correspondences. We then extend this idea to the SfM pipeline and present an iterative multistage framework for coarse-to-fine 3D reconstruction. Finally, we suggest that a key to solving many reconstruction problems is to filter and improve the view-graph in a way that is specific to the underlying problem. To this end, we propose a unified framework for view-graph selection and show its application to achieving multiple reconstruction objectives.
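The Python sketch below illustrates, under simplified assumptions, the match-graph (view-graph) structure described above: images are nodes, an edge stores the number of putative feature matches between a pair, and weak edges are pruned before reconstruction. The descriptors are random placeholders and count_matches is a plain brute-force ratio test, not the geometry-aware matching proposed in the thesis.

    import itertools
    import numpy as np

    def count_matches(desc_a, desc_b, ratio=0.8):
        """Lowe-style ratio-test matching between two descriptor sets (brute force)."""
        dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
        nearest = np.argsort(dists, axis=1)[:, :2]
        d1 = dists[np.arange(len(desc_a)), nearest[:, 0]]
        d2 = dists[np.arange(len(desc_a)), nearest[:, 1]]
        return int(np.sum(d1 < ratio * d2))

    # Placeholder descriptors for 4 images; a real pipeline would extract these
    # with a feature detector (e.g. SIFT) from the photo collection.
    rng = np.random.default_rng(2)
    base = rng.normal(size=(200, 32))
    images = {
        0: base + 0.01 * rng.normal(size=(200, 32)),   # overlapping views share structure
        1: base + 0.01 * rng.normal(size=(200, 32)),
        2: rng.normal(size=(200, 32)),                 # unrelated images
        3: rng.normal(size=(200, 32)),
    }

    # Build the view-graph: nodes are images, edge weights are match counts.
    view_graph = {}
    for i, j in itertools.combinations(images, 2):
        w = count_matches(images[i], images[j])
        if w >= 20:                    # prune weak, likely spurious edges
            view_graph[(i, j)] = w

    print("retained edges:", view_graph)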

 

Year of completion:  December 2020
 Advisor: P. J. Narayanan

Related Publications

  • Rajvi Shah, Visesh Chari and P. J. Narayanan - View-graph Selection Framework for SfM, European Conference on Computer Vision (ECCV), 2018, Munich, Germany. [PDF]

  • Saumya Rawat, Siddhartha Gairola, Rajvi Shah and P. J. Narayanan - Find Me a Sky: A Data-Driven Method for Color-Consistent Sky Search and Replacement, International Conference on Multimedia Modeling, 2018, pp. 216-228. [PDF]

  • Ishit Mehta, Parikshit Sakurikar, Rajvi Shah and P. J. Narayanan - SynCam: Capturing sub-frame synchronous media using smartphones, IEEE International Conference on Multimedia and Expo (ICME 2017). [PDF]

  • Aditya Singh, Saurabh Saini, Rajvi Shah and P. J. Narayanan - Learning to hash-tag videos with Tag2Vec, Proceedings of the Tenth Indian Conference on Computer Vision, Graphics and Image Processing, ACM, 2016. [PDF]

  • Aditya Singh, Saurabh Saini, Rajvi Shah and P. J. Narayanan - From Traditional to Modern: Domain Adaptation for Action Classification in Short Social Video Clips, 38th German Conference on Pattern Recognition (GCPR 2016), Hannover, Germany, September 12-15, 2016. [PDF]

  • Rajvi Shah, Vanshika Srivastava and P. J. Narayanan - Geometry-aware Feature Matching for Structure from Motion Applications, Proceedings of the IEEE Winter Conference on Applications of Computer Vision, 06-09 Jan 2015, Waikoloa Beach, USA. [PDF]

  • Rajvi Shah, Aditya Deshpande and P. J. Narayanan - Multistage SFM: Revisiting Incremental Structure from Motion, Proceedings of the International Conference on 3D Vision, 08-11 Dec 2014, Tokyo, Japan. [PDF]

  • Rajvi Shah and P. J. Narayanan - Interactive video manipulation using object trajectories and scene backgrounds, IEEE Transactions on Circuits and Systems for Video Technology, 23.9 (2013): 1565-1576. [PDF]

  • Rajvi Shah and P. J. Narayanan - Trajectory based Video Object Manipulation, Proceedings of the IEEE International Conference on Multimedia and Expo (ICME 2011), 11-15 July 2011, Barcelona, Spain. [PDF]

  • Rajvi Shah, P. J. Narayanan and Kishore Kothapalli - GPU-Accelerated Genetic Algorithms, Proceedings of the 3rd Workshop on Parallel Architectures for Bio-inspired Algorithms (WPABA), in conjunction with Parallel Architectures and Compilation Techniques (PACT'10), 11-15 Sep 2010, Vienna, Austria. [PDF]


Downloads

thesis

Anatomical Structure Segmentation in Retinal Images with Some Applications in Disease Detection


Arunava Chakravarty

Abstract

Color Fundus (CF) imaging and Optical Coherence Tomography (OCT) are widely used by ophthalmologists to visualize the retinal surface and the intra-retinal tissue layers, respectively. An accurate segmentation of the anatomical structures in these images is necessary to visualize and quantify the structural deformations that characterize retinal diseases such as Glaucoma, Diabetic Macular Edema (DME) and Age-related Macular Degeneration (AMD). In this thesis, we propose different frameworks for the automatic extraction of the boundaries of relevant anatomical structures in CF and OCT images.

First, we address the problem of the segmentation of the Optic Disc (OD) and Optic Cup (OC) in CF images to aid in the detection of Glaucoma. We propose a novel boundary-based Conditional Random Field (CRF) framework to jointly extract both the OD and OC boundaries in a single optimization step. Although the OC is characterized by the relative drop in depth from the OD boundary, 2D CF images lack explicit depth information. The proposed method estimates depth from CF images in a supervised manner using a coupled, sparse dictionary trained on a set of image-depth map (derived from OCT) pairs. Since our method requires a single CF image per eye during testing, it can be employed in the large-scale screening of glaucoma where expensive 3D imaging is unavailable.

Next, we consider the task of intra-retinal tissue layer segmentation in cross-sectional OCT images, which is essential to quantify the morphological changes in specific tissue layers caused by AMD and DME. We propose a supervised CRF framework to jointly extract the eight layer boundaries in a single optimization step. In contrast to existing energy-minimization based segmentation methods that employ handcrafted energy cost terms, we linearly parameterize the total CRF energy to allow the appearance features for each layer and the relative weights of the shape priors to be learned in a joint, end-to-end manner using the Structural Support Vector Machine formulation. The proposed method can aid ophthalmologists in the quantitative analysis of structural changes in the retinal tissue layers for clinical practice and large-scale clinical studies.

Next, we explore Level Set based Deformable Models (LDMs), a popular energy minimization framework for medical image segmentation. We model the LDM as a novel Recurrent Neural Network (RNN) architecture called the Recurrent Active Contour Evolution Network (RACE-net). In contrast to existing LDMs, RACE-net allows the curve evolution velocities to be learned in an end-to-end manner while minimizing the number of network parameters, computation time and memory requirements. Consistent performance of RACE-net on a diverse set of segmentation tasks, such as the extraction of the OD and OC in CF images, cell nuclei in histopathological images and the left atrium in cardiac MRI volumes, demonstrates its utility as a generic, off-the-shelf architecture for biomedical segmentation.

Segmentation has many clinical applications, especially in the area of computer-aided diagnostics. We close this dissertation with some illustrative applications of the segmentation information, considering the case of disease detection in CF and OCT images. We explore and benchmark two classification strategies for the detection of glaucoma from CF images, based on deep learning and handcrafted features respectively. Both methods use a combination of appearance features directly derived from the CF image and structural features derived from the OD and OC segmentation. We also construct a Normative Atlas for macular OCT volumes to aid in the detection of AMD. The irregularities in the Bruch’s membrane caused by the deposit of drusen are modeled as deviations from the normal anatomy represented by the Atlas Mean Template.
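As a simplified analogue of the energy-minimization view taken in this thesis, the Python sketch below extracts a single boundary in a 2-D cost image by dynamic programming, combining a per-pixel data cost with a smoothness penalty between neighbouring columns. The cost image is synthetic and the formulation is generic; it is not the proposed CRF or RACE-net model.

    import numpy as np

    def extract_boundary(cost, smooth=1.0):
        """Minimize sum_x cost[y_x, x] + smooth * |y_x - y_{x-1}| over boundaries y."""
        rows, cols = cost.shape
        acc = cost.copy()
        back = np.zeros((rows, cols), dtype=int)
        y = np.arange(rows)
        for x in range(1, cols):
            # transition cost from every previous row to every current row
            trans = acc[:, x - 1][None, :] + smooth * np.abs(y[:, None] - y[None, :])
            back[:, x] = np.argmin(trans, axis=1)
            acc[:, x] += trans[np.arange(rows), back[:, x]]
        boundary = np.zeros(cols, dtype=int)
        boundary[-1] = int(np.argmin(acc[:, -1]))
        for x in range(cols - 1, 0, -1):
            boundary[x - 1] = back[boundary[x], x]
        return boundary

    # Synthetic cost image: low cost along a gently varying row (a mock tissue edge).
    rng = np.random.default_rng(3)
    cost = rng.random((60, 80))
    true_y = (30 + 5 * np.sin(np.linspace(0, 3, 80))).astype(int)
    cost[true_y, np.arange(80)] -= 2.0     # make the true boundary cheap

    print(extract_boundary(cost)[:10])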

 

Year of completion:  November 2019
 Advisor: Jayanthi Sivaswamy

Related Publications

  • Chakravarty A, Gaddipati DJ and Jayanthi Sivaswamy - Construction of a Retinal Atlas for Macular OCT Volumes, ICIAR 2018, Portugal. [PDF]

  • Chakravarty A and Jayanthi Sivaswamy - RACE-net: A Recurrent Neural Network for Biomedical Image Segmentation, IEEE Journal of Biomedical and Health Informatics. [PDF]

  • Arunava Chakravarty and Jayanthi Sivaswamy - Joint optic disc and cup boundary extraction from monocular fundus images, Computer Methods and Programs in Biomedicine, 147 (2017): 51-61. [PDF]

  • Arunava Chakravarty and Jayanthi Sivaswamy - End-to-End Learning of a Conditional Random Field for Intra-retinal Layer Segmentation in Optical Coherence Tomography, Annual Conference on Medical Image Understanding and Analysis, Springer, Cham, 2017. [PDF]

  • Chakrabarty L, Joshi G.D., Chakravarty A, Raman G.V., Krishnadas S.R. and Sivaswamy J. (2016) - Automated Detection of Glaucoma From Topographic Features of the Optic Nerve Head in Color Fundus Photographs, Journal of Glaucoma, 25(7), pp. 590-597. [PDF]

  • Arunava Chakravarty and Jayanthi Sivaswamy - Glaucoma Classification with a Fusion of Segmentation and Image-based Features, Proc. of the IEEE International Symposium on Biomedical Imaging (ISBI), 13-16 April 2016, Prague. [PDF]

  • Ujjwal, Arunava Chakravarty and Jayanthi Sivaswamy - An Assistive Annotation System for Retinal Images, Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro, 16-19 April 2015. [PDF]

  • Jayanthi Sivaswamy, S.R. Krishnadas, Arunava Chakravarty, Gopal Datt Joshi, Ujjwal and Tabish Abbas Syed - A Comprehensive Retinal Image Dataset for the Assessment of Glaucoma from the Optic Nerve Head Analysis, JSM Biomed Imaging Data Papers 2(1): 1004 (BIDP: 2015). [PDF]

  • Arunava Chakravarty and Jayanthi Sivaswamy - Coupled Sparse Dictionary for Depth-based Cup Segmentation from Single Color Fundus Image, Proceedings of MICCAI 2014, 14-18 Sep 2014, Boston, USA. [PDF]

  • M.J.J.P. Van Grinsven, Arunava Chakravarty, Jayanthi Sivaswamy, T. Theelen, B. Van Ginneken and C.I. Sanchez - A Bag of Words Approach for Discriminating between Retinal Images Containing Exudates or Drusen, Proceedings of the IEEE 10th International Symposium on Biomedical Imaging: From Nano to Macro, 07-11 April 2013, San Francisco, CA, USA. [PDF]

  • Ujjwal, K. Sai Deepak, Arunava Chakravarty and Jayanthi Sivaswamy - Visual Saliency based Bright Lesion Detection and Discrimination in Retinal Images, Proceedings of the IEEE 10th International Symposium on Biomedical Imaging: From Nano to Macro, 07-11 April 2013, San Francisco, CA, USA. [PDF]

  • Arunava Chakravarty and Jayanthi Sivaswamy - A Novel Approach for Quantification of Retinal Vessel Tortuosity using Quadratic Polynomial Decomposition, Proceedings of the Indian Conference on Medical Informatics and Telemedicine, 28-30 Mar 2013, Kharagpur, India. [PDF]


Downloads

thesis

Recognizing People in Images and Videos


Vijay Kumar

Abstract

Cameras and mobile phones have become an integral part of our everyday lives as they have become portable, powerful and cheap. We capture and share hundreds of pictures and videos with our friends, family and social connections, and a large volume of such visual content is likewise generated in surveillance, entertainment and biometrics applications. Without any doubt, people are the most important objects in this visual content; for instance, photos taken at a family event or movie videos revolve around humans. It is therefore of utmost importance to automatically detect, identify and analyze the people appearing in images in order to better understand this content and make decisions around it. In this thesis, we consider the problem of person detection and recognition in images. This is a well-explored topic in the vision community, with a vast literature focused on these problems. Current state-of-the-art recognition systems can identify people with a high degree of accuracy when the images have high resolution, contain visible and near-frontal faces, and the recognition system has access to a sufficiently large training gallery. However, these systems need significant improvement in challenging real-world applications such as surveillance or entertainment videos, where one needs to handle several practical issues such as non-visibility of faces, limited training samples and domain mismatch, in addition to other instance variations such as pose, illumination and resolution. While there are plenty of challenges pertaining to person recognition, we are interested in some of the open challenges that are relevant from a deployment perspective in diverse recognition scenarios.

We first consider people detection, a prerequisite for any recognition system. We detect people in images by detecting their faces through an exemplar-based detector. The exemplar approach detects faces through Hough voting over an exemplar training database indexed with a bag-of-words method. We propose two key ideas, referred to as “visual phrases” and “contextual weighting”, that significantly improve the performance of the exemplar approach. We show that visual phrases, which encode dependencies between visual features, are discriminative, and we propose a strategy to incorporate them into exemplar voting. We also introduce the notion of spatial consistency for a visual feature, which weights each occurrence of a feature based on its global context. Our evaluation on popular in-the-wild face detection benchmarks demonstrates the significant improvement obtained with these ideas.

We then focus on person recognition and consider several issues encountered in practical recognition systems. We first address the common and important issue of the unavailability of sufficient training samples during recognition. We propose a solution based on semi-supervised learning that can efficiently learn from a small amount of labeled data and a large amount of unlabeled data, and we demonstrate how the similarities between labeled and unlabeled samples can be effectively exploited to improve performance. We then consider the problem of domain mismatch between the training gallery (source) and probe instances (target), in a setup where the objective is to identify people in a collection of probe images using a training gallery collected from a different domain. We propose a novel two-stage solution that generates labels for a few confident seed images from the target domain and propagates their labels to the remaining images using a graph-based framework. We evaluate our approach in several practical recognition scenarios such as movie videos and photo albums.

We then consider a recognition scenario in which faces are not completely visible, due to occlusion or people facing away from the camera. To deal with such occluded and partially or completely non-visible faces, we exploit information from other body regions such as the head, upper body and full body. When considering different body regions, their pose variation poses a serious challenge. To handle unreliable facial regions and pose variation, we propose a technique that learns multiple pose-specific representations from different body regions: a separate deep convolutional network is trained for each pose and their predictions are combined using adaptive weights determined by the pose of the person. Person recognition approaches based on multiple body regions, however, require training multiple deep convolutional networks, resulting in a large number of parameters and slower training and testing. To overcome this, we develop an end-to-end person recognition approach based on pooling and aggregating discriminative features from multiple body regions. Our end-to-end convolutional network pools features from several pre-determined regions of interest and adaptively aggregates them using an attention mechanism to produce a compact representation. We evaluate our single end-to-end trained model on multiple person recognition benchmarks and show its effectiveness over multiple models trained on different body regions.

Finally, we note that all of this work was developed with a keen focus on applicability in real-world applications, and we have created and publicly released datasets and source code in the process.
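The Python sketch below illustrates the aggregation idea mentioned at the end of the abstract: features pooled from several body regions are combined with softmax attention weights into one compact person descriptor. The region features and scoring vector are random placeholders for quantities that the actual network learns end to end.

    import numpy as np

    def aggregate_regions(region_feats, score_vec):
        """Combine per-region features with softmax attention weights."""
        scores = region_feats @ score_vec                  # one scalar score per region
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                           # softmax over regions
        return weights, weights @ region_feats             # weighted sum -> compact descriptor

    # Placeholders: 4 regions (face, head, upper body, full body), 256-d features each.
    rng = np.random.default_rng(4)
    region_feats = rng.normal(size=(4, 256))
    score_vec = rng.normal(size=256)                       # learned in the real model

    weights, person_descriptor = aggregate_regions(region_feats, score_vec)
    print("attention weights:", np.round(weights, 3))
    print("descriptor shape:", person_descriptor.shape)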

 

Year of completion:  July 2019
 Advisor: Anoop M. Namboodiri and C. V. Jawahar

Related Publications

  • Vijay Kumar, Anoop Namboodiri and C. V. Jawahar - Semi-supervised annotation of faces in image collection, Signal, Image and Video Processing, 2017. [PDF]

  • Vijay Kumar, Anoop Namboodiri, Manohar Paluri and C. V. Jawahar - Pose-Aware Person Recognition, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. [PDF]

  • Vijay Kumar, R. Raghavendra, Anoop Namboodiri and Christoph Busch - Robust transgender face recognition: Approach based on appearance and therapy factors, IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), 2016. [PDF]

  • Vijay Kumar R., Anoop M. Namboodiri and C. V. Jawahar - Visual Phrases for Exemplar Face Detection, Proceedings of the International Conference on Computer Vision (ICCV 2015), 13-16 Dec 2015, Santiago, Chile. [PDF]

  • Hiba Ahsan, Vijay Kumar and C. V. Jawahar - Multi-Label Annotation of Music, Proceedings of the Eighth International Conference on Advances in Pattern Recognition, 04-07 Jan 2015, Kolkata, India. [PDF]

  • Vijay Kumar, Anoop M. Namboodiri and C. V. Jawahar - Face Recognition in Videos by Label Propagation, Proceedings of the 22nd International Conference on Pattern Recognition, 24-28 Aug 2014, Stockholm, Sweden. [PDF]

  • Vijay Kumar, Harit Pandya and C. V. Jawahar - Identifying Ragas in Indian Music, Proceedings of the 22nd International Conference on Pattern Recognition, 24-28 Aug 2014, Stockholm, Sweden. [PDF]

  • Shankar Setty, Moula Husain, Parisa Beham, Jyothi Gudavalli, Menaka Kandasamy, Radehsyam Vaddi, Vidyagouri Hemadri, JC Karure, Raja Raju, B. Rajan, Vijay Kumar and C. V. Jawahar - Indian Movie Face Database: A Benchmark for Face Recognition Under Wide Variations, Proceedings of the IEEE National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics, 18-21 Dec 2013, Jodhpur, India. [PDF]

  • Vijay Kumar, Anoop M. Namboodiri and C. V. Jawahar - Sparse Representation based Face Recognition with Limited Labeled Samples, Proceedings of the 2nd Asian Conference on Pattern Recognition, 05-08 Nov 2013, Okinawa, Japan. [PDF]

  • Vijay Kumar, Amit Bansal, Goutam Hari Tulsiyan, Anand Mishra, Anoop M. Namboodiri and C. V. Jawahar - Sparse Document Image Coding for Restoration, Proceedings of the 12th International Conference on Document Analysis and Recognition, 25-28 Aug 2013, Washington DC, USA. [PDF]


Downloads

thesis
