
Distinctive Parts for Relative Attributes


Naga Sandeep Ramachandruni (homepage)

Abstract

Visual attributes are properties observable in images that have human-designated names (e.g., smiling, natural). They are valuable as a semantic cue in various vision problems such as face verification, object recognition, generating descriptions of unfamiliar objects, and facilitating zero-shot transfer learning. While most work on attributes focuses on binary attributes (indicating the presence or absence of an attribute), the notion of relative attributes, introduced by Parikh and Grauman at ICCV 2011, provides a more appealing way of comparing two images based on their visual properties. Relative visual properties are a semantically rich way by which humans describe and compare objects in the world. They are necessary, for instance, to refine an identifying description (the rounder pillow; the same except bluer), or to situate an object with respect to reference objects (brighter than a candle; dimmer than a flashlight). Furthermore, they have the potential to enhance active and interactive learning, for instance by offering a better guide for visual search (find me similar shoes, but shinier; refine the retrieved images of downtown Chicago to those taken on sunnier days). For learning relative attributes, a ranking-SVM based formulation was proposed that uses globally represented pairs of annotated images. In this thesis, we extend this idea towards learning relative attributes using local parts that are shared across categories.
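For reference, the underlying max-margin ranking objective can be sketched as follows (our notation, not verbatim from the thesis): given ordered pairs $O$ in which the attribute is stronger in image $i$ than in image $j$, and pairs $S$ of similar strength, a linear ranking function $r(x) = w^T x$ is learned by minimizing $\frac{1}{2}\|w\|^2 + C(\sum \xi_{ij}^2 + \sum \gamma_{ij}^2)$ subject to $w^T(x_i - x_j) \ge 1 - \xi_{ij}$ for $(i,j) \in O$ and $|w^T(x_i - x_j)| \le \gamma_{ij}$ for $(i,j) \in S$, with non-negative slacks $\xi_{ij}$ and $\gamma_{ij}$.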

First, we propose a part-based representation that jointly represents a pair of images. For facial attributes, a part corresponds to a block around a landmark point detected using a domain-specific method. This representation explicitly encodes correspondences among parts, and thus captures, better than a global representation, the minute differences in parts that make an attribute more prominent in one image than in another. Next, we extend this part-based representation by additionally learning weights for each part that denote its contribution towards predicting the strength of a given attribute.

We call these weights the significance coefficients of the parts. For each attribute, the significance coefficients are learned in a discriminative manner, simultaneously with a max-margin ranking model. Thus, the best parts for predicting the relative attribute more smiling will differ from those for predicting more eyes open. We compare the baseline method of Parikh and Grauman with the proposed method under various settings. We have collected a new dataset of 10,000 pair-wise attribute-level annotations using images from the Labeled Faces in the Wild (LFW) dataset, particularly focusing on a large variety of samples in terms of pose, lighting conditions, etc., and completely ignoring category information while collecting the attribute annotations. Extensive experiments demonstrate that the new method significantly improves prediction accuracy compared to the baseline. Moreover, the learned parts compare favorably with human-selected parts, indicating the intrinsic capacity of the proposed framework for learning attribute-specific semantic parts. Additionally, we illustrate the advantage of the proposed method with interactive image search using relative-attribute based feedback.
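Illustratively (again our notation, not the thesis's), the part-based ranker scores the attribute strength of an image $x$ as $r(x) = \sum_p \sigma_p \, w_p^T x^{(p)}$, where $x^{(p)}$ is the feature of part $p$, $w_p$ the corresponding local ranking direction, and $\sigma_p$ the learned significance coefficient of that part; the $\sigma_p$ and $w_p$ are fit jointly under the max-margin ranking constraints sketched above.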

In this work, we also propose relational attributes, which provide a more natural way of comparing two images on a given attribute than relative attributes do. Relational attributes consider not only the content of a given pair of images, but also its relationship with other pairs, thus making the comparison more robust.

 

Year of completion: July 2016
Advisor: Prof. C. V. Jawahar

Related Publications

  • Ramachandruni N. Sandeep, Yashaswi Verma and C. V. Jawahar - Relative Parts: Distinctive Parts for Learning Relative Attributes, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 23-28 June 2014, Columbus, Ohio, USA. [PDF]


Downloads

thesis

 ppt

Tomographic Image Reconstruction in Noisy and Limited Data Settings


Syed Tabish Abbas (homepage)

Abstract

Reconstruction of images from projections lays the foundation for computed tomography (CT). Tomographic image reconstruction, owing to its numerous real-world applications, from medical scanners in radiology and nuclear medicine to industrial scanning and seismic equipment, is an extensively studied problem. The study of reconstructing a function from its projections/line integrals is around a century old. The classical tomographic reconstruction problem was originally solved in 1917 by J. Radon, whose inversion formula underlies the method now known as filtered back projection (FBP). It was later shown that infinitely many projections are required to reconstruct an image perfectly; incomplete data leads to artifacts in the reconstructed images. In addition to the artifacts arising from limited data, the reconstructed images are known to be corrupted by noise. We study these two problems of noisy and incomplete data in the following two setups. Nuclear imaging modalities like positron emission tomography (PET) are characterized by a low SNR due to the underlying signal-generation mechanism. Given the significant role images play in current-day diagnostics, obtaining noise-free PET images is of great interest. With its higher packing density and larger, symmetric neighbourhood, the hexagonal lattice offers natural robustness to degradation in the signal. Based on this observation, we propose an alternate solution to denoising, namely changing the sampling lattice.

We use filtered back projection for reconstruction, followed by sparse-dictionary based denoising, and compare noise-free reconstruction on the square and hexagonal lattices. Experiments with PET phantoms (NEMA, Hoffman) and the Shepp-Logan phantom show that the improvement in denoising, post reconstruction, is not only qualitative but also quantitative: the improvement in PSNR on the hexagonal lattice is on average between 2 and 10 dB. These results establish the potential of the hexagonal lattice for reconstruction from noisy data in general.
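As a point of reference, a minimal square-lattice FBP-plus-noise experiment of the kind described above can be sketched with scikit-image (a recent version is assumed; the hexagonal resampling and dictionary denoising steps of the thesis are not shown):

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon
    from skimage.metrics import peak_signal_noise_ratio

    phantom = shepp_logan_phantom()                       # ground-truth image
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)  # projection angles

    sino = radon(phantom, theta=theta)                    # forward projection
    # Emission-like (Poisson) noise on the sinogram; the scale is arbitrary.
    noisy = np.random.poisson(np.maximum(sino, 0) * 50) / 50.0

    recon = iradon(noisy, theta=theta, filter_name='ramp')  # filtered back projection
    print('PSNR:', peak_signal_noise_ratio(phantom, recon,
                                           data_range=phantom.max() - phantom.min()))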

In the limited-data scenario, we consider the circular arc Radon (CAR) transform, which associates to a function on the plane its integrals along a family of circular arcs. Such transforms arise naturally in the study of several medical imaging modalities, including thermoacoustic and photoacoustic tomography, ultrasound, intravascular imaging, and radar and sonar imaging. The inversion of such transforms is of natural interest. Unlike their full-circle counterpart, the circular Radon transform, which has attracted significant attention in recent years, circular arc Radon transforms are scarcely studied objects. We present an efficient algorithm that gives a numerical inversion of such transforms for the cases in which the support of the function lies entirely inside or outside the acquisition circle. The numerical algorithm is non-iterative and very efficient, as the entire scheme, once processed, can be stored and used repeatedly for the reconstruction of images.
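For orientation (our notation): the full-circle transform assigns to a function $f$ the values $Rf(p, r) = \int_{|x - p| = r} f(x)\, ds(x)$, the integrals of $f$ over circles of radius $r$ centred at points $p$ on the acquisition circle; the CAR transform restricts this integration to arcs of such circles.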

 

Year of completion: July 2016
Advisor: Prof. Jayanthi Sivaswamy

Related Publications

  • Syed Tabish Abbas, Jayanthi Sivaswamy - Latent Factor Model Based Classification for Detecting Abnormalities in Retinal Images, Proceedings of the 3rd IAPR Asian Conference on Pattern Recognition, 03-06 Nov 2015, Kuala Lumpur, Malaysia. [PDF]

  • Syed Tabish Abbas, Jayanthi Sivaswamy - PET Image Reconstruction and Denoising on Hexagonal Lattices, Proceedings of the IEEE International Conference on Image Processing, 27-30 Sep 2015, Quebec City, Canada. [PDF]

  • Syed T. A., Krishnan V. P. and Sivaswamy J. - Numerical Inversion of Circular Arc Radon Transform (under review).

Downloads

thesis

 ppt

Understanding and Describing Tennis Videos


Mohak Sukhwani (homepage)

Abstract

'Our most advanced machines are like toddlers when it comes to sight.' When shown a tennis video, a kid would most probably blabber words like 'tennis', 'racquet', 'ball', etc. Such is also the case with present-day state-of-the-art video understanding algorithms. In this work, we try to solve one such multimedia content analysis problem: how to get machines to go beyond object and action recognition and understand lawn tennis video content in a holistic manner. We propose a multi-faceted approach to understanding the video content as a whole: (a) low-level analysis: identify and isolate court regions and players; (b) mid-level understanding: recognize player actions and activities; (c) high-level annotations: generate a detailed summary of events comprising information from the full game play.

Annotating visual content with text has attracted significant attention in recent years. While the focus has mostly been on images, of late a few methods have also been proposed for describing videos. The descriptions produced by such methods capture the video content at a certain level of semantics. However, richer and more meaningful descriptions may be required for such techniques to be useful in real-life applications. We make an attempt towards this goal by focusing on a domain-specific setting: lawn tennis videos. Given a video shot from a tennis match, we intend to predict detailed (commentary-like) descriptions rather than short captions. Rich descriptions are generated by leveraging a large corpus of human-created descriptions harvested from the Internet. We evaluate our method on a newly created tennis video dataset comprising broadcast video recordings of matches from the London 2012 Olympics. Extensive analysis demonstrates that our approach addresses both the semantic correctness and the readability aspects of the task.

Given a test video, we predict a set of action/verb phrases individually for each frame using features computed from its neighborhood. The identified phrases, along with additional meta-data, are used to find the best matching description from the commentary corpus. We begin by identifying the two players on the tennis court. Regions obtained after isolating the playing court assist us in segmenting out candidate player regions through background subtraction using thresholding and connected component analysis. Each candidate foreground region thus obtained is represented using HOG descriptors, over which an SVM classifier is trained to discard non-player foreground regions. The resulting candidate player regions are used to recognize the players using CEDD descriptors and the Tanimoto distance. Verb phrases are recognized by extracting features from each frame of the input video using a sliding window. Since this typically results in multiple firings, non-maximal suppression (NMS) is applied, as sketched below.
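For illustration, a minimal 1-D temporal NMS of the kind used here might look as follows (our own sketch; the window radius and scores are hypothetical):

    def nms_1d(scores, radius):
        """Keep a detection only if its score is a local maximum
        within +/- radius frames; suppress the rest."""
        keep = []
        for t, s in enumerate(scores):
            lo, hi = max(0, t - radius), min(len(scores), t + radius + 1)
            if s == max(scores[lo:hi]):   # locally maximal response
                keep.append(t)
        return keep

    # Example: responses for one verb phrase across 8 frames.
    print(nms_1d([0.1, 0.9, 0.8, 0.2, 0.3, 0.7, 0.6, 0.1], radius=2))  # -> [1, 5]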

NMS removes low-scored responses that lie in the neighborhood of responses with locally maximal confidence scores. Once we obtain potential phrases for all windows along with their scores, we drop the independence assumption and smooth the predictions using an energy minimization framework. For this, a Markov Random Field (MRF) based model is used, which captures dependencies among nearby phrases. We formulate the task of predicting the final description as an optimization problem: selecting the sentence from the set of commentary sentences in the corpus that covers the largest number of unique words in the obtained phrase set. We also employ the Latent Semantic Indexing (LSI) technique while matching predicted phrases with descriptions, and demonstrate its effectiveness over naive lexical matching. The proposed pipeline is benchmarked against state-of-the-art methods. Caption generation based approaches achieve significantly lower scores owing to their generic nature, and compared to all the competing methods, our approach consistently performs better. We validate that, in domain-specific settings, rich descriptions can be produced even with a small corpus.
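A minimal LSI-style matcher of the kind alluded to above can be sketched with scikit-learn (the corpus and query here are toy placeholders, not the thesis data):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    corpus = ["serves wide and wins the point with a forehand volley",
              "double fault on the second serve",
              "long rally finished by a backhand down the line"]
    query = "forehand volley winner after a wide serve"   # predicted phrases, joined

    tfidf = TfidfVectorizer().fit(corpus)
    svd = TruncatedSVD(n_components=2).fit(tfidf.transform(corpus))  # latent space

    doc_vecs = svd.transform(tfidf.transform(corpus))
    q_vec = svd.transform(tfidf.transform([query]))
    best = cosine_similarity(q_vec, doc_vecs).argmax()    # best matching commentary
    print(corpus[best])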

This thesis introduces a method to understand and describe the contents of lawn tennis videos. Our approach illustrates the utility of simultaneously using vision, language, and machine learning techniques in a domain-specific environment to produce human-like descriptions. The method extends directly to other sports and to various other domain-specific scenarios. With deep learning based approaches becoming a de facto standard for modern machine learning tasks, we wish to explore them for the present task in future work. The flexibility and power of such architectures have made them outperform other methods on some genuinely complex vision problems. Large-scale deployments combining Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have already surpassed comparable methods for real-time image summarization. As a future extension, we intend to exploit the power of such combined architectures in the video-to-text regime to generate real-time commentaries for game videos.

 

Year of completion: June 2016
Advisor: Prof. C. V. Jawahar


Related Publications

  • Mohak Sukhwani, C. V. Jawahar - Tennis Vid2Text: Fine-Grained Descriptions for Domain Specific Videos, Proceedings of the 26th British Machine Vision Conference, 07-10 Sep 2015, Swansea, UK. [PDF]

  • Mohak Sukhwani, C. V. Jawahar - Frame Level Annotations for Tennis Videos, Proceedings of the 23rd International Conference on Pattern Recognition, 4-8 Dec 2016, Cancun, Mexico.

Downloads

thesis

 ppt

Playing Poseidon: A Lattice Boltzmann Approach to Simulating Generalised Newtonian Fluids


Nitish Tripathi (homepage)

Abstract

Imitating the behaviour and characteristics of fluids with the help of a computer is called fluid simulation. Real-world fluids are fickle: subtle and gentle at times, ravenous and tumultuous at others. Needless to say, complex equations lie behind even the tiniest ripple, so much so that fluid mechanics has often been described as "the physicist's nightmare". Yet there are few substances, if any, so beautiful and graceful to observe in motion. To an ardent student of hydrodynamics, everything in the discernible world is a fluid; solids may just be classified as fluids which flow extremely slowly! Given time, every substance has the tendency to flow under the influence of an external force.

The history of fluid simulation thus rightly begins with the formulation of the Navier-Stokes equations, a set of partial differential equations originally developed in the 1840s on the basis of conservation laws and first-order approximations. What followed was the conventional study of fluid flows for more than a century. Arriving at computational models to solve the fluid equations has been a subject of research since the early 1950s, with the focus on finding solutions to the Navier-Stokes partial differential equations using discrete algorithms. Many modern-day techniques, some of which are skimmed through in the succeeding chapters, came up during that time; the staggered marker-and-cell (MAC) grid structure and the particle-in-cell (PIC) method are two of those.

However, most of the models and techniques developed by the CFD community then were complex and unscalable for visual-effects oriented computer graphics. In the succeeding years, fluid effects were generated using non-physics based methods, such as hand-drawn (key-frame) animation or displacement mapping.

The development of fluid simulation has traditionally proceeded in two concurrent streams: Eulerian and Lagrangian simulation. The Eulerian method models the fluid as a collection of scalar fields (density, pressure, etc.) and vector fields (velocity, etc.). Each field is calculated using the Navier-Stokes equations, and the fluid is visualised as crossing a volume of fixed grid points at which the value of each field is known. Lagrangian simulations, on the other hand, take a more intuitive approach: they treat fluid particles as carriers of the field values, in accordance with the Navier-Stokes equations. The dependence of both methods on Navier-Stokes makes them essentially top-down simulation methods: they look at what the perceptible fluid properties are, without concerning themselves with the kind of particle interactions that give rise to those properties.

The Lattice Boltzmann Method was developed around the same time but did not come into widespread use until much later. Unlike the conventional methods, it is a statistical method based on kinetic theory. It treats a fluid as a collection of logical mesoscopic particles, constrained to move in a discrete set of directions across a Cartesian grid. These follow continuous alternating iterations of colliding at each grid centre and being redistributed around it, then streaming to the neighbouring centres. It was shown that, for a particular kind of collision, these particle interactions give rise to Navier-Stokes behaviour at the macro level. However, we can tweak certain parameters so that quantities taken to be constants in the final Navier-Stokes equations can be varied, allowing us to simulate fluids outside their scope. As we will see in the succeeding chapters, such fluids, called non-Newtonian fluids, are in abundance around us. The Navier-Stokes equations, as will also be seen, are essentially Newton's second law of motion; they are therefore unfit to deal with fluids which are non-Newtonian in nature, and require regular tweaking to model them.
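To make the collide-and-stream loop concrete, here is a minimal D2Q9 BGK sketch in Python (our own illustration, not the thesis code; the generalized Newtonian extension would additionally recompute the relaxation time per cell from the local shear rate):

    import numpy as np

    # D2Q9 lattice: 9 discrete velocities with their standard weights.
    c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)

    def equilibrium(rho, u):
        # Second-order Maxwell-Boltzmann equilibrium distribution.
        cu = np.einsum('id,xyd->ixy', c, u)        # c_i . u at every cell
        usq = np.einsum('xyd,xyd->xy', u, u)       # |u|^2 at every cell
        return w[:, None, None] * rho[None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq[None])

    def step(f, tau):
        rho = f.sum(axis=0)                                  # density
        u = np.einsum('ixy,id->xyd', f, c) / rho[..., None]  # velocity
        # BGK collision: relax toward the local equilibrium with time tau.
        # A generalized Newtonian fluid would vary tau per cell with the
        # local shear rate (the "tweak" described above) -- omitted here.
        f += (equilibrium(rho, u) - f) / tau
        # Streaming: each population hops one cell along its velocity
        # (np.roll gives periodic boundaries).
        for i, (cx, cy) in enumerate(c):
            f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
        return f

    # Usage: start a 64x64 fluid at rest and advance it 100 steps.
    f = equilibrium(np.ones((64, 64)), np.zeros((64, 64, 2)))
    for _ in range(100):
        f = step(f, tau=0.8)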

In this work, we combine physical models of non-Newtonian fluids with the Lattice Boltzmann Method. We give a CPU implementation, showing how easy the method is to understand and code, and show that, in spite of its ease of implementation, it does not compromise on physical realism or accuracy. We also give a GPU implementation for increased efficiency and interactive frame rates.

 

Year of completion: June 2016
Advisor: Prof. P. J. Narayanan

Related Publications

  • Nitish Tripathi and P. J. Narayanan - Generalized Newtonian Fluid Simulations, Proceedings of the Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG 2013), Jodhpur, India. [PDF]

  • Somay Jain, Nitish Tripathi and P. J. Narayanan - Interactive Simulation of Generalised Newtonian Fluids Using GPUs, Proceedings of the 2014 Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP 2014), Bangalore, India. [PDF]

Downloads

thesis

 ppt

Diversity in Image Retrieval using Randomization and Learned Metrics


P Vidyadhar Rao (homepage)

Abstract

Providing useful information in response to user queries is the most investigated problem in the multimedia retrieval community. The information retrieval problem usually has many possible solutions, due to uncertainty in the user's information need and ambiguity in the query specification, so some mechanism is required to evaluate the options and select a solution. This is quite a challenging task. In recent years, the focus has gradually shifted towards the relevance and diversity of the retrieved information, which together improve the usefulness of a retrieval system as perceived by its users. Intuitively, it is desirable to design a retrieval system with three requirements: (a) accurate retrieval, i.e., the method should have high precision; (b) diverse retrieval, i.e., the obtained results should be diverse; and (c) efficient retrieval, i.e., the response time should be small. While considerable effort has been expended on developing algorithms that incorporate both relevance and diversity in the retrieval process, relatively little attention has been given to the problem of finding efficient diverse-retrieval algorithms.

The main contribution of this thesis lies in developing efficient algorithms for the diverse retrieval problem. We show that the diverse retrieval problem can be defined mathematically as an integer convex optimization problem, and hence that finding the optimal solution is NP-hard. The existing approximate and greedy algorithms for this problem suffer from two drawbacks: (a) their running time is very high, as several exact nearest neighbors must be recovered; and (b) the computations may require an unreasonably large memory overhead for large datasets. In this work, we propose a simple approach that overcomes all of the above simultaneously, based on two ideas: (1) randomization and (2) learned metrics.
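For context, one standard greedy baseline for relevance-plus-diversity selection, of the family the thesis compares against, is maximal marginal relevance (MMR); a minimal sketch, with hypothetical relevance scores and a pairwise similarity matrix as inputs:

    import numpy as np

    def greedy_diverse(scores, sim, k, lam=0.5):
        """Greedily pick k items trading off relevance and diversity.
        scores: relevance of each candidate to the query, shape (n,)
        sim:    pairwise similarity between candidates, shape (n, n)
        lam:    trade-off; 1.0 = pure relevance, 0.0 = pure diversity
        """
        selected = [int(np.argmax(scores))]
        while len(selected) < k:
            rest = [i for i in range(len(scores)) if i not in selected]
            # Penalize candidates similar to anything already selected.
            mmr = [lam * scores[i] - (1 - lam) * max(sim[i][j] for j in selected)
                   for i in rest]
            selected.append(rest[int(np.argmax(mmr))])
        return selected

Note the cost the thesis targets: each greedy step scans all remaining candidates against all selected ones, which is exactly the kind of exact-neighbor computation the hashing approach below avoids.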

In the first case, the method is based on locality-sensitive hashing and tries to address all of the above requirements simultaneously. We show that the effectiveness of our method depends on randomization in the design of the hash functions, and we derive a theoretically sound result to support the intuitiveness and reliability of using hash functions (via randomization) in the retrieval process to improve diversity. We modify the standard hash functions to take the distribution of the data into account for better performance. We also formulate diverse multi-label prediction (of images and web pages) in this setting and demonstrate the scalability and diversity of the solution. We demonstrate the effectiveness of our approach on three tasks: image category retrieval, multi-label classification, and image tagging. Our findings show that the proposed hash functions, in combination with existing diversity-based methods, significantly outperform standard methods that do not use hash functions. Our method allows a trade-off between accuracy and diversity using easy-to-tune parameters. We examine evaluation measures for diversity in several retrieval scenarios and introduce a new notion for simultaneously evaluating a method's performance on both the precision and diversity measures. Our proposal does not harm, but instead increases, the reliability of the measures in terms of accuracy and diversity, while ensuring a 100x speed-up over existing diverse retrieval approaches.
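For reference, the classic random-hyperplane LSH scheme (the unmodified starting point; the thesis's data-dependent variant is not reproduced here) can be sketched as:

    import numpy as np

    rng = np.random.default_rng(0)

    def make_planes(dim, n_bits):
        # Each bit of the code is the sign of a random projection.
        return rng.standard_normal((n_bits, dim))

    def hash_code(x, planes):
        return tuple((planes @ x > 0).astype(int))

    # Bucket a toy dataset; a query is compared only against its own bucket,
    # and the randomness of the planes is the source of diversity.
    data = rng.standard_normal((1000, 64))
    planes = make_planes(64, 8)
    buckets = {}
    for idx, x in enumerate(data):
        buckets.setdefault(hash_code(x, planes), []).append(idx)

    query = rng.standard_normal(64)
    candidates = buckets.get(hash_code(query, planes), [])
    print(len(candidates), "candidates examined instead of", len(data))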

In the second case, the method is based on learning distance metrics. We show that the effectiveness of our method depends on learned distance metrics that suit the user's interest. In instance-based image retrieval, relevance and diversity are relative to the user's viewpoint of the camera, the time of day, and the camera zoom. We argue that low-level image features fail to capture diversity with respect to high-level human semantics. We therefore use high-level semantic information to learn metrics and re-fashion the visual feature space to better appreciate diversity in retrieval. Our experiments confirm that, compared to the original feature space, the learned metrics provide better diversity in retrieval, making our proposal the best strategy from a learning perspective.
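Concretely (our notation), metric learning of this kind typically learns a positive semi-definite matrix $M$ and replaces the Euclidean distance with the Mahalanobis form $d_M(x, y) = \sqrt{(x - y)^T M (x - y)}$, so that images that are semantically similar are pulled together and dissimilar ones pushed apart in the re-fashioned feature space.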

In conclusion, this thesis discusses two fundamental ideas for retrieving diverse sets of results. From the algorithmic and statistical perspective, the proposed method uses "randomness as a resource" to improve diversity in retrieval while ensuring sub-linear retrieval time. From the visual perspective, the proposed method utilizes user-level semantics to learn metrics that improve diversity in instance-based image retrieval. We believe that the ideas presented in this thesis are not limited to image retrieval; applications to different definitions of diversity (visual, temporal, spatial, and topical), to the combination of knowledge sources (image and text), to interactive retrieval systems (relevance feedback), and so forth are possible.

 

Year of completion: October 2015
Advisor: Prof. C. V. Jawahar


Related Publications

  • Vidyadhar Rao, Ajitesh Gupta, Visesh Chari, C. V. Jawahar - Learning Metrics for Diversity in Instance Retrieval, Proceedings of the 5th National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics, 16-19 Dec 2015, Patna, India. [PDF]

  • Vidyadhar Rao and C. V. Jawahar - Semi-Supervised Clustering by Selecting Informative Constraints, Proceedings of the 5th International Conference on Pattern Recognition and Machine Intelligence, 10-14 Dec 2013, Kolkata, India. [PDF]


Downloads

thesis

 ppt
