
Mining Characteristic Patterns From Visual Data


Abhinav Goel

Recent years have seen the emergence of thousands of photo sharing websites on the Internet, where billions of photos are uploaded every day. All this visual content carries a large amount of information about people, objects and events around the globe. It is a treasure trove of useful information, readily available at the click of a button. At the same time, significant effort has been invested in the field of text mining, giving rise to powerful algorithms that extract meaningful information and scale to large datasets. This thesis leverages the strengths of text mining methods to solve real world computer vision problems. Applying such techniques to interpret images comes with its own set of challenges. The variability in feature representations of images makes it difficult to match images of the same object. Moreover, there is no prior knowledge about the position or scale of the objects that have to be mined from an image, so there is an effectively unbounded set of candidate windows to search.

The work at hand tackles these challenges in three real world settings. We first present a method to identify the owner of a photo album taken from a social networking site. We treat this as a problem of prominent person mining. We introduce a new notion of prominent persons, in which information about location, appearance and social context is incorporated into the mining algorithm so that the most prominent person can be mined effectively. A greedy solution based on an eigenface representation is proposed, and we mine prominent persons in a subset of dimensions of the eigenface space. We present excellent results on multiple datasets, both synthetic and real world collections downloaded from the Internet.
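The abstract only outlines the eigenface-based greedy mining, so the sketch below is an illustrative reading of it rather than the thesis algorithm: faces are projected onto an eigenface basis, and prominence is scored by how densely a face's neighbourhood is populated in a chosen subset of eigenface dimensions (the mine_prominent scoring and its radius parameter are assumptions).

import numpy as np

def eigenfaces(faces, k=16):
    # faces: (n_faces, n_pixels) array of flattened grayscale face crops.
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Right singular vectors of the centered data are the eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                  # (k, n_pixels)
    coords = centered @ basis.T     # (n_faces, k) eigenface coordinates
    return mean, basis, coords

def mine_prominent(coords, dims, radius=2.0):
    # Hypothetical prominence score: within the chosen subset of eigenface
    # dimensions, count how many faces fall within `radius` of each face
    # and return the face with the densest neighbourhood.
    sub = coords[:, dims]
    dists = np.linalg.norm(sub[:, None, :] - sub[None, :, :], axis=-1)
    support = (dists < radius).sum(axis=1)
    return int(support.argmax()), support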

We next explore the challenging problem of mining patterns from architectural categories. Our mining method avoids a large number of pair-wise comparisons by recasting the mining in a retrieval setting. Instance retrieval has emerged as a promising research area, with buildings as the popular test subject. Given a query image or region, the objective is to find images in the database containing the same object or scene. There has been a recent surge of effort in finding instances of the same building in challenging datasets such as the Oxford 5k, Oxford 100k and Paris datasets. We leverage the instance retrieval pipeline to solve multiple problems in computer vision. First, we ascend one level above instance retrieval and pose the question: Are Buildings Only Instances? Buildings located in the same geographical region, or constructed in a certain period of history, often follow a specific method of construction. These architectural styles are characterized by features that distinguish them from other styles of architecture. We therefore explore, beyond the idea of buildings as instances, the possibility that buildings can be categorized by architectural style. We perform experiments to evaluate how characteristic information obtained from low-level feature configurations can help classify buildings into architectural style categories. Encouraged by our observations, we mine characteristic features with semantic utility for different architectural styles from our dataset of European monuments. These mined features occur at various scales and provide insight into what makes a particular architectural style distinct. The utility of the mined characteristics is verified against Wikipedia.
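As an illustration of what "characteristic" could mean for low-level features, the sketch below ranks visual words by how strongly they favour one style over the rest; the tf-idf-style score and its smoothing are assumptions made for illustration, not the criterion used in the thesis.

import numpy as np

def characteristic_words(bow_histograms, style_labels, style, top_k=20):
    # bow_histograms: (n_images, n_words) bag-of-visual-words counts.
    # style_labels:   (n_images,) architectural style id per image.
    in_style = bow_histograms[style_labels == style].sum(axis=0) + 1.0
    out_style = bow_histograms[style_labels != style].sum(axis=0) + 1.0
    # Words that occur often inside the style but rarely outside score high.
    score = np.log(in_style / in_style.sum()) - np.log(out_style / out_style.sum())
    return np.argsort(score)[::-1][:top_k]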

We finally generalize the mining framework into an efficient scheme applicable to a wider variety of object categories. Often the location and spatial extent of an object in an image are unknown, and matches between objects of the same category are only approximate, which makes mining in such a setting hard. Recent methods model this problem as learning a separate classifier for each category. This is computationally expensive, since a large number of classifiers must be trained and evaluated before a concise set of meaningful objects can be mined. On the other hand, fast and efficient solutions have been proposed for the retrieval of instances (the same object) from large databases. We borrow the strengths of the instance retrieval pipeline and adapt them to speed up category mining. For this, we focus on objects that are “near-instances”. We mine several near-instance object categories from images obtained from Google Street View. Using an instance retrieval based solution, we are able to mine certain categories of near-instance objects much faster than an Exemplar SVM based solution. (more...)
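The contrast with per-category classifiers can be made concrete with a minimal inverted-index sketch: instead of training and evaluating an Exemplar SVM per candidate, database images are scored by the visual words they share with a query region (the voting threshold and data structures below are illustrative assumptions, not the thesis implementation).

from collections import defaultdict

def build_inverted_index(bow_of_images):
    # bow_of_images: list of sets of visual-word ids, one set per database image.
    index = defaultdict(set)
    for image_id, words in enumerate(bow_of_images):
        for w in words:
            index[w].add(image_id)
    return index

def retrieve_near_instances(query_words, index, min_shared=10):
    # Score database images by the number of visual words shared with the
    # query region; near-instances of a category should share many words
    # even when they are not the exact same object instance.
    votes = defaultdict(int)
    for w in query_words:
        for image_id in index.get(w, ()):
            votes[image_id] += 1
    return sorted((i for i, v in votes.items() if v >= min_shared),
                  key=lambda i: -votes[i])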

 

Year of completion:  August 2012
 Advisor : C. V. Jawahar

Related Publications

  • Abhinav Goel, Mayank Juneja and C. V. Jawahar - Are Buildings Only Instances? Exploration in Architectural Style Categories, Proceedings of the 8th Indian Conference on Vision, Graphics and Image Processing, 16-19 Dec. 2012, Bombay, India. [PDF]

  • Abhinav Goel and C. V. Jawahar - Whose Album is this?, Proceedings of the 3rd National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics, ISBN 978-0-7695-4599-8, pp. 82-85, 15-17 Dec. 2011, Hubli, India. [PDF]

  • Abhinav Goel, Mayank Juneja and C. V. Jawahar - Leveraging Instance Retrieval for Efficient Category Mining, Computer Vision and Pattern Recognition Workshops, 2013. [PDF]

Downloads

thesis

ppt

Instance Retrieval and Image Auto-Annotations on Mobile Devices


Jay Guru Panda (homepage)

Image matching is a well-studied problem in the computer vision community. Starting from template matching techniques, methods have evolved to achieve robust scale, rotation and translation invariant matching between two similar images. To this end, images are commonly represented by a set of descriptors extracted at salient local regions that are detected in a robust, invariant and repeatable manner. For efficient matching, a global descriptor for the image is computed either by quantizing the feature space of local descriptors or by using separate techniques to extract global image features. With this, effective indexing mechanisms are employed to perform efficient retrieval on large image databases.
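As a rough sketch of that pipeline (the choice of SIFT, the vocabulary and the normalization are generic assumptions, not necessarily what this work uses), local descriptors can be quantized against an offline-learned visual vocabulary to obtain a global bag-of-visual-words descriptor:

import cv2
import numpy as np

def local_descriptors(image_path, n_features=1000):
    # Detect salient local regions and describe them; SIFT is used here as a
    # typical invariant local descriptor.
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create(nfeatures=n_features)
    _, desc = sift.detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def bow_descriptor(desc, vocabulary):
    # Quantize local descriptors against a visual vocabulary (k-means cluster
    # centres learned offline) and return an L2-normalized bag-of-visual-words
    # histogram as the global image descriptor.
    d = np.linalg.norm(desc[:, None, :] - vocabulary[None, :, :], axis=-1)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(np.float32)
    return hist / (np.linalg.norm(hist) + 1e-8)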

Successful systems have been put in place in desktop and cloud environments to enable image search and retrieval. The retrieval takes a fraction of a second on a powerful desktop or server. However, such techniques are typically not well suited to less powerful computing devices such as mobile phones or tablets. These devices have limited storage capacity and memory, and computer vision algorithms run slower on them even when optimized for the architecture of mobile processors. These handheld devices, or so-called smart devices, are increasingly used for simple tasks that seem too trivial for a desktop or laptop and are easily handled on a smaller display. Further, owing to improved embedded camera sensors, they are more popularly used for taking pictures, gradually replacing digital cameras. Hence, a user is more likely to issue a query image from a mobile phone than from a desktop. This increases the scope of applications that demand real-time search and retrieval results delivered on a mobile phone.

Many applications (or apps) on mobile smart phones communicate with the cloud to perform tasks that are infeasible on the device. People have attempted to retrieve images in this cloud-based model by sending the image or its features to a server and receiving back the relevant information. We are interested in solving this problem on the device itself, with all the necessary computations happening on the mobile processor. This frees the user from depending on a consistent network connection and from the communication overheads associated with the search process. We address the range of applications that need simple text annotations to describe the image queried on the mobile. An interesting use case is a tourist, student or historian visiting a heritage site, who can get all the information about the monuments and structures on a mobile phone. Once the app is initialized on the device, the camera is opened, and by just pointing the camera or with a single click, all the useful information about the monument is displayed on the screen instantly. The app does not use the Internet to communicate with any server and does all computations on the mobile phone itself. Our methods optimize the process of instance retrieval to enable quick and light-weight processing on a mobile phone or tablet. (more...)

 

Year of completion:  December 2013
 Advisor : C. V. Jawahar

Related Publications

  • Jayaguru Panda, Michael S. Brown and C. V. Jawahar - Offline Mobile Instance Retrieval with a Small Memory Footprint, Proceedings of the International Conference on Computer Vision, 1-8 Dec. 2013, Sydney, Australia. [PDF]

  • Jayaguru Panda, Shashank Sharma and C. V. Jawahar - Heritage App: Annotating Images on Mobile Phones, Proceedings of the 8th Indian Conference on Vision, Graphics and Image Processing, 16-19 Dec. 2012, Bombay, India. [PDF]

  • Jayaguru Panda and C. V. Jawahar - Heritage App: Annotating Images on Mobile Phones, IAPR Second Asian Conference on Pattern Recognition (ACPR 2013), Okinawa, Japan, November 2013. [PDF]

Downloads

thesis

ppt

A Framework for Community Detection from Social Media.


Chandrashekar V (homepage)

The past decade has witnessed the emergence of the participatory Web and social media, bringing people together in many creative ways. Millions of users are playing, tagging, working and socializing online, demonstrating new forms of collaboration, communication and intelligence that were hardly imaginable just a short time ago. Social media refers to interaction among people in which they create, share and exchange information and ideas in virtual communities and networks. Social media also helps reshape business models, sways opinions and emotions, and opens up numerous possibilities to study human interaction and collective behavior at an unparalleled scale.

In the study of complex networks, a network is said to have community structure if its nodes can be grouped into (possibly overlapping) sets such that each set is densely connected internally. Community structure is quite common in real networks. Social networks often include community groups based on common location, interests, occupation, etc.; metabolic networks have communities based on functional groupings; citation networks form communities by research topic. Being able to identify these sub-structures within a network can provide insight into how network function and topology affect each other.

In this thesis, we design an end-to-end framework for identifying communities from raw, noisy social media data. The framework is composed of two phases. First, we introduce a method for converting the raw, noisy social media data into a weighted entity-entity co-occurrence based consistency network, including a simple iterative noise removal procedure that cleans the network by removing noisy entity pairs. Second, we propose an approach for identifying coherent communities from the weighted entity network by introducing novel notions of community-ness and community based on eigenvector centrality.
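A minimal sketch of these two phases, under stated assumptions: the consistency weighting is approximated by raw co-occurrence counts with a frequency threshold standing in for the iterative noise removal, and community-ness is read as the mean eigenvector centrality of a candidate set within its induced subgraph (both readings are illustrative, not the thesis's exact definitions).

import itertools
from collections import Counter

import networkx as nx

def cooccurrence_network(documents, min_weight=2):
    # documents: iterable of entity sets (e.g. the tags on one photo or movie).
    # Pairs co-occurring fewer than `min_weight` times are dropped as a crude
    # stand-in for the thesis's iterative noise-removal step.
    counts = Counter()
    for entities in documents:
        for a, b in itertools.combinations(sorted(set(entities)), 2):
            counts[(a, b)] += 1
    g = nx.Graph()
    g.add_weighted_edges_from((a, b, w) for (a, b), w in counts.items() if w >= min_weight)
    return g

def community_ness(graph, nodes):
    # One plausible reading of the centrality-based notion: mean eigenvector
    # centrality of the members within the induced subgraph.
    sub = graph.subgraph(nodes)
    cent = nx.eigenvector_centrality_numpy(sub, weight="weight")
    return sum(cent.values()) / len(cent)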

We use this framework to solve three different problems from two distinct domains. The first problem involves detecting communities from raw social media data and demonstrating their use in a recommendation engine setting. We use the framework to convert the raw data into a clean network and propose a highly parallelizable, seed-based greedy algorithm (sketched below) to detect as many communities as possible from the weighted entity consistency network. Our framework for community detection is unsupervised, domain agnostic, noise robust and computationally efficient, and can be used in different Web mining applications such as recommendation systems, topic detection and user profiling. We also design a recommendation system to compare our framework with existing state-of-the-art frameworks on a variety of large real-world social media data: Flickr, IMDB, Wikipedia, Bibsonomy and Medline. Our results outperform the other frameworks by a large margin.
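The seed-based greedy step could look roughly like the following sketch, which grows a community from a seed edge as long as the community score keeps improving; the seeding strategy, stopping rule and size cap are assumptions, and a score function such as the community_ness sketch above would be plugged in.

def grow_community(graph, seed_edge, score_fn, max_size=25):
    # Greedily add the neighbouring node that most improves the community
    # score, stopping when no addition helps or the size cap is reached.
    community = set(seed_edge)
    best = score_fn(graph, community)
    while len(community) < max_size:
        candidates = set().union(*(set(graph[n]) for n in community)) - community
        if not candidates:
            break
        gains = {c: score_fn(graph, community | {c}) for c in candidates}
        node, score = max(gains.items(), key=lambda kv: kv[1])
        if score <= best:
            break
        community.add(node)
        best = score
    return community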

The second problem is, given a set of communities discovered by traditional community detection methods, to identify loose communities among them and partition them into compact ones. Here, we use the second phase of our framework to identify such loose communities using our notion of community-ness, and propose an algorithm for partitioning them into compact communities. We illustrate the results of our algorithm on Amazon product and Flickr tag data and demonstrate its superiority over traditional community detection methods in a recommendation engine setting.

The third problem concerns the application of the framework to image annotation in the presence of noisy labels. The image annotation problem is: given an unknown image, predict the labels that best describe its semantics. This problem is best solved in a supervised nearest neighbor setting, and we show how our framework can be used to address it when the labels associated with the training images are noisy and redundant. (more...)
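The nearest-neighbor setting it plugs into can be sketched as plain label transfer from the k visually closest training images; in the thesis's scenario the noisy training labels would first be cleaned with the community framework before this step (the feature choice, k and the voting rule below are assumptions).

import numpy as np
from collections import Counter

def knn_annotate(query_feat, train_feats, train_labels, k=5, n_labels=5):
    # Baseline nearest-neighbour annotation: transfer the most frequent labels
    # from the k closest training images in feature space.
    d = np.linalg.norm(train_feats - query_feat[None, :], axis=1)
    neighbours = np.argsort(d)[:k]
    votes = Counter(l for i in neighbours for l in train_labels[i])
    return [label for label, _ in votes.most_common(n_labels)]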

 

Year of completion:  August 2013
 Advisor : C. V. Jawahar & Shailesh Kumar

 

Related Publications

  • Chandrashekar V, Shailesh Kumar and C. V. Jawahar - Image Annotation in Presence of Noisy Labels, Proceedings of the 5th International Conference on Pattern Recognition and Machine Intelligence, 10-14 Dec. 2013, Kolkata, India. [PDF]

  • Chandrashekar V, Shailesh Kumar and C. V. Jawahar - Compacting Large and Loose Communities, Proceedings of the 2nd Asian Conference on Pattern Recognition, 05-08 Nov. 2013, Okinawa, Japan. [PDF]

  • Shailesh Kumar, Chandrashekar V and C. V. Jawahar - Logical Itemset Mining, Proceedings of the IEEE International Conference on Data Mining Workshops, 10-13 Dec. 2012, ISBN 978-1-4673-5164-5, Brussels, Belgium. [PDF]


Downloads

thesis

ppt

Efficient Texture Mapping by Homogeneous Patch Discovery


R. Vikram Pratap Singh (homepage)

All visible objects have shape and texture. The main aim of computer graphics is to represent and render real world objects efficiently and realistically. To make objects look realistic from a geometric point of view, we have to make sure that both the shape and the texture of the object are accurate. In practice, shape is either hand crafted using 3D modeling tools such as Blender, or acquired from real world objects using 3D reconstruction techniques. Texture is the second aspect of appearance that must be ensured to make rendered objects look real. The texture has to be pasted on the surface in such a manner that it perceptually corresponds to the correct part of the mesh. This process of pasting a texture onto the surface of a mesh model is called texture mapping. Texture mapping can be done in two ways. The first is to texture a surface by synthesizing the texture directly on the surface. The second is to wrap a synthesized texture around the surface, cutting and merging the seams so that it fits correctly. To visualize this, think of the texture as a cloth: the first method is like weaving the cloth around the body so that it fits exactly like a sweater, while the second is like cutting and stitching an already woven cloth according to the shape of the mesh model. In this thesis we propose a new method that follows the second approach. The primary goal of our method is to map a texture onto a large mesh model at interactive rates, while maintaining the perceived quality.

The primary technique for mapping a flat texture (image) onto an arbitrarily shaped mesh model is to parameterize the shape, which defines a mapping from points on the mesh surface onto a 2D plane. When parameterizing a mesh model, we try to keep the geometric correspondence between the mesh vertices intact in order to reduce distortion of the texture. Typically, parameterizing a mesh model involves solving a set of linear equations representing the geometric correspondence of the triangles. The approach defines an energy function for the mapping and searches for a global optimum that minimizes the distortions introduced by the mapping. Such methods are capable of achieving texture mappings of high perceptual quality. However, typical energy minimization procedures are computationally expensive and cannot be applied in real-time applications or to large mesh models.
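To make the kind of linear system meant here concrete, the sketch below sets up a uniform-weight Tutte embedding, the simplest of the conventional parameterizations being contrasted with the thesis's approach: boundary vertices are pinned to the unit circle and each interior vertex is placed at the average of its neighbours by solving one sparse linear system (a minimal illustration, not the method proposed in the thesis).

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def tutte_parameterize(n_vertices, edges, boundary):
    # Flatten a disk-topology mesh to 2D: fix the boundary loop (given in
    # order) on the unit circle, place each interior vertex at the average of
    # its neighbours, and solve the resulting sparse Laplacian system.
    uv = np.zeros((n_vertices, 2))
    angles = 2 * np.pi * np.arange(len(boundary)) / len(boundary)
    uv[boundary] = np.column_stack([np.cos(angles), np.sin(angles)])

    interior = np.setdiff1d(np.arange(n_vertices), boundary)
    pos = {int(v): i for i, v in enumerate(interior)}

    adj = [[] for _ in range(n_vertices)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)

    rows, cols, vals = [], [], []
    rhs = np.zeros((len(interior), 2))
    for v in interior:
        i = pos[int(v)]
        rows.append(i); cols.append(i); vals.append(float(len(adj[v])))
        for nb in adj[v]:
            if nb in pos:                      # interior neighbour stays in the system
                rows.append(i); cols.append(pos[nb]); vals.append(-1.0)
            else:                              # fixed boundary neighbour moves to the rhs
                rhs[i] += uv[nb]
    L = sp.csr_matrix((vals, (rows, cols)), shape=(len(interior), len(interior)))

    solve = spla.factorized(L.tocsc())
    uv[interior, 0] = solve(rhs[:, 0])
    uv[interior, 1] = solve(rhs[:, 1])
    return uv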

To complement the proposed texture mapping algorithm, we introduce a method to make a texture self-tileable. This allows us to store only the texel structure when the required texture is repetitive. We present qualitative and quantitative results in comparison with several other texture mapping algorithms. The proposed algorithm is robust in terms of output quality and can find applications in scenarios such as rapid prototyping, where interactive texture mapping rates and the ability to deal with dynamic mesh topology are required. It can also be used in applications such as the visualization of large monuments, where we need to deal with large and noisy mesh models generated by techniques such as multi-view stereo. (more...)
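As an aside on what self-tileable means in practice, a generic overlap-and-fade trick (not the thesis's method) wraps one border of a texture onto the opposite one with a linear cross-fade so the result repeats without a visible seam:

import numpy as np

def make_tileable(texture, overlap=32):
    # texture: (H, W, C) image. The result is `overlap` pixels smaller per
    # axis but wraps around seamlessly: the last `overlap` columns are faded
    # into the first ones, then the same is done for rows.
    tex = texture.astype(np.float32)
    h, w = tex.shape[:2]

    a = np.linspace(1.0, 0.0, overlap)[None, :, None]   # 1 at the seam, 0 inside
    out = tex[:, : w - overlap].copy()
    out[:, :overlap] = a * tex[:, w - overlap:] + (1 - a) * tex[:, :overlap]

    b = np.linspace(1.0, 0.0, overlap)[:, None, None]
    out2 = out[: h - overlap].copy()
    out2[:overlap] = b * out[h - overlap:] + (1 - b) * out[:overlap]
    return out2.astype(texture.dtype)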


Some Results:

[Result images: rabbit, ursula, Dragon, Horse, Pegasus and Buddha models]

Year of completion:  June 2014
 Advisor : Anoop M. Namboodiri


Related Publications

  • R. Vikram Pratap Singh and Anoop M. Namboodiri - Efficient Texture Mapping by Homogeneous Patch Discovery, Proceedings of the 8th Indian Conference on Vision, Graphics and Image Processing, 16-19 Dec. 2012, Bombay, India (Oral).

Downloads

thesis

ppt

Image Mosaicing of Neonatal Retinal Images.


Akhilesh Bontala (homepage)

Image mosaicing is a data fusion technique used for increasing the field of view of an image. Deriving the mosaiced image entails integrating information from multiple images. Image mosaicing overcomes the limitations of a camera lens and helps create a wide field of view image of a 3D scene, and hence has a wide range of applications in various domains, including medical imaging. This thesis concerns the task of mosaicing neonatal retinal images to aid doctors in the diagnosis of retinopathy of prematurity (ROP). ROP is a vascular disease that affects low birth-weight, premature infants. The prognosis of ROP relies on information about the presence of abnormal vessel growth and fibrosis in the periphery of the retina. Diagnosis is based on a series of images obtained from a camera (such as the RetCam) to capture the complete retina; typically, as many as 20 to 30 images are captured and examined. In this thesis, we present a solution for mosaicing the RetCam images so that a comprehensive and complete view of the entire retina can be obtained in a single image for ROP diagnosis. The task is challenging given that the quality of the captured images is variable. Furthermore, the large spatial shift across consecutive frames makes them virtually unordered.

We propose a novel, hierarchical system for efficiently mosaicing an unordered set of RetCam images. It is a two-stage approach in which the input images are first partitioned into subsets, and the images in each subset are spatially aligned and combined to create intermediate results. These intermediate results are then spatially aligned and combined to create the final mosaic. Given n images, the number of registrations required to generate a mosaic with conventional approaches is O(n²), whereas it is O(n) for the proposed system. An alignment technique for low quality retinal images and a blending method that combines images based on vessel quality are also designed as part of this framework. Individual components of the system are evaluated and compared with other approaches. The overall system was also evaluated on a locally-sourced dataset consisting of neonatal retinal images of 10 infants with ROP. Quantitative results show a substantial increase in the field of view, and the vessel extent is also improved in the generated mosaics. The generated mosaics have been validated by experts as providing sufficient information for the diagnosis of ROP. (more...)
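The two-stage structure and its O(n) registration count can be sketched as below; ORB matching with a RANSAC homography and plain overwrite compositing are generic stand-ins (assumptions) for the thesis's low-quality-retina alignment and vessel-quality-based blending.

import cv2
import numpy as np

def pairwise_homography(ref, img, n_features=2000):
    # Estimate a homography warping `img` onto `ref` from matched local
    # features (ORB + RANSAC, a generic stand-in for the thesis's alignment).
    orb = cv2.ORB_create(nfeatures=n_features)
    k1, d1 = orb.detectAndCompute(ref, None)
    k2, d2 = orb.detectAndCompute(img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d2, d1)
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def hierarchical_mosaic(images, subset_size=5):
    # Stage 1: mosaic each subset; stage 2: mosaic the intermediates.
    # Each image is registered once against a growing mosaic, so roughly
    # n - 1 registrations are needed in total, i.e. O(n) rather than O(n^2).
    def mosaic(group):
        ref = group[0].copy()
        for img in group[1:]:
            H = pairwise_homography(ref, img)
            warped = cv2.warpPerspective(img, H, (ref.shape[1], ref.shape[0]))
            mask = warped.sum(axis=-1) > 0
            ref[mask] = warped[mask]   # simple overwrite compositing
        return ref

    subsets = [images[i:i + subset_size] for i in range(0, len(images), subset_size)]
    return mosaic([mosaic(s) for s in subsets])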

 

Year of completion:  July 2014
 Advisor : Jayanthi Sivaswamy

Related Publications

  • Akhilesh Bontala, Jayanthi Sivaswamy and Rajeev R Pappura - Image Mosaicing of Low Quality Neonatal Retinal Images, Proceedings of the IEEE International Symposium on Biomedical Imaging, 2-5 May 2012, ISBN 978-1-4577-1858-8, pp. 720-723, Barcelona, Spain. [PDF]


Downloads

thesis

ppt
