Techniques for Organization and Visualization of Community Photo Collections


Kumar Srijan (homepage)

The ongoing digital and information revolution has placed a huge and continuously growing number of images on the Internet. For example, a query for "Eiffel Tower" on Google Images returns more than two million images. The easy accessibility of this data provides unique opportunities to mine the contents of these images, not only for automatic organization but also for interactive interfaces to browse, explore and query them. The task is challenging given the massive size and continuous growth of these collections. Moreover, the images are taken under varying imaging conditions, with different cameras, at different resolutions, from different perspectives, and with different degrees of occlusion. Hence, for such collections even the simplest of tasks, such as finding matching images, turns out to be hard.

The Computer Vision community has been actively designing and redesigning algorithms to overcome these challenges. One of the most widespread and noticeable ideas is to extract robust, invariant and repeatable local features from the images and then quantize the feature space into visual words. The similarity of two images is gauged by the correspondence and similarity of their local features, and the matches are verified to eliminate spurious ones. Building a data structure such as an inverted index over these visual words speeds up the discovery of matching features. This mining of similar images by matching features forms the basis of all higher-level algorithms, such as clustering, skeletonization and summarization, which help in the organization, exploration and querying of these image collections. This thesis presents two novel algorithms that help achieve this goal.
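
As a toy illustration (hypothetical code, not the exact structures used in the thesis), an inverted index maps each visual word to a posting list of the images containing it, so candidate matches are found by voting over shared words rather than scanning the whole collection:

from collections import defaultdict

# Each image is represented by the set of visual-word IDs it contains
# (obtained by quantizing its local features against a vocabulary).
database = {
    "img_001": {3, 17, 42, 99},
    "img_002": {17, 42, 256},
    "img_003": {5, 99, 300},
}

# Build the inverted index: visual word -> posting list of image IDs.
inverted_index = defaultdict(list)
for image_id, words in database.items():
    for w in words:
        inverted_index[w].append(image_id)

def candidate_matches(query_words):
    """Vote for database images that share visual words with the query."""
    votes = defaultdict(int)
    for w in query_words:
        for image_id in inverted_index.get(w, []):
            votes[image_id] += 1
    # Images with the most shared words are verified geometrically later.
    return sorted(votes.items(), key=lambda kv: -kv[1])

print(candidate_matches({17, 42, 99}))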

First, we introduce a novel indexing scheme that makes it possible to do exhaustive pairwise matching in large image collections. The quantization of image features and their indexing provide only a limited amount of leverage for speeding up the image matching process, which depends upon the sparsity of the posting lists. This sparsity is controlled by the number of visual words, which beyond a point cannot be increased arbitrarily without affecting recall. Our scheme generates higher-order features by pairing up nearby features and encoding their affine geometry. This provides a much larger feature space to index, which can subsequently be reprojected to any desired size by defining appropriate hash functions. We implement our indexing scheme through an analogy with Bloom filters: the higher-order features extracted from each image are inserted into its equally sized Bloom filter using a single hash function. This uniformity allows a single inverted index to cover the hash buckets of all the Bloom filters, providing a simplified interface to implicitly query all of them. We choose the size of these Bloom filters in proportion to the size of the database, which enables querying in constant time, since the average size of the posting lists becomes constant. The use of such large implicit Bloom filters also sufficiently mitigates the negative effects of using a single hash function. As a result, we are able to do exhaustive pairwise matching over large databases of up to 100K images in linear time.
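
A minimal sketch of this idea, assuming a simple pair-of-features encoding; the names (pair_hash, quantize_geometry) and the geometry binning are illustrative, not the thesis's actual definitions:

import itertools

M = 2 ** 20          # Bloom filter size, grown in proportion to the database

def quantize_geometry(ga, gb, bins=8):
    # Hypothetical coarse encoding of the pair's relative (affine) geometry.
    return int(abs(ga - gb)) % bins

def pair_hash(wa, wb, geom_bin, m=M):
    """The single hash function shared by all (implicit) Bloom filters:
    it maps a higher-order (paired) feature to one of m buckets."""
    return hash((min(wa, wb), max(wa, wb), geom_bin)) % m

# One inverted index over buckets stands in for all Bloom filters at once:
# bucket -> posting list of images whose implicit filter has that bit set.
index = {}

def higher_order_features(features):
    # features: list of (visual_word, geometry) tuples; a real system would
    # pair only *nearby* features rather than all combinations shown here.
    for (wa, ga), (wb, gb) in itertools.combinations(features, 2):
        yield pair_hash(wa, wb, quantize_geometry(ga, gb))

def insert_image(image_id, features):
    for bucket in higher_order_features(features):
        index.setdefault(bucket, []).append(image_id)

def query_image(features):
    # Because M grows with the database, posting lists stay constant-sized
    # on average, so each query costs O(1) amortized.
    votes = {}
    for bucket in higher_order_features(features):
        for image_id in index.get(bucket, []):
            votes[image_id] = votes.get(image_id, 0) + 1
    return votes

insert_image("img1", [(3, 0.5), (17, 2.0), (42, 3.5)])
print(query_image([(3, 0.5), (17, 2.0)]))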

Second, we present a fast and easy-to-implement framework for browsing large image collections of landmarks and monumental sites. The existing framework, Phototourism, requires reconstructing the whole scene with a Structure from Motion package such as Bundler. This involves pairwise matching to generate tracks of matching features across images, followed by an incremental approach that starts from a seed reconstruction and adds matching images one at a time. The whole reconstruction must be continuously refined using a computationally expensive procedure called bundle adjustment. Pairwise matching and bundle adjustment thus become the limiting factors in scaling this technique to large image collections.

To overcome the issues faced with Phototourism, our framework employs independent partial reconstructions of the scene. We use a standard Bag of Words model and indexing techniques to determine the closest neighbours of each image in the collection, and do a local reconstruction for each image using only its neighbouring images. This requires us to solve multiple simple reconstruction problems instead of one large reconstruction problem, making the task computationally more tractable. Our browsing interface hops from one reconstruction to another to give the user the illusion of browsing a global reconstruction. Our approach also adapts easily to growing image collections, as adding an image only incurs the cost of creating one new independent reconstruction. We validate our approach on a Golkonda Fort image dataset consisting of 6K images.
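
A hedged sketch of the neighbour-selection step, using cosine similarity of tf-idf-weighted BoW vectors (illustrative code; the thesis's actual indexing may differ); each returned neighbourhood would then seed its own local reconstruction in a standard SfM pipeline:

import numpy as np

def tf_idf(bow):
    """bow: (num_images, vocab_size) raw visual-word counts."""
    df = (bow > 0).sum(axis=0) + 1e-9            # document frequency per word
    x = bow * np.log(bow.shape[0] / df)          # tf-idf weighting
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-9)

def neighbourhoods(bow, k=10):
    """For each image, the k most similar images; each group is then
    reconstructed independently of all the others."""
    x = tf_idf(bow)
    sim = x @ x.T                                # cosine similarity matrix
    np.fill_diagonal(sim, -np.inf)               # exclude the image itself
    return np.argsort(-sim, axis=1)[:, :k]

bow = np.random.randint(0, 5, size=(100, 1000)).astype(float)  # toy data
print(neighbourhoods(bow, k=5)[0])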

In summary, the techniques presented in this thesis for organizing large image collections address the problem of doing exhaustive pairwise matching in a scalable manner, for which a novel indexing scheme is proposed. We also present a novel technique for overcoming the problems faced when running Structure from Motion on large image collections. We hope that these techniques will find application in browsing and mining matching images in large image collections, and in creating virtual experiences of monuments and sites across the globe.

 

Year of completion: July 2013
Advisor: C. V. Jawahar

Related Publications


Downloads

thesis

ppt

Patient-Motion Analysis in Perfusion Weighted MRI.


Rohit Gautam (homepage)

Information about blood flow in the brain is of interest for detecting blockages and ruptures in the vessel network. A standard way of gathering this information is to inject a bolus of contrast agent into the blood stream and image over a period of time. The imaging generally extends over tens of minutes, during which the patient can move, corrupting the acquired time series of volumes. This problem is often observed in dynamic magnetic resonance (MR) imaging. Correcting for motion after scanning is a highly time-intensive process, since it involves registering each volume to a reference volume. Moreover, the injected contrast alters the signal intensity as a function of time and often confounds traditional motion correction algorithms. In this thesis, we present a fast and efficient solution for motion correction in 3D dynamic susceptibility contrast (DSC) MR images. We present a robust, multi-stage system based on a divide and conquer strategy consisting of the following steps: i) subdivision of the time series data into bolus and non-bolus phases depending on the status of the bolus in the brain, ii) 2D block-wise phase correlation for detecting motion between adjacent volumes and categorizing the corruption into four categories (none, minimal, mild and severe) depending on the degree of motion, and iii) a 2-pass, 3D registration consisting of intra-set and inter-set registrations to align the motion-corrupted volumes. The subdivision of the time series into distinct sets is achieved using gamma variate function (GVF) fitting. The dynamic, non-uniform variation in signal intensity due to the injected bolus is handled by a clustering-based identification of bolus-affected pixels, followed by correction of their intensity using the above GVF fitting.
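
The gamma variate function has a standard parametric form; a minimal fitting sketch with SciPy (synthetic data; the parameter values and bounds are illustrative, not taken from the thesis):

import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, k, t0, alpha, beta):
    """Standard gamma variate model of bolus passage through tissue:
    C(t) = k * (t - t0)^alpha * exp(-(t - t0) / beta), zero before t0."""
    dt = np.clip(t - t0, 0.0, None)
    return k * dt ** alpha * np.exp(-dt / beta)

# Synthetic concentration-time curve for one voxel (illustrative only).
t = np.linspace(0, 60, 61)
signal = gamma_variate(t, 5.0, 10.0, 2.0, 4.0) \
         + np.random.normal(0, 0.2, t.shape)

# Fit bounded parameters; the fitted curve marks the bolus phase and
# supplies a corrected intensity for bolus-affected pixels.
params, _ = curve_fit(gamma_variate, t, signal, p0=[1.0, 5.0, 1.0, 2.0],
                      bounds=([0, 0, 0.1, 0.1], [100, 30, 10, 20]))
print(dict(zip(["k", "t0", "alpha", "beta"], params)))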

The proposed system was evaluated on a real DSC MR sequence by introducing motion of varying degrees. The experimental results show that the entropy of the derived motion fields is a good metric for detecting and categorizing motion. Evaluation of the motion correction using the Dice coefficient shows that the system removes motion accurately and efficiently. The efficiency comes from both the proposed detection and the proposed correction strategy: adding the detection stage ahead of existing correction methods saved 37% of the computation time, and combining it with the proposed correction stage increased the savings to 63%. Notably, this performance was achieved with no trade-off between accuracy and computation cost.

 

Year of completion: October 2013
Advisor: Jayanthi Sivaswamy

Related Publications


Downloads

thesis

ppt

Bag of Words and Bag of Parts models for Scene Classification in Images.


Mayank Juneja (homepage)

Scene classification has been an active area of research in Computer Vision. The goal of scene classification is to classify an unseen image into one of the scene categories, e.g. beach, cityscape, auditorium, etc. Indoor scene classification in particular is a challenging problem because of the large variations in viewpoint and the high clutter in the scenes; examples of indoor scene categories are corridor, airport and kitchen. Standard classification models generally do not work well for indoor scene categories. The main difficulty is that while some indoor scenes (e.g. corridors) can be well characterized by global spatial properties, others (e.g. bookstores) are better characterized by the objects they contain. The problem requires a model that can combine both local and global information in the images. Motivated by the recent success of the Bag of Words model, we apply it specifically to the problem of indoor scene classification. Our well-designed Bag of Words pipeline, which uses the best options for every step, achieves state-of-the-art results on the MIT 67 indoor scene dataset, beating all previous results. We also look at a new method for partitioning images into spatial cells, which can be used as an extension of standard Spatial Pyramid Matching (SPM). The new partitioning is designed for scene classification tasks, where a non-uniform partitioning based on the different regions is more useful than a uniform one.
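
For concreteness, a small sketch of a standard SPM-style descriptor with uniform grids (hypothetical code; the thesis's non-uniform partitioning would change only how the cell boundaries are chosen):

import numpy as np

def spatial_pyramid_histogram(xy, words, vocab_size, img_w, img_h,
                              grids=(1, 2, 4)):
    """Concatenate visual-word histograms over a pyramid of spatial cells.
    xy: (n, 2) keypoint positions; words: (n,) visual-word IDs."""
    parts = []
    for g in grids:                                   # 1x1, 2x2, 4x4 cells
        cx = np.minimum((xy[:, 0] * g / img_w).astype(int), g - 1)
        cy = np.minimum((xy[:, 1] * g / img_h).astype(int), g - 1)
        for i in range(g):
            for j in range(g):
                in_cell = (cx == i) & (cy == j)
                parts.append(np.bincount(words[in_cell],
                                         minlength=vocab_size))
    h = np.concatenate(parts).astype(float)
    return h / (h.sum() + 1e-9)                       # L1-normalize

xy = np.random.rand(300, 2) * [640, 480]              # toy keypoints
words = np.random.randint(0, 1000, 300)
print(spatial_pyramid_histogram(xy, words, 1000, 640, 480).shape)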

We also propose a new image representation that takes into account the discriminative parts of scenes and represents an image using these parts. The new representation, called Bag of Parts, can discover parts automatically and with very little supervision. We show that the Bag of Parts representation captures the discriminative parts/objects of the scenes and achieves good classification results on the MIT 67 indoor scene dataset. Beyond good classification results, the discovered blocks correspond to semantically meaningful parts/objects. This mid-level representation is more interpretable than low-level representations (e.g. SIFT) and can be used for various other Computer Vision tasks as well. Finally, we show that the Bag of Parts representation is complementary to the Bag of Words representation, and combining the two gives an additional boost to the classification performance. The combined representation establishes a new state-of-the-art benchmark on the MIT 67 indoor scene dataset, improving the previous state of the art from 49.40% to 63.10%.
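
The combination of the two representations can be pictured as concatenating the normalized feature vectors and training a linear classifier; a hypothetical sketch with placeholder data, not the actual pipeline:

import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical precomputed features: one row per image.
bow = np.random.rand(200, 4000)    # Bag of Words histograms
bop = np.random.rand(200, 1000)    # Bag of Parts responses
labels = np.random.randint(0, 67, 200)   # 67 indoor scene categories

def l2n(x):
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-9)

# Fusion by concatenation of the L2-normalized representations.
combined = np.hstack([l2n(bow), l2n(bop)])
clf = LinearSVC(C=1.0).fit(combined, labels)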


 

Year of completion: October 2013
Advisors: C. V. Jawahar & Andrew Zisserman

Related Publications

  • Mayank Juneja, Andrea Vedaldi, C V Jawahar and Andrew Zisserman - Blocks that Shout: Distinctive Parts for Scene Classification Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 23-28 June 2013, Portland, Oregon, USA. [PDF]

  • Abhinav Goel, Mayank Juneja and C V Jawahar - Are Buildings Only Instances? Exploration in Architectural Style Categories Proceedings of the 8th Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP), 16-19 Dec. 2012, Mumbai, India. [PDF]


Downloads

thesis

ppt

Analysis of Stroke on Brain Computed Tomography Scans.


Saurabh Sharma

Stroke is one of the leading causes of death and disability in the world. Early detection of stroke (both hemorrhagic and ischemic) is very important, as it can enable up to full recovery. Timely detection, especially of ischemic stroke, is difficult because the changes in abnormal tissue become visible only after the damage has already been done. Detection is even harder on CT scans than on other imaging modalities, but the dependence of a large fraction of the population on CT makes the need for a solution all the more imperative. Though the detection accuracy of radiologists for early stroke depends on various factors such as experience and available technology, earlier estimates put the accuracy around 10% [45]. Even with considerable advancement in CT technology, the performance has only increased to around 70% [21]. Any assistance that can improve radiologists' detection accuracy would therefore be much appreciated.

This thesis presents a framework for automatic detection and classification of different types of stroke. We characterize stroke as a distortion in the otherwise contralaterally similar distribution of brain tissue; classification depends on the severity of the distortion, with hemorrhage and chronic infarcts exhibiting the maximum distortion and hyperacute stroke the minimum. The detection work on hemorrhagic and early ischemic stroke has clinical value, whereas the work on later stages of ischemic stroke is mainly of academic use. The automatic detection approach was tested on a dataset containing 19 normal (291 slices) and 23 abnormal (181 slices) scans. The algorithm gave a high recall rate at slice level for hemorrhagic (80%), chronic (95%), acute (91.80%) and hyperacute (82.22%) stroke; the corresponding precision figures were 93.3%, 90.47%, 87.5% and 69.81% respectively. In a normal vs. stroke-affected scenario the system achieved 83.95% precision and 86.74% recall. The lower precision for hyperacute scans is due to a large number of normal slices with slight disturbances in contralateral symmetry being identified as stroke cases. We also present a novel approach for enhancing early ischemic stroke regions using image-adaptive window parameters, to aid radiologists in the manual detection of early ischemic stroke. The enhancement increased the average accuracy of radiologists in clinical conditions from around 71% to around 90% (p=0.02, two-tailed Student's t-test), with inexperienced radiologists benefiting more. The average reviewing time was also reduced from about 9 to 6 seconds per slice. Of the two approaches, automatic detection and enhancement, results show the enhancement process to be the more promising.
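
A toy sketch of the underlying idea of measuring distortions of contralateral symmetry, assuming a midline-aligned axial slice; the threshold and data are illustrative only, not the thesis's actual algorithm:

import numpy as np

def asymmetry_map(slice_img):
    """Difference between a (midline-aligned) axial slice and its
    left-right mirror image; large asymmetric regions are candidate
    stroke territory."""
    mirrored = slice_img[:, ::-1]
    return np.abs(slice_img.astype(float) - mirrored)

def asymmetry_score(slice_img, threshold=20):
    # Fraction of pixels breaking contralateral symmetry.
    return (asymmetry_map(slice_img) > threshold).mean()

slice_img = np.random.randint(0, 80, (256, 256))   # toy HU-like values
print(asymmetry_score(slice_img))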

 

Year of completion: October 2013
Advisor: Jayanthi Sivaswamy

Related Publications

  • Saurabh Sharma, Jayanthi Sivaswamy, Power Ravuri and L.T. Kishore - Assisting Acute Infarct Detection from Non-contrast CT using Image Adaptive Window Setting Proceedings of the 14th Conference on Medical Image Perception (MIPS 2011), 9-11 Aug. 2011, Dublin, Ireland. [PDF]

  • Mayank Chawla, Saurabh Sharma, Jayanthi Sivaswamy and L.T. Kishore - A Method for Automatic Detection and Classification of Stroke from Brain CT Images Proceedings of the 31st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC'09), 2-6 Sep. 2009, Minneapolis, USA. [PDF]

  • Saurabh Sharma and Jayanthi Sivaswamy - Automatic Detection of Early Infarct from Brain CT International Symposium on Medical Imaging, in conjunction with ICVGIP 2010. [PDF]

Downloads

thesis

ppt

Multi-modal Semantic Indexing for Image Retrieval.


Pulla Lakshmi Chandrika (homepage)

Many image retrieval schemes rely on a single mode (either low-level visual features or embedded text) for searching multimedia databases. In the text-based approach, the annotated text is used for indexing and retrieval of images. Though such annotations are very powerful in matching the context of the images, the cost of annotation is very high and the whole process suffers from the subjectivity of the descriptors. In the content-based approach, the indexing and retrieval of images is based on visual content such as color, texture and shape. While these methods are robust and effective, they are still bottlenecked by the semantic gap: there is a significant gap between the high-level concepts that humans perceive and the low-level features used to describe images. Many approaches (such as semantic analysis) have been proposed to bridge this semantic gap between numerical image features and the richness of human semantics.

Semantic analysis techniques were first introduced in text retrieval, where a document collection can be viewed as an unsupervised clustering of the constituent words and documents around hidden or latent concepts. Latent Semantic Indexing (LSI), probabilistic Latent Semantic Analysis (pLSA) and Latent Dirichlet Allocation (LDA) are the popular techniques in this direction. With the introduction of bag of words (BoW) methods in computer vision, semantic analysis schemes became popular for tasks like scene classification, segmentation and content-based image retrieval, and have been shown to improve the performance of the visual bag of words in image retrieval. Most of these methods rely on text or image content alone.
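
As a reminder of the core operation, LSI amounts to a truncated SVD of the term-document (here, visual-word by image) matrix; a minimal sketch with toy data:

import numpy as np

def lsi(term_doc, k=10):
    """Latent Semantic Indexing: a rank-k SVD of the term-document
    matrix projects both terms and documents into a k-dimensional
    latent concept space."""
    u, s, vt = np.linalg.svd(term_doc, full_matrices=False)
    return u[:, :k] * s[:k], vt[:k].T    # term and document embeddings

words_by_images = np.random.rand(500, 50)   # toy visual-word counts
term_vecs, doc_vecs = lsi(words_by_images, k=10)
# Retrieval: rank images by cosine similarity of doc_vecs to the query.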

Many popular image collections (e.g. those emerging over the Internet) have associated tags, often meant for human consumption. A natural extension is to combine information from multiple modes to enhance retrieval effectiveness.

The performance of semantic indexing techniques depends heavily on the right choice of the number of semantic concepts. Moreover, all of these techniques require complex mathematical computations involving large matrices, which makes them difficult to use for continuously evolving data, where re-running semantic indexing after the addition of every new image is prohibitive. In this thesis we introduce and extend a bipartite graph model (BGM) for image retrieval. BGM is a scalable data structure that aids semantic indexing in an efficient manner and can be updated incrementally. It uses tf-idf values for building a semantic bipartite graph. We also introduce a graph partitioning algorithm that works on the BGM to retrieve semantically relevant images from a database. We demonstrate the properties as well as the performance of our semantic indexing scheme through a series of experiments.
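
A toy sketch of a bipartite word-image graph with tf-idf edge weights, illustrating why incremental updates are cheap (the class name and methods are hypothetical, not the thesis's API):

from collections import defaultdict
import math

class BipartiteGraphModel:
    """Toy word-image bipartite graph with tf-idf edge weights.
    Adding an image only touches that image's own edges, so the
    structure can be updated incrementally as the collection grows."""
    def __init__(self):
        self.word_to_images = defaultdict(dict)   # word -> {image: tf}
        self.num_images = 0

    def add_image(self, image_id, word_counts):
        total = sum(word_counts.values())
        for w, c in word_counts.items():
            self.word_to_images[w][image_id] = c / total   # term frequency
        self.num_images += 1

    def edge_weight(self, word, image_id):
        posting = self.word_to_images[word]
        idf = math.log(self.num_images / len(posting))
        return posting.get(image_id, 0.0) * idf

g = BipartiteGraphModel()
g.add_image("img1", {3: 2, 7: 1})
g.add_image("img2", {7: 4, 9: 1})
print(g.edge_weight(9, "img2"))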

We then propose two techniques: Multi-modal Latent Semantic Indexing (MMLSI) and Multi-modal Probabilistic Latent Semantic Analysis (MMpLSA), obtained by directly extending their traditional single-mode counterparts. Both methods incorporate visual features and tags by generating simultaneous semantic contexts. The experimental results demonstrate improved accuracy over other single- and multi-modal methods.
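
One simplified way to picture the direct extension: stack the visual-word and tag matrices over the same images and factorize them jointly (an illustrative reading, not the exact MMLSI formulation):

import numpy as np

# Toy multi-modal term-document matrix: visual words and tags
# stacked over the same set of images.
visual = np.random.rand(500, 50)   # visual-word counts per image
tags = np.random.rand(80, 50)      # tag counts per image

multimodal = np.vstack([visual, tags])
u, s, vt = np.linalg.svd(multimodal, full_matrices=False)
doc_vecs = vt[:10].T               # images in a shared latent space
# Both modalities now contribute to the same semantic concepts.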

We also propose a tripartite graph based representation of the multi-modal data for image retrieval tasks. This representation is ideally suited for dynamically changing or evolving datasets, where repeated semantic indexing is practically impossible. We employ a graph partitioning algorithm for retrieving semantically relevant images from a database represented using the tripartite graph. Being "just in time" semantic indexing, our method is computationally light and less resource intensive. Experimental results show that the data structure used is scalable, and that the performance of our method is comparable with other multi-modal approaches at significantly lower computational and resource requirements.

 

Year of completion: December 2013
Advisor: C. V. Jawahar

Related Publications

  • Chandrika Pulla and C.V. Jawahar - Tripartite Graph Models for Multi Modal Image Retrieval Proceedings of the 21st British Machine Vision Conference (BMVC'10), 31 Aug. - 3 Sep. 2010, Aberystwyth, UK. [PDF]

  • Chandrika Pulla, Suman Karthik and C.V. Jawahar - Efficient Semantic Indexing for Image Retrieval Proceedings of the 20th International Conference on Pattern Recognition (ICPR'10), 23-26 Aug. 2010, Istanbul, Turkey. [PDF]

  • Pulla Chandrika and C.V. Jawahar - Multi Modal Semantic Indexing for Image Retrieval Proceedings of the ACM International Conference on Image and Video Retrieval (CIVR'10), pp. 342-349, 5-7 July 2010, Xi'an, China. [PDF]

  • Suman Karthik, Chandrika Pulla and C.V. Jawahar - Incremental Online Semantic Indexing for Image Retrieval in Dynamic Databases Proceedings of the International Workshop on Semantic Learning Applications in Multimedia (SLAM, CVPR 2009), 20-25 June 2009, Miami, Florida, USA. [PDF]


Downloads

thesis

ppt
