
Document Image Retrieval using the Bag of Visual Words Model.


Ravi Shekhar (homepage)

With the development of computer technology, storage has become more affordable. As a result, traditional libraries are making their collections available electronically, on the internet or on digital media. Searching these collections requires a good indexing structure, which can be built manually or automatically. Manual indexing is time-consuming and expensive, so automation is the only feasible option. The first approach in this direction was to convert document images into text with an optical character recognizer (OCR) and then perform traditional text search. This works well for European languages, whose OCRs are robust, but is unreliable for Indian and other non-European languages such as Hindi, Telugu and Malayalam. Even for European languages, OCRs perform poorly on highly degraded document images.

To overcome the limitations of OCR, word spotting emerged as an alternative. In word spotting, a word image is represented by features, and matching is done by comparing feature distances. The main challenges are extracting suitable features and designing the comparison mechanism. Profile features are a popular way to represent a word image, with Euclidean distance used for comparison. To handle varying word lengths, all images are scaled to the same size, which loses a lot of information. Later, a Dynamic Time Warping (DTW) based matching approach was proposed, which does not require scaling the images. The problem with DTW is that it is very slow and cannot be used in practical, large-scale applications.
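A minimal sketch of DTW matching between two 1-D profile sequences illustrates why no rescaling is needed, and why the quadratic cost per comparison makes it slow at scale (the function and the toy profiles below are illustrative, not taken from the thesis):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D profile sequences.

    Unlike Euclidean matching, the two sequences may have different
    lengths, but each comparison costs O(len(a) * len(b)).
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two projection profiles of different lengths: DTW still aligns them.
p1 = np.array([0, 2, 5, 5, 2, 0], dtype=float)
p2 = np.array([0, 2, 5, 5, 5, 2, 0], dtype=float)  # same word, stretched
print(dtw_distance(p1, p2))  # → 0.0
```

The warping path absorbs the stretched column at zero cost, so the two profiles match exactly despite their different lengths.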

In the first part of this thesis, we explain a Bag of Visual Words (BoVW) based approach to retrieve similar word images from a large database efficiently and accurately. We show that a text retrieval system can be adapted to build a word image retrieval solution, which helps in achieving scalability. We demonstrate the method on more than 1 million word images with sub-second retrieval time, validate it on four Indian languages, and report a mean average precision of more than 0.75. We represent a word image as a histogram of the visual words present in it. Visual words are quantized representations of local regions; in this work, SIFT descriptors computed at interest points are used as feature vectors. To address the lack of spatial structure in the BoVW representation, we re-rank the retrieved list, which significantly improves performance.
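The core BoVW step, quantizing each local descriptor to its nearest codeword and histogramming the result, can be sketched as follows (the 2-D toy "descriptors" stand in for 128-D SIFT vectors, and the tiny codebook is invented for illustration):

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest codeword and return
    an L1-normalised visual-word histogram for the image."""
    # Pairwise squared Euclidean distances, shape (n_desc, n_words).
    d = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Toy stand-ins: 2-D "descriptors" instead of 128-D SIFT, 3-word codebook.
codebook = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
descs = np.array([[0.1, 0.2], [4.9, 5.1], [5.2, 4.8], [9.8, 0.1]])
print(bovw_histogram(descs, codebook))  # histogram: [0.25, 0.5, 0.25]
```

In a real system the codebook would come from clustering (e.g. k-means over millions of SIFT descriptors), and the histograms would be indexed with an inverted file, exactly as in text retrieval, which is what makes the approach scale.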

Later, we enhance the BoVW approach with query expansion, text query support and a Locality-constrained Linear Coding (LLC) based retrieval system. In query expansion, we use the initial results to modify the query and obtain better results. In the BoVW model, the query is given by example, but users are generally interested in querying by text, as in "Google" or "Bing"; text queries are supported by the same BoVW model. Finally, LLC is used to achieve high recall: the LLC scheme learns a data representation using the nearest codewords and improves performance.
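One common form of query expansion is averaging the query histogram with the top-ranked results and re-querying; the sketch below assumes this average-query variant (the thesis may use a different expansion rule):

```python
import numpy as np

def expand_query(query_hist, ranked_hists, k=3):
    """Average-query expansion: fold the top-k retrieved histograms back
    into the query and re-normalise. An assumed variant for illustration."""
    expanded = query_hist + ranked_hists[:k].sum(axis=0)
    return expanded / expanded.sum()

# Toy 3-word histograms: a query and its top-3 retrieved results.
query = np.array([0.5, 0.5, 0.0])
top = np.array([[0.4, 0.4, 0.2],
                [0.6, 0.3, 0.1],
                [0.5, 0.5, 0.0]])
print(expand_query(query, top))  # [0.5, 0.425, 0.075]
```

The expanded query picks up visual words that were missing from the original (here the third word), which is how expansion recovers matches the first pass ranked too low.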

Most scalable document search systems use features such as SIFT and SURF, which were originally designed for natural images. Natural images are gray-scale or color images with a lot of variation and extra information, whereas document images are binary, so they need features designed specifically for them. We propose patch-based features built from profile features and compare them with SIFT, obtaining similar performance. Our feature has the advantage of being faster to compute than SIFT, which makes our pre-processing step very fast.
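Profile features for a binary word image are cheap column statistics: the vertical projection plus the first and last ink rows per column. A minimal sketch (the exact patch-based construction in the thesis differs; this only shows the underlying profiles):

```python
import numpy as np

def profile_features(word_img):
    """Profile features of a binary word image (ink = 1):
    per-column ink count, plus upper and lower ink profiles."""
    h, w = word_img.shape
    projection = word_img.sum(axis=0)               # ink count per column
    has_ink = projection > 0
    upper = np.where(has_ink, word_img.argmax(axis=0), h)  # first ink row
    lower = np.where(has_ink, h - 1 - word_img[::-1].argmax(axis=0), -1)
    return projection, upper, lower

img = np.zeros((4, 3), dtype=int)
img[1:3, 1] = 1          # a short vertical stroke in the middle column
proj, up, lo = profile_features(img)
print(proj, up, lo)      # [0 2 0], upper/lower profiles per column
```

Everything here is a column-wise reduction over a binary array, which is why such features are much cheaper to compute than gradient-histogram descriptors like SIFT.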

We demonstrate that a recognition-free approach is feasible for large-scale word image retrieval. In future work, a similar approach can be applied to handwritten documents, camera-based retrieval and natural scene text retrieval. A better, more efficient descriptor designed specifically for document words is also needed.

 

Year of completion:  June 2013
 Advisor : C. V. Jawahar

Related Publications


Downloads

thesis

ppt

Learning Representations for Computer Vision Tasks.


Siddhartha Chandra (homepage)

Learning representations for computer vision tasks has long been the holy grail of the vision community. Research in computer vision is dedicated to developing machines that understand image data, which takes a variety of forms: images, video sequences, views from multiple cameras, high-dimensional data from medical scanners, and so on. Good representations of the data aim to discover its hidden structure; better insights into the nature of the data can help choose or create better features, learn better similarity measures between data points, build better predictive and descriptive models, and ultimately drive better decisions from data. Research in this field has shown that good representations are more often than not task specific: there is no single universal set of features that solves all problems in computer vision. Consequently, feature learning for computer vision is not one problem but a set of problems, a full-fledged field of research in itself.

In this thesis, we seek to learn good, semantically meaningful representations for some popular computer vision tasks, such as visual classification and action recognition. We study and employ a variety of existing feature learning approaches and devise novel strategies for learning representations for these tasks. We also compare our methods with traditional approaches, discuss our design choices and the effects of varying the parameters of our approaches, and provide empirical evidence that our representations solve the tasks at hand better than traditional ones.

To solve the task of action recognition, we devise a novel PLS kernel that employs Partial Least Squares (PLS) regression to derive a scalar measure of similarity between two video sequences. We use this similarity kernel to solve hand gesture recognition and action classification, and demonstrate that our approach significantly outperforms state-of-the-art approaches on two popular datasets: the Cambridge hand gesture dataset and the UCF sports action dataset.
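The flavor of a regression-based sequence kernel can be sketched with a simple proxy: the top singular value of the cross-covariance between the standardized frame features of two equal-length sequences. This is only an illustrative stand-in; the thesis's PLS kernel derives its score from actual PLS regression, not from this shortcut:

```python
import numpy as np

def sequence_similarity(X, Y):
    """Illustrative scalar similarity between two equal-length sequences of
    frame features (rows = frames): top singular value of the standardised
    cross-covariance. A proxy for a PLS-derived kernel, not the real thing."""
    Xc = (X - X.mean(0)) / (X.std(0) + 1e-8)
    Yc = (Y - Y.mean(0)) / (Y.std(0) + 1e-8)
    C = Xc.T @ Yc / len(Xc)
    return float(np.linalg.svd(C, compute_uv=False)[0])

# Toy frame features for a 4-frame sequence.
A = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 1.0], [3.0, 0.0]])
print(sequence_similarity(A, A))
```

Because the singular values of a matrix and its transpose coincide, the measure is symmetric in its two arguments, one of the properties one wants from a kernel.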

We use a variety of approaches to tackle the popular task of visual classification. We describe a novel hierarchical feature learning strategy that uses low-level Bag of Words visual words to create "higher level" features by exploiting the spatial context in images. Our model uses a novel Naive Bayes Clustering algorithm to convert a 2-D symbolic image at one level into a 2-D symbolic image with richer features at the next level. On two popular datasets, Pascal VOC 2007 and Caltech 101, we demonstrate the superiority of our representations over traditional BoW and deep learning representations.

Driven by the hypothesis that most data, such as images, lies on multiple non-linear manifolds, we propose a novel non-linear subspace clustering framework that uses K Restricted Boltzmann Machines (K-RBMs) to learn non-linear manifolds in the raw image space. We solve the coupled problem of finding the right non-linear manifolds in the input space and associating image patches with those manifolds using an iterative Expectation Maximization (EM) like algorithm that minimizes the overall reconstruction error. Our clustering framework is comparable to state-of-the-art clustering approaches on a variety of synthetic and real datasets. We further employ K-RBMs for feature learning from raw images. Extensive empirical results on several popular image classification datasets show that this framework outperforms traditional feature representations such as the SIFT-based Bag of Words (BoW) and convolutional deep belief networks.
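The alternating structure of that algorithm, assign each sample to the model that reconstructs it best, then refit each model, can be sketched with the simplest possible "model", a centroid. This reduces to k-means; in the thesis each model is an RBM and the assignment cost is RBM reconstruction error:

```python
import numpy as np

def cluster_by_reconstruction(X, k, iters=20, seed=0):
    """Hard-EM clustering: (E) assign each sample to the model that
    reconstructs it best; (M) refit each model on its samples. Here a
    'model' is just a centroid, standing in for an RBM."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # E-step: squared reconstruction error to every model.
        err = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        assign = err.argmin(1)
        # M-step: refit each model on the samples assigned to it.
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(0)
    return assign, centers

# Two well-separated toy clusters of image-patch-like vectors.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
assign, centers = cluster_by_reconstruction(X, 2)
print(assign)
```

Swapping the centroid distance for the reconstruction error of K separately trained RBMs turns this same loop into a non-linear subspace clustering of the kind the thesis describes.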

This thesis is an account of our efforts to contribute to this fascinating field. Research in this field will continue for a long time, for solving computer vision is still a distant dream. We hope that we have earned the right to say one day, in retrospect, that we were on the right track.

(more...)

 

Year of completion:  March 2013
 Advisor : C. V. Jawahar

Related Publications


Downloads

thesis

ppt

A Study of X-Ray Image Perception for Pneumoconiosis Detection


Varun Jampani

Pneumoconiosis is an occupational lung disease caused by the inhalation of industrial dust. Despite increasing safety measures and better workplace environments, pneumoconiosis is deemed the most common occupational disease in developing countries like India and China. Screening and assessment of this disease are done through radiological observation of chest x-rays. Several studies have shown significant inter- and intra-observer variation in the diagnosis of this disease, demonstrating the complexity of the task and the importance of expertise in diagnosis.

The present study aims at understanding the perceptual and cognitive factors affecting the reading of chest x-rays of pneumoconiosis patients. Understanding these factors helps in developing better image acquisition systems, better training regimens for radiologists and better computer-aided diagnostic (CAD) systems. We used an eye tracking experiment to study the various factors affecting the assessment of this diffuse lung disease. Specifically, we aimed at understanding the roles of expertise and of the contralateral symmetric (CS) information present in chest x-rays in the diagnosis and in the eye movements of the observers. We also studied inter- and intra-observer fixation consistency, along with the role of anatomical and bottom-up saliency features in attracting the gaze of observers of different expertise levels, to gain better insight into the effect of bottom-up and top-down visual saliency on observers' eye movements.

The experiment was conducted in a room dedicated to eye tracking experiments. Participants, consisting of novices (3), medical students (12), residents (4) and staff radiologists (4), were presented with good quality PA chest x-rays and asked to give profusion ratings for each of the 6 lung zones. An image set consisting of 17 normal full-chest x-rays and 16 single-lung images was shown to the participants in random order. Diagnosis time and eye movements were recorded using a remote head-free eye tracker.

[Figure: gaze heatmap on a chest x-ray, alongside the original image]

Results indicated that expertise and CS information play important roles in the diagnosis of pneumoconiosis. Novices and medical students are slow and inefficient, whereas residents and staff are quick and efficient. A key finding of our study is that the presence of CS information alone does not improve diagnosis as much as learning how to use that information, a skill which appears to be gained from focused training and years of experience. Hence, good training for radiologists and careful observation of each lung zone may improve the quality of diagnostic results. For residents, eye scanning strategies play an important role in using the CS information present in chest radiographs; for staff radiologists, peripheral vision or higher-level cognitive processes seem to play that role.

There is reasonably good inter- and intra-observer fixation consistency, suggesting the use of similar viewing strategies. Experience helps observers develop new visual strategies based on image content, so that they can quickly and efficiently assess the disease level. The first few fixations seem to play an important role in choosing a visual strategy appropriate for the given image.

Observers give equal importance to inter-rib and rib regions. Although the reading of chest x-rays is highly task dependent, bottom-up saliency is shown to play an important role in attracting observers' fixations, and this role seems greater in lower-expertise groups than in higher-expertise groups. Both bottom-up and top-down influences on visual fixations seem to change with time. The relative roles of top-down and bottom-up influences on visual attention are still not completely understood and remain part of future work.

Based on our experimental results, we developed an extended saliency model by combining bottom-up saliency with the saliency of the lung regions in a chest x-ray. This new model performs significantly better than bottom-up saliency alone in predicting the gaze of the observers in our experiment. Even though the model is a simple combination of bottom-up saliency maps and segmented lung masks, it demonstrates that even basic models using simple image features can predict observers' fixations with good accuracy.
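A combination of a bottom-up saliency map with a segmented lung mask might look like the sketch below. The convex-combination rule and the weight `alpha` are assumptions for illustration; the thesis does not specify them here:

```python
import numpy as np

def extended_saliency(bottom_up, lung_mask, alpha=0.5):
    """Blend a bottom-up saliency map with a binary lung-region mask.
    The blending rule and weight are assumed, not taken from the thesis."""
    bu = bottom_up / (bottom_up.max() + 1e-8)   # normalise to [0, 1]
    return alpha * bu + (1 - alpha) * lung_mask.astype(float)

# Toy 2x2 example: one salient pixel outside the lung, one inside.
saliency = np.array([[0.2, 0.8], [0.4, 0.0]])
mask = np.array([[1, 0], [1, 0]])
print(extended_saliency(saliency, mask))
```

Pixels inside the lung mask get a constant boost, so fixation probability mass is pulled toward anatomically relevant regions even where raw image saliency is low.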

Experimental analysis suggests that the factors affecting the reading of chest x-rays of pneumoconiosis patients are complex and varied. A good understanding of these factors helps in developing better radiological screening of pneumoconiosis, through improved training and improved CAD tools. The presented work is an attempt to gain insight into what these factors are and how they modify the behavior of observers.

 

Year of completion:  January 2013
 Advisor : Jayanthi Sivaswamy

Related Publications


Downloads

thesis

ppt

Robust Motion Estimation and Analysis based on Statistical Information.


V S Rao Veeravasarapu (homepage)


Accurate and robust estimation of optical flow continues to be of interest due to the deep penetration of digital cameras into many areas, including robot navigation and video surveillance. The canonical approach to flow estimation relies on local brightness constancy, which has limitations. In this thesis, we re-examine the optical flow problem, formulate the alternate hypothesis that optical flow is an apparent motion of local information across frames, and propose a novel framework to robustly estimate flow parameters. A pixel-level matching approach is implemented according to the proposed formulation, in which optical flow is estimated from the local information associated with each pixel. Self-information and a variety of divergence measures are investigated for capturing this local information. Benchmarking on the Middlebury dataset shows that the proposed formulation is comparable to the top-performing methods in accurate flow computation. The distinguishing aspects, however, are that these results hold for small as well as large displacements, and that the flow estimation is robust to distortions such as noise, illumination changes and non-uniform blur. Thus, the local information based approach offers a promising alternative for computing optical flow. We also developed a method to remove motion blur from frames using the information measures. The effectiveness of the proposed motion estimation approach is also demonstrated on extraction of structure from motion of synthetic micro-texture patterns, on cardiac ultrasound sequences and on colorization of black and white videos. (more...)
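The idea of matching local information rather than raw brightness can be sketched by comparing local intensity histograms under a divergence measure and searching for the best displacement per pixel. This is an illustrative matcher under assumed patch sizes and a KL divergence; the thesis's full formulation differs:

```python
import numpy as np

def local_hist(img, y, x, r=2, bins=8):
    """Smoothed intensity histogram of the (2r+1)x(2r+1) patch at (y, x)."""
    patch = img[y - r:y + r + 1, x - r:x + r + 1]
    h, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    h = h.astype(float) + 1e-6          # avoid log(0) in the divergence
    return h / h.sum()

def kl_div(p, q):
    """Kullback-Leibler divergence between two histograms."""
    return float((p * np.log(p / q)).sum())

def best_displacement(f0, f1, y, x, search=2):
    """Pick the displacement whose local histogram in frame f1 is closest
    (in KL divergence) to the pixel's local histogram in frame f0."""
    p = local_hist(f0, y, x)
    best, best_d = None, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            d = kl_div(p, local_hist(f1, y + dy, x + dx))
            if d < best_d:
                best, best_d = (dy, dx), d
    return best

# A textured frame and a copy shifted one pixel to the right.
f0 = np.arange(144).reshape(12, 12) / 144.0
f1 = np.roll(f0, 1, axis=1)
print(best_displacement(f0, f1, 5, 5))  # → (0, 1)
```

Because the match is between distributions rather than raw intensities, a global brightness offset of one frame leaves the winning displacement largely unchanged, which hints at the robustness to illumination change reported above.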

 

Year of completion:  March 2013
 Advisor : Jayanthi Sivaswamy

Related Publications


Downloads

thesis

ppt

Registration of Retinal Images.


Yogesh Babu Bathina

Every so often, a need arises in clinical scenarios to integrate information from multiple images or modalities for the purposes of diagnosis and pathology tracking. Registration, the most fundamental step in such an integration, is the task of spatially aligning a pair of images of the same scene acquired from different sources, viewpoints and times. This thesis concerns registration for the three most popular retinal imaging modalities, namely Color Fundus Imaging (CFI), Red-Free Imaging (RFI) and Fluorescein Fundus Angiography (FFA). CFI is obtained under white light, which enables experts to examine the overall condition of the retina in full color. In RFI, the illuminating light is filtered to remove red, which improves the contrast between vessels and other structures. FFA is a time sequence of images acquired under infrared light after a fluorescent dye is injected intravenously into the blood stream; it provides high-contrast vessel information revealing blood flow dynamics, leaks and blockages.

The retina is a part of the central nervous system (CNS) and is composed of many different types of tissue. Given this distinctive feature, a wide variety of diseases affecting different body systems uniquely affect the retina. These systemic diseases include diabetes, hypertension, atherosclerosis, sickle cell disease and multiple sclerosis, to name a few. Recent advances reveal a close association of retinal vascular signs with cerebrovascular, cardiovascular and metabolic outcomes. Simply put, the health of the blood vessels in the eye often indicates the condition of the blood vessels (arteries and veins) throughout the body.

Registration of multimodal retinal images aids in the diagnosis of various retinal diseases such as glaucoma, diabetic retinopathy and age-related macular degeneration. Single-modality images acquired over a period of time are used for pathology tracking. Registration is also used for constructing a mosaic image of the entire retina from several narrow-field images, which aids comprehensive retinal examination. Another key application area for registration is surgery, both in the planning stage and during surgery, for which only optical range information is available. Fusion of these modalities also helps increase the anatomical range of visual inspection, enables early detection of potentially serious pathologies, and helps assess the relationship between blood flow and diseases occurring on the surface of the retina.

The task of registering retinal images is challenging, given the wide range of pathologies captured by different modalities in different ways, geometric and photometric variation, illumination artifacts, noise and other degradations. Many successful methods have been proposed for registering retinal images, and a review of these methods shows good performance on healthy retinal images. However, most of these approaches have limited scope for handling a wide range of pathologies, and they fail to register poor quality images, especially in the multimodal case. In this work, we propose a feature-based retinal image registration algorithm capable of handling such challenging image pairs.

At the core of this algorithm is a novel landmark detector and descriptor scheme. A set of landmarks is detected on the topographic surface of the retina using a curvature dispersion measure. The descriptor is based on local projections using the Radon transform, which characterizes local structures in an abstract sense, rendering it less sensitive to pathologies and noise. Drawing on recent developments in robust estimation methods, a modified MSAC (M-estimator Sample Consensus) is proposed for pruning false correspondences. On the whole, the contributions at each stage of the feature-based registration scheme presented here are of significance. We evaluate our method against two recent schemes on three different datasets, which include both monomodal and multimodal images. The results show that our method gives better accuracy on poor quality and pathology-affected images while performing on par with existing methods on normal images.
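The projection idea behind such a descriptor can be sketched with just two angles: concatenating the normalized horizontal and vertical projections of a local patch. This is a deliberately simplified stand-in; the thesis's descriptor uses Radon projections over many angles:

```python
import numpy as np

def projection_descriptor(patch):
    """Two-angle stand-in for a Radon-projection descriptor: concatenate
    the patch's row and column sums and L2-normalise the result."""
    rows = patch.sum(axis=1).astype(float)   # projection at 0 degrees
    cols = patch.sum(axis=0).astype(float)   # projection at 90 degrees
    d = np.concatenate([rows, cols])
    n = np.linalg.norm(d)
    return d / n if n > 0 else d

patch = np.zeros((5, 5))
patch[2, :] = 1.0                 # a horizontal vessel-like stroke
d = projection_descriptor(patch)
print(d)
```

Because each element of the descriptor sums over an entire line of pixels, isolated noisy pixels or small pathological spots perturb it far less than they would a pointwise descriptor, which is the abstraction the full Radon-based scheme exploits.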

 

Year of completion:  April 2013
 Advisor : Jayanthi Sivaswamy

Related Publications


Downloads

thesis

ppt
