
Enhancing Weak Biometric Authentication by Adaptation and Improved User-Discrimination


Vandana Roy

Biometric technologies are becoming the foundation of an extensive array of person identification and verification solutions. Biometrics is the science of recognising a person based on physiological (fingerprints, face, hand-geometry) or behavioral (voice, gait, keystrokes) characteristics. Weak biometrics (hand-geometry, face, voice) are traits with low discriminating content that also change over time for each individual; they therefore perform worse than strong biometrics (e.g., fingerprints, iris, retina). With the rapidly falling cost of hardware and computation, biometrics has found immense use in civilian applications (time and attendance monitoring, physical access to buildings, human-computer interfaces) beyond the forensic ones (e.g., criminal and terrorist identification). Several factors come into the picture when selecting biometric traits for civilian applications, the most important being user psychology and acceptability. Most weak biometric traits have little or no association with criminal history, unlike fingerprints (a strong biometric), and data acquisition with weak biometrics is simple and convenient. For these reasons, weak biometric traits are often better accepted for civilian applications than strong ones. Moreover, far less research has gone into this area compared to strong biometrics.

Due to their low discriminating content, weak biometric traits result in poor verification performance. We propose a feature selection technique called Single Class Hierarchical Discriminant Analysis (SCHDA), designed specifically for authentication in biometric systems. SCHDA recursively identifies the samples that overlap with those of the claimed identity in the discriminant space built by a single-class discriminant criterion. If the samples of the claimed identity are termed "positive" and all other samples "negative", the single-class discriminant criterion finds an optimal transformation that maximizes the ratio of the negative scatter about the positive mean to the positive within-class scatter, thereby pulling the positive samples together and pushing the negative samples away from the positive mean. SCHDA thus builds an optimal user-specific discriminant space for each individual, in which the samples of the claimed identity are well-separated from those of all other users. Authentication performance using this technique is compared with other popular discriminant analysis techniques from the literature, and significant improvement is observed.
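The single-class criterion described above can be sketched as a generalized eigenvalue problem. The following is a minimal numpy illustration, not the full recursive SCHDA; the function name, toy data and regularization constant are our own illustrative choices.

```python
import numpy as np

def single_class_discriminant(pos, neg, n_dims=1, reg=1e-6):
    """Find a projection maximizing the scatter of negative samples
    about the positive mean, relative to the positive within-class
    scatter (the single-class criterion described above).
    pos, neg: (n_samples, n_features) arrays."""
    mu_pos = pos.mean(axis=0)
    # Positive within-class scatter (about the positive mean).
    Sp = (pos - mu_pos).T @ (pos - mu_pos)
    # Negative scatter measured about the *positive* mean.
    Sn = (neg - mu_pos).T @ (neg - mu_pos)
    # Regularize, then solve the generalized eigenproblem Sn w = l Sp w.
    Sp += reg * np.eye(Sp.shape[0])
    evals, evecs = np.linalg.eig(np.linalg.solve(Sp, Sn))
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:n_dims]]  # (n_features, n_dims)

# Toy data: positives are tight; negatives differ along the first axis.
rng = np.random.default_rng(0)
pos = rng.normal([0, 0], [0.1, 0.1], size=(50, 2))
neg = rng.normal([3, 0], [0.1, 0.1], size=(50, 2))
W = single_class_discriminant(pos, neg)
# The discriminating direction is dominated by the first axis.
print(abs(W[0, 0]) > abs(W[1, 0]))  # True
```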

The second problem that leads to low authentication accuracy is the poor permanence of weak biometric traits, which change for various reasons (e.g., ageing, or the person gaining or losing weight). Civilian applications usually operate in a cooperative or monitored mode, in which users can give the system feedback when errors occur. We propose an intelligent adaptive framework that uses this feedback to incrementally update the feature selection and verification parameters for each individual. This technique does not require retraining the whole system to cope with changing features.

The third factor explored to improve authentication performance in civilian applications is the participation pattern of the enrolled users. As new users are enrolled, performance degrades with the growing number of users. Traditionally, the system must be retrained periodically with the existing users to counter this. An interesting observation is that although the number of users enrolled in the system can be very high, the number of users who regularly participate in authentication is comparatively low. Modeling the variation in the participating population therefore helps to bypass retraining. We propose to model this variation using Markov models. From these models, the prior probability of participation of each individual is computed and incorporated into the traditional feature selection framework, giving more relevance to the parameters of regularly participating users. Both structured and unstructured modes of variation in participation were explored. Experiments on varied datasets verify our claim that incorporating the prior probability of participation improves the performance of a biometric system over time.
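As an illustration of how a participation prior might be derived, the sketch below computes the stationary probability of a two-state (active/inactive) Markov chain per user. This is a simplified stand-in for the models in the thesis; the transition probabilities and function name are hypothetical.

```python
def participation_prior(p_stay_active, p_become_active):
    """Stationary probability that a user participates, for a
    two-state (active/inactive) Markov chain with
    P(active -> active) = p_stay_active and
    P(inactive -> active) = p_become_active."""
    # Solve pi = pi * P for the 2-state chain in closed form.
    p_leave = 1.0 - p_stay_active
    return p_become_active / (p_become_active + p_leave)

# A regular user vs. an occasional one (illustrative numbers).
regular = participation_prior(0.9, 0.6)     # 0.6 / 0.7
occasional = participation_prior(0.3, 0.1)  # 0.1 / 0.8
print(round(regular, 3), round(occasional, 3))
```

A higher stationary probability would give that user's parameters more weight in the feature selection framework.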

To validate our claims and techniques, we used hand-geometry and keystroke-based biometric traits. The hand images were acquired with a simple low-cost setup consisting of a digital camera and a flat translucent platform with five rigid pegs (to ensure that the acquired images are well-aligned). The platform is illuminated from beneath to simplify preprocessing of the acquired images. The hand-geometry features include the lengths of four fingers and the widths at five equidistant points on each finger. Thumb measurements are not used, as they show high variability for the same user. This dataset was used to validate the proposed feature selection technique. For keystroke-based biometrics, the features were the dwell time (duration of a key-press event) and flight time (duration between a key-release and the next key-press) of each key, and the number of times the backspace and delete keys were pressed. Data was collected from subjects who were not accustomed to the particular keyboard used (a French keyboard). The features extracted from this dataset were time-varying and were used to validate the concept of incremental updating.
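The keystroke timing features described above can be computed from a raw event stream roughly as follows. This is a simplified sketch: it assumes non-overlapping key presses (real typing often violates this), and the event tuple format is our own.

```python
def keystroke_features(events):
    """Compute dwell and flight times from a time-sorted list of
    (time_ms, key, action) events, action in {'press', 'release'}.
    Simplified: assumes no overlapping key presses."""
    dwell, flight = [], []
    last_press = last_release = None
    for t, key, action in events:
        if action == 'press':
            if last_release is not None:
                flight.append(t - last_release)  # release -> next press
            last_press = t
        else:  # release
            dwell.append(t - last_press)         # press -> release
            last_release = t
    return dwell, flight

# Typing "hi": dwell 90 ms and 80 ms, flight 60 ms between keys.
events = [(0, 'h', 'press'), (90, 'h', 'release'),
          (150, 'i', 'press'), (230, 'i', 'release')]
print(keystroke_features(events))  # ([90, 80], [60])
```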

In this thesis, we identify and address some of the issues that lead to poor authentication performance with certain weak biometric traits. We also examine the problem of low authentication performance in large-scale biometric systems for civilian applications.

 

Year of completion: 2007
Advisor: C. V. Jawahar

Related Publications

  • Vandana Roy and C. V. Jawahar - Modeling Time-Varying Population for Biometric Authentication, in International Conference on Computing: Theory and Applications (ICCTA), Kolkata, 2007. [PDF]

  • Vandana Roy and C. V. Jawahar - Hand-Geometry Based Person Authentication Using Incremental Biased Discriminant Analysis, Proceedings of the National Conference on Communication (NCC 2006), Delhi, January 2006, pp. 261-265. [PDF]

  • Vandana Roy and C. V. Jawahar - Feature Selection for Hand-Geometry based Person Authentication, Proceedings of the Thirteenth International Conference on Advanced Computing and Communications (ICACCS), Coimbatore, December 2005. [PDF]



Vision based Robot Navigation using an On-line Visual Experience


D. Santosh Kumar (homepage)

Vision-based robot navigation has long been a fundamental goal in both robotics and computer vision research. While the problem is largely solved for robots equipped with active range-finding devices, it remains challenging, for a variety of reasons, for robots equipped only with vision sensors. Vision is an attractive sensor: it enables economically viable systems, facilitates passive sensing of the environment, and provides valuable semantic information about the scene that is unavailable to other sensors. Two popular paradigms have emerged for this problem, namely model-based and model-free algorithms. Model-based approaches demand that an a priori model be available in advance, whereas in model-free approaches the required 3D information is computed online. Model-free navigation paradigms have gained popularity over model-based approaches due to their simpler assumptions and wider applicability. This thesis discusses a new paradigm for vision-based navigation, namely image-based navigation. The basic insight is that model-free paradigms involve an intermediate depth computation that is redundant for the purpose of navigation; instead, the motion command required to control the robot can be inferred directly from the acquired images. This approach is attractive because the modeling of objects is replaced by the memorization of views, which is far easier than 3D modeling.


In this thesis, a new image-based navigation architecture is developed that facilitates online learning about the world by a robot. The framework enables a robot to autonomously explore and navigate a variety of unknown environments, in a way that supports path planning and goal-oriented tasks, using visual maps that are contextually built in the process. It also allows feedback received from performing specific goal-oriented tasks to be incorporated to update the visual representation. Based on this architecture, the design of the individual algorithms required for the navigation task (exploration, servoing and learning) is discussed.

 

Year of completion: June 2007
Advisor: C. V. Jawahar

Related Publications

  • D. Santosh and C.V. Jawahar - Visual Servoing in Non-Rigid Environment: A Space-Time Approach, Proc. of IEEE International Conference on Robotics and Automation (ICRA'07), Roma, Italy, 2007. [PDF]

  • D. Santosh Kumar and C.V. Jawahar - Visual Servoing in Presence of Non-Rigid Motion, Proc. 18th IEEE International Conference on Pattern Recognition (ICPR'06), Hong Kong, Aug 2006. [PDF]

  • D. Santosh Kumar and C.V. Jawahar - Robust Homography-Based Control for Camera Positioning in Piecewise Planar Environment, The 5th Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP), Madurai, India, LNCS 4338, pp. 906-918, 2006. [PDF]

  • D. Santosh and C.V. Jawahar - Cooperative CONDENSATION-based Recognition, in 8th Asian Conference on Computer Vision (ACCV) (Under Review), 2007.
  • D. Santosh and C.V. Jawahar - Visual Servoing in Non-Rigid Environments, in IEEE Transactions on Robotics (ITRO) (Under Submission), 2008.
  • D. Santosh and C.V. Jawahar - Robot Path Planning by Reinforcement Learning along with Potential Fields, in 25th IEEE International Conference on Robotics and Automation (ICRA) (Under Submission), 2008.
  • D. Santosh, A. Supreeth and C.V. Jawahar - Mobile Robot Exploration and Navigation using a Single Camera, in 25th IEEE International Conference on Robotics and Automation (ICRA) (Under Submission), 2008.


Kernel Methods and Factorization for Image and Video Analysis


Ranjeeth Kumar Dasineni (homepage)

Image and Video Analysis is one of the most active research areas in computer science, with a large number of applications in security, surveillance and broadcast video processing. Until about two decades ago, the primary focus in this domain was on efficient processing of image and video data. However, with the increase in computational power and advances in Machine Learning, the focus has shifted to a wide range of other problems. Machine learning techniques are now widely used for higher-level tasks that require extensive analysis of data, such as recognizing faces in images, facial expression analysis in videos, printed document recognition and video understanding. The field of Machine Learning itself witnessed the evolution of Kernel Methods as a principled and efficient approach to analyzing nonlinear relationships in data. These algorithms are computationally efficient and statistically stable, in stark contrast with earlier methods for nonlinear problems, such as neural networks and decision trees, which often suffered from overfitting and computational expense. In addition, kernel methods provide a natural way to treat heterogeneous data (such as categorical data, graphs and sequences) under a unified framework. These advantages have led to their immense popularity in many fields, including computer vision, data mining and bioinformatics. In computer vision, kernel methods such as support vector machines, kernel principal component analysis and kernel discriminant analysis have brought remarkable improvements at tasks such as classification, recognition and feature extraction. Like Kernel Methods, Factorization techniques enable elegant solutions to many problems in computer vision, such as eliminating redundancy in data representations and analyzing their generative processes. Structure from Motion and Eigenfaces for feature extraction are successful applications of factorization in vision. So far, however, factorization has been applied to the traditional matrix representation of image collections and videos. This representation fails to fully exploit the structure in 2D images, since each image is represented by a single 1D vector. Tensors are a more natural representation for such data and have recently gained wide attention in computer vision; factorization becomes an even more useful tool with such representations.
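The flattened-matrix representation discussed above can be illustrated with an Eigenfaces-style factorization: each image becomes a 1D row vector, and a low-rank SVD recovers a shared basis. A minimal numpy sketch on synthetic data (the sizes and noise level are illustrative choices):

```python
import numpy as np

# A stack of twenty 8x8 "images" with rank-2 structure plus noise,
# flattened so each image is one row of a 20x64 matrix.
rng = np.random.default_rng(1)
basis = rng.normal(size=(2, 64))           # two shared "eigen-images"
coeffs = rng.normal(size=(20, 2))          # per-image mixing weights
images = coeffs @ basis + 0.01 * rng.normal(size=(20, 64))

# Matrix factorization: a rank-2 SVD recovers the shared basis and
# reconstructs the collection with small error.
U, s, Vt = np.linalg.svd(images, full_matrices=False)
rank2 = (U[:, :2] * s[:2]) @ Vt[:2]
err = np.linalg.norm(images - rank2) / np.linalg.norm(images)
print(err < 0.05)  # True: two components explain almost everything
```

Note that the flattening step discards each image's 2D neighborhood structure, which is precisely the limitation that motivates the tensor representations mentioned above.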


While Kernel Methods and Factorization both aid in analyzing data and detecting its inherent regularities, they do so in orthogonal ways. The central idea in kernel methods is to work with new sets of features derived from the input features. Factorization, on the other hand, operates by eliminating redundant or irrelevant information. Thus they form a complementary set of tools for analyzing data. This thesis addresses the problem of effectively manipulating the dimensionality of representations of visual data, using these tools, to solve problems in image analysis. The purpose of this thesis is threefold: (i) demonstrating useful applications of kernel methods to problems in image analysis: new kernel algorithms are developed for feature selection and time-series modeling, and applied to biometric authentication using weak features, planar shape recognition and handwritten character recognition; (ii) using tensor representations and tensor factorization to solve challenging problems in facial video analysis, yielding simple and efficient methods for expression transfer, expression recognition and face morphing; (iii) investigating and demonstrating the complementary nature of kernelization and factorization and how they can be used together for data analysis.
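One way to see the two tools working together is kernel PCA: kernelization derives nonlinear features via a kernel, and an eigendecomposition (a factorization) of the centered Gram matrix removes the redundancy among them. A minimal sketch, where the RBF kernel, the toy data and the function name are illustrative choices rather than the thesis's algorithms:

```python
import numpy as np

def kernel_pca(X, n_components, gamma=1.0):
    """Kernelization + factorization: build an RBF Gram matrix
    (new nonlinear features), center it in feature space, and
    eigendecompose it to extract compact coordinates."""
    sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                      # RBF Gram matrix
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                               # center in feature space
    evals, evecs = np.linalg.eigh(Kc)
    idx = np.argsort(evals)[::-1][:n_components]
    return evecs[:, idx] * np.sqrt(np.maximum(evals[idx], 0))

# Two concentric rings: not linearly separable in the input space,
# but the kernel-induced coordinates expose the ring structure.
rng = np.random.default_rng(2)
t = rng.uniform(0, 2 * np.pi, 100)
r = np.repeat([1.0, 3.0], 50)
X = np.c_[r * np.cos(t), r * np.sin(t)]
Z = kernel_pca(X, 2, gamma=0.5)
print(Z.shape)  # (100, 2)
```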

 

Year of completion: December 2007
Advisor: C. V. Jawahar


Related Publications

  • Ranjeeth Kumar and C.V. Jawahar - Kernel Approach to Autoregressive Modeling, Proc. of the Thirteenth National Conference on Communications (NCC 2007), Kanpur, 2007. [PDF]

  • Ranjeeth Kumar and C.V. Jawahar - Class-Specific Kernel Selection for Verification Problems, Proc. of the Sixth International Conference on Advances in Pattern Recognition (ICAPR 2007), Kolkata, 2007. [PDF]

  • S. Manikandan, Ranjeeth Kumar and C.V. Jawahar - Tensorial Factorization Methods for Manipulation of Face Videos, The 3rd International Conference on Visual Information Engineering (VIE), Bangalore, India, 26-28 September 2006. [PDF]

  • Ranjeeth Kumar, S. Manikandan and C. V. Jawahar - Task Specific Factors for Video Characterization, 5th Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP), Madurai, India, LNCS 4338, pp. 376-387, 2006. [PDF]

  • Ranjeeth Kumar, S. Manikandan and C. V. Jawahar - Face Video Alteration Using Tensorial Methods, in Pattern Recognition, Journal of Pattern Recognition Society (Submitted)


Imaging and Depth Estimation in an Optimization Framework


Avinash Kumar (homepage)

Computer Vision is the process of obtaining information about a scene by processing images of the scene. This inverse process can be formulated mathematically in terms of unknown variables which, given the images, must satisfy certain constraints. For example, the variables could be the intensity values at each pixel location in an image, with the constraint that these values must lie between 0 and 255. An objective function can be formed from the variables and constraints, with the property that the set of variables minimizing it, i.e. its global minimum, is the desired solution. Often in Computer Vision the number of possible solutions is large, so the objective function has many local minima, and exhaustive search for the global minimum is computationally prohibitive; the task becomes a combinatorial optimization problem. Recently, a number of fast and efficient techniques based on minimum cut on graphs have been proposed, which yield an approximately global minimum in polynomial time. On benchmark problems, the solutions obtained with these techniques have outperformed earlier optimization methods, leading to renewed interest in optimization within computer vision. For a vision problem, however, obtaining a global minimum with efficient optimization methods is not enough if it does not correspond to the desired solution; formulating an accurate objective function is equally important. In this thesis, we first propose new objective functions for problems in imaging and depth estimation from images, and then formulate them as optimization problems. The two main imaging problems addressed are omnifocus imaging and background subtraction.
Omnifocus imaging is important because it generates an image with a large depth of field, in which everything imaged is in focus. This is critical for many high-level vision problems, such as object recognition, that require sharp, high-quality images. The input is a set of images focused at different depths. These images, called multifocus images, are captured with a Non-Frontal Imaging Camera (NICAM). A new technique for calibrating this camera is also proposed, which helps in registering the input images from the NICAM. In omnifocus imaging, a focus measure finds the best-focused image in the set of multifocus input images; we propose a new generative focus measure in this thesis. Background subtraction is another imaging technique, in which unwanted regions of the image are removed. We propose a new objective function for background removal for the machine vision problem of monitoring intermodal freight trains, optimized using graph-cut-based optimization. We have also developed other techniques, based on edge detection and Gaussian mixture modeling, for background removal in such videos. For depth estimation, the thesis proposes range estimation from images generated by a NICAM in an optimization framework. The constraint used is that nearby points in the three-dimensional world have the same depth, allowing accurate and smooth depth estimation in real-world scenes. We show results on real datasets.
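To illustrate what a focus measure does, the sketch below uses a standard contrast-based (Laplacian-energy) measure to pick the sharpest image from a two-image multifocus stack. This is a generic stand-in for illustration only, not the generative focus measure proposed in the thesis:

```python
import numpy as np

def focus_measure(img):
    """Sum of squared discrete-Laplacian responses over the interior;
    sharper (higher-contrast) images score higher."""
    lap = (4 * img[1:-1, 1:-1] - img[:-2, 1:-1] - img[2:, 1:-1]
           - img[1:-1, :-2] - img[1:-1, 2:])
    return (lap ** 2).sum()

# Multifocus stack: a sharp checkerboard vs. a blurred copy of it.
sharp = np.indices((32, 32)).sum(0) % 2 * 1.0
blurred = sharp.copy()
for _ in range(3):  # crude 2x2 box blur with wrap-around
    blurred = (blurred + np.roll(blurred, 1, 0) + np.roll(blurred, 1, 1)
               + np.roll(blurred, 1, (0, 1))) / 4
stack = [blurred, sharp]
best = max(range(len(stack)), key=lambda i: focus_measure(stack[i]))
print(best)  # 1: the in-focus image wins
```

In an omnifocus pipeline, a per-pixel version of such a measure selects, for each location, which of the registered multifocus images is in focus there.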

 

Year of completion: 2007
Advisor: C. V. Jawahar

Related Publications

  • Avinash Kumar, Narendra Ahuja, John M. Hart, U.K. Visesh, P.J. Narayanan and C.V. Jawahar - A Vision System for Monitoring Intermodal Freight Trains, Proc. of IEEE Workshop on Applications of Computer Vision (WACV 2007), Austin, Texas, USA, 2007. [PDF]

  • Y.C. Lai, C.P.I. Barkan, J. Drapa, N. Ahuja, J.M. Hart, P.J. Narayanan, C.V. Jawahar, A. Kumar, L.R. Milhon and M. Stehly - Machine vision analysis of the energy efficiency of intermodal freight trains, Proceedings of the Institution of Mechanical Engineers, Part F: Journal of Rail and Rapid Transit, ISSN 0954-4097, Volume 221, Number 3, pp. 353-364, 2007. [PDF]



Automatic Writer Identification and Verification using Online Handwriting


 Sachin Gupta (homepage)

Automatic person identification is one of the major concerns in this era of automation. The problem is not new: society has long used several means of authenticating a person's identity, such as signatures and possession of documents. With the advent of electronic communication media such as the Internet, interactions are becoming increasingly automated, and the problem of identity theft has become even more severe. The traditional modes of person authentication, possessions and knowledge, cannot solve this problem. Possessions are physical items such as keys, passports and smart cards; knowledge is a piece of information that is memorized and supposed to be kept secret, such as a password. Knowledge- and possession-based methods focus on "what you know" or "what you possess" rather than "who you are". Because of their inability to address these security concerns, biometrics research has gained significant momentum in the last decade. Biometrics refers to authenticating a person using physiological or behavioral traits that distinguish one individual from others. Biometric authentication has several advantages over knowledge- and possession-based identification, including ease of use and non-repudiation. In this thesis, we address the problem of handwriting biometrics. Handwriting is a behavioral biometric, as it is generated as the consequence of an action performed by a person. Handwriting identification also has a long history: the signature, a specific instance of handwriting, has long been used to authenticate legal documents.

This thesis addresses various problems in automatic handwriting identification. Most writer identification work is still done manually, as much context-dependent information, such as the source of documents and the nature of the handwriting, is difficult to model mathematically yet easily analyzed by human experts. Still, an automatic handwriting analysis system is useful: it can remove subjectivity from the process of handwriting identification and can provide expert advice in court cases. The final aim of this research is to design efficient algorithms for automatic feature extraction and writer recognition from a given handwritten document with as little human intervention as possible.

Specifically, we propose efficient solutions to three applications of handwriting identification. First, we look at determining the authorship of an arbitrary piece of online handwritten text. We then analyze the discriminative information in online handwriting to propose an efficient and accurate approach to text-dependent writer verification for practical, low-security applications. We also examine the problem of repudiation in handwritten documents for forensic document examination; after introducing the problem, we propose an algorithm for detecting repudiation in handwritten documents. Handwriting identification is quite different from handwriting recognition, the other popular sub-field of automatic handwriting analysis: recognition tries to identify the content of handwritten text while minimizing variations due to writing style, whereas identification seeks out precisely those style variations.

 

Year of completion: February 2008
Advisor: Anoop M. Namboodiri

Related Publications

  • Sachin Gupta and Anoop M. Namboodiri - Text-Dependent Writer Verification Using Boosting, Proceedings of the International Conference on Frontiers in Handwriting Recognition, Montreal, Canada, 2008. [PDF]

  • Sachin Gupta and Anoop M. Namboodiri - Repudiation Detection in Handwritten Documents, Proc. of The 2nd International Conference on Biometrics (ICB'07), pp. 356-365, Seoul, Korea, 27-29 August 2007. [PDF]

  • Anoop M. Namboodiri and Sachin Gupta - Text Independent Writer Identification from Online Handwriting, International Workshop on Frontiers in Handwriting Recognition (IWFHR'06), La Baule (Centre de Congrès Atlantia), France, October 23-26, 2006. [PDF]

