Learning Appearance Models


Our research focuses on learning appearance models from images and videos that can be used for a variety of tasks such as recognition, detection and classification. Prior information such as geometry and kinematics is used to improve the quality of the learnt appearance models, enabling better performance at these tasks.


Dynamic Activity Recognition

Many human activities, such as jumping and squatting, have a correlated spatiotemporal structure: they are composed of homogeneous units. These units, which we refer to as actions, are often common to more than one activity. It is therefore essential to have a representation that can capture these activities effectively. To develop this, we model the frames of activities as a mixture model of actions and employ a probabilistic approach to learn their low-dimensional representation. We present recognition results on seven activities performed by various individuals. The results demonstrate the versatility of the model and its ability to capture the ensemble of human activities.
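As a toy illustration of the idea above (not the actual model or data used in this work), the sketch below reduces synthetic "frames" from two actions to a low-dimensional space with PCA and fits a two-component Gaussian mixture by EM; the dimensions, component count and data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "frames": two actions with distinct appearance, embedded in 50-D.
action_a = rng.normal(0.0, 0.3, size=(100, 50)) + 1.0
action_b = rng.normal(0.0, 0.3, size=(100, 50)) - 1.0
frames = np.vstack([action_a, action_b])

# Low-dimensional representation via PCA (top-2 components).
centered = frames - frames.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
low_dim = centered @ vt[:2].T                     # (200, 2)

# EM for a 2-component spherical Gaussian mixture over the embedding.
means = low_dim[[0, -1]].copy()
var = np.ones(2)
weights = np.array([0.5, 0.5])
for _ in range(25):
    # E-step: responsibilities of each component for each frame.
    d2 = ((low_dim[:, None, :] - means[None]) ** 2).sum(-1)      # (200, 2)
    log_p = -0.5 * d2 / var - (low_dim.shape[1] / 2) * np.log(var) + np.log(weights)
    log_p -= log_p.max(axis=1, keepdims=True)
    resp = np.exp(log_p)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update mixture parameters.
    nk = resp.sum(axis=0)
    means = (resp.T @ low_dim) / nk[:, None]
    var = np.array([(resp[:, k] * ((low_dim - means[k]) ** 2).sum(-1)).sum()
                    / (nk[k] * low_dim.shape[1]) for k in range(2)])
    weights = nk / nk.sum()

labels = resp.argmax(axis=1)   # most-likely action per frame
```

Each frame gets a soft responsibility over actions; hard labels here are only for inspection.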


Boosting Appearance Models using Geometry

We developed a novel method to construct an eigenspace representation from a limited number of views that is equivalent to the one typically obtained from a large number of images. This procedure implicitly incorporates a novel view synthesis algorithm in the eigenspace construction process. The inherent information in an appearance representation is enhanced using geometric computations. We experimentally verify the performance for orthographic, affine and projective camera models. Recognition results on the COIL and SOIL image databases are promising.
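For reference, standard eigenspace construction from a handful of views looks like the following sketch; random arrays stand in for real images, and the geometric view-synthesis step that is the novelty of this work is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for object views: 12 images of 16x16 pixels, flattened.
views = rng.random((12, 256))

# Standard eigenspace construction: mean-centre, then SVD.
mean_view = views.mean(axis=0)
u, s, vt = np.linalg.svd(views - mean_view, full_matrices=False)
basis = vt[:5]                          # top-5 eigen images, shape (5, 256)

# Project a view into the eigenspace and reconstruct it.
coeffs = basis @ (views[0] - mean_view)
recon = mean_view + basis.T @ coeffs
```

The eigen images form an orthonormal basis; recognition then compares coefficient vectors rather than raw pixels.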

Face Video Manipulation using Tensorial Factorization


We use tensor factorization for manipulating videos of human faces. Decomposing a video, represented as a tensor, into non-negative rank-1 factors results in sparse and separable factors equivalent to a local parts decomposition of the object in the video. Such a decomposition can be used for tasks like expression transfer and face morphing. For instance, a facial expression video can be represented as a tensor, which is then factorized. The factors that best represent the expression can be identified and then transferred to another face video, thus transferring the expression. A good solution to the problem of expression transfer would require explicit modeling of the expression and its interaction with the underlying face content. Instead, the method proposed here is purely appearance based, and the results demonstrate that it is a simple alternative to more complex model-based solutions.
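The decomposition into non-negative rank-1 factors can be sketched with multiplicative-update non-negative CP (PARAFAC) on a toy tensor; the tensor sizes, rank and iteration count below are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground-truth non-negative rank-2 factors and the tensor they generate
# (a stand-in for a small height x width x frames video tensor).
A0, B0, C0 = rng.random((8, 2)), rng.random((10, 2)), rng.random((6, 2))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(U, V):
    # Column-wise Kronecker product: row (u, v) holds U[u] * V[v].
    return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

R, eps = 2, 1e-9
A = rng.random((8, R)) + 0.1
B = rng.random((10, R)) + 0.1
C = rng.random((6, R)) + 0.1

# Multiplicative updates keep every factor entry non-negative.
for _ in range(1000):
    A *= (unfold(X, 0) @ khatri_rao(B, C)) / (A @ ((B.T @ B) * (C.T @ C)) + eps)
    B *= (unfold(X, 1) @ khatri_rao(A, C)) / (B @ ((A.T @ A) * (C.T @ C)) + eps)
    C *= (unfold(X, 2) @ khatri_rao(A, B)) / (C @ ((A.T @ A) * (B.T @ B)) + eps)

X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(X_hat - X) / np.linalg.norm(X)
```

Each rank-1 factor triple corresponds to one separable "part"; picking the factors that carry an expression and recombining them with another video's factors is the essence of the transfer step.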

Related Publications

  • S. Manikandan, Ranjeeth Kumar and C.V. Jawahar - Tensorial Factorization Methods for Manipulation of Face Videos, The 3rd International Conference on Visual Information Engineering 26-28 September 2006 in Bangalore, India. [PDF]

  • Ranjeeth Kumar, S. Manikandan and C. V. Jawahar - Task Specific Factors for Video Characterization, 5th Indian Conference on Computer Vision, Graphics and Image Processing, Madurai, India, LNCS 4338 pp.376-387, 2006. [PDF]

  • Paresh K. Jain, Kartik Rao P. and C. V. Jawahar - Computing Eigen Space from Limited Number of Views for Recognition, 5th Indian Conference on Computer Vision, Graphics and Image Processing, Madurai, India, LNCS 4338 pp.662-673, 2006. [PDF]

  • S. S. Ravi Kiran, Karteek Alahari and C. V. Jawahar, Recognizing Human Activities from Constituent Actions, Proceedings of the National Conference on Communications (NCC), Jan. 2005, Kharagpur, India, pp. 351-355. [PDF]



Associated People

Biological Vision


The perceptual mechanisms used by different organisms to negotiate the visual world are fascinatingly diverse. Even if we consider only the sensory organs of vertebrates, such as the eye, there is much variety. Several disciplines have approached the problem of investigating how sensory, motor and central visual systems function and are organised. The area of biological vision aims to build a computational understanding of various brain mechanisms. Synergy between biological and computer vision research can be found in low-level vision. Substantial insights about the processes for extracting colour, edge, motion and spatial frequency information from images have come from combining computational and neuro-physiological constraints. Understanding human perception and vision is considered an early step towards identifying objects and understanding scenes.

Work Undertaken


:: Towards Understanding Texture Processing :: 

A fundamental goal of texture research is to develop automated computational methods for retrieving visual information and understanding image content based on textural properties in images. A synergy between biological and computer vision research in low-level vision can give substantial insights about the processes for extracting color, edge, motion, and spatial frequency information from images. In this thesis, we seek to understand the texture processing that takes place in low-level human vision in order to develop new and effective methods for texture analysis in computer vision. The different representations formed by the early stages of the HVS, and the visual computations they carry out to handle various texture patterns, are of interest. Such information is needed to identify the mechanisms that can be used in texture analysis tasks. (more detail...)
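The orientation- and frequency-tuned cells of early visual cortex are commonly modeled with Gabor filter banks. The sketch below, an illustrative assumption rather than the method of the thesis, shows how a single Gabor filter's response energy separates two synthetic textures:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """2-D Gabor filter: a Gaussian envelope times an oriented cosine grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * xr)

def filter_energy(img, kern):
    """Mean squared filter response over all valid windows (texture energy)."""
    win = sliding_window_view(img, kern.shape)
    resp = (win * kern).sum(axis=(-1, -2))
    return float((resp ** 2).mean())

# Two synthetic textures: stripes varying along x (vertical bars) vs along y.
x = np.arange(64)
vert = np.tile(np.sin(2 * np.pi * x / 8.0), (64, 1))
horz = vert.T

k = gabor_kernel(freq=1 / 8.0, theta=0.0)   # tuned to stripes varying along x
e_match = filter_energy(vert, k)
e_mismatch = filter_energy(horz, k)
```

A bank of such filters over several orientations and frequencies yields the multi-channel texture representation that texture-analysis methods typically build on.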

:: Biologically Inspired Interest Point Operator ::
Interest point operators (IPOs) are used extensively for reducing computational time and improving the accuracy of several complex vision tasks such as object recognition and scene analysis. SURF, SIFT, Harris corner points, etc., are popular examples. Though a large number of IPOs exist in the vision literature, most of them rely on low-level features such as color and edge orientation, making them sensitive to degradation in the images.

The human visual system (HVS) performs these tasks with seemingly little effort and is robust to such degradation, employing spatial attention mechanisms to reduce the computational burden. Extensive studies of these spatial attention mechanisms have led to several computational models (e.g. Itti and Koch). However, very few models have found successful application in computer vision tasks, partly owing to their prohibitive computational cost.

Computational attention systems have used either top-down or bottom-up information. Using both types of information is an attractive choice, for top-down knowledge is quite helpful, particularly when images are degraded [Antonio Torralba]. Our work focuses on developing a robust biologically-inspired IPO capable of utilizing top-down knowledge. The operator will be tested as a feature detector/descriptor for monocular visual SLAM.

Antonio Torralba, Contextual Priming for Object Detection, IJCV, Vol. 53, No. 2, 2003, pp. 169--191.
Laurent Itti and Christof Koch, A saliency-based search mechanism for overt and covert shifts of visual attention, Vision Research, Vol. 40, 2000, pp. 1489--1506.
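A minimal bottom-up saliency sketch in the spirit of the Itti-Koch center-surround idea is shown below; it uses an intensity channel only and box filters in place of Gaussian pyramids, and all parameters are illustrative assumptions.

```python
import numpy as np

def box_blur(img, k):
    """Mean filter over a (2k+1)x(2k+1) window, via an integral image."""
    pad = np.pad(img, k, mode='edge')
    cs = np.pad(pad.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
    w = 2 * k + 1
    return (cs[w:, w:] - cs[:-w, w:] - cs[w:, :-w] + cs[:-w, :-w]) / (w * w)

rng = np.random.default_rng(3)
img = rng.random((64, 64)) * 0.1
img[30:34, 40:44] += 1.0                 # one conspicuous patch

# Center-surround difference: fine scale minus coarse scale.
saliency = np.abs(box_blur(img, 1) - box_blur(img, 7))
y, x = np.unravel_index(saliency.argmax(), saliency.shape)   # top interest point
```

Local maxima of the saliency map would serve as interest points; a top-down term would re-weight the map before the maxima are taken.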

Ongoing Projects

  • Medical Image Reconstruction on Hexagonal Grid
  • Computational Understanding of Medical Image Interpretation by Experts

Related Publications

  • N.V. Kartheek Medathati, Jayanthi Sivaswamy - Local Descriptor based on Texture of Projections, Proceedings of the Seventh Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP'10), 12-15 Dec. 2010, Chennai, India. [PDF]

  • Gopal Datt Joshi, Saurabh Garg and Jayanthi Sivaswamy - Script Identification from Indian Documents, Proceedings of IAPR Workshop on Document Analysis Systems (DAS 2006), Nelson, pp. 255-267. [PDF]

  • Gopal Datt Joshi, Saurabh Garg and Jayanthi Sivaswamy - A Generalised Framework for Script Identification, International Journal on Document Analysis and Recognition (IJDAR), 10(2), pp. 55-68, 2007. [PDF]

  • Gopal Datt Joshi and Jayanthi Sivaswamy - A Computational Model for Boundary Detection, 5th Indian Conference on Computer Vision, Graphics and Image Processing, Madurai, India, LNCS 4338 pp.172-183, 2006. [PDF]

  • Gopal Datt Joshi, and Jayanthi Sivaswamy - A Simple Scheme for Contour Detection, Proceedings of International Conference on Computer Vision and Applications (VISAP 2006), Setubal. [PDF]

  • L.Middleton and J. Sivaswamy, Hexagonal Image Processing, Springer Verlag, London, 2005, ISBN: 1-85233-914-4. [PDF]

  • Gopal Datt Joshi , and Jayanthi Sivaswamy - A Multiscale Approach to Contour Detection, Proceedings of International Conference on Cognition and Recognition ,pp. 183-193, Mysore, 2005. [PDF]

  • L. Middleton and J. Sivaswamy - A Framework for Practical Hexagonal-Image Processing, Journal of Electronic Imaging, Vol. 11, No. 1, January 2002, pp. 104--114. [PDF]


Associated People

The Garuda: A Scalable, Geometry Managed Display Wall


Cluster-based tiled display walls simultaneously provide high resolution and a large display area (focus + context) and are suitable for many applications. They are also cost-effective and scalable, with low incremental costs. Garuda is a client-server display wall solution designed to use off-the-shelf graphics hardware and a standard Ethernet network. Garuda uses an object-based scene structure represented using a scene graph. The server determines the objects visible to each display tile using a novel adaptive algorithm that culls an object hierarchy to a frustum hierarchy. The required parts of the scene graph are transmitted to the clients, which cache them to exploit inter-frame redundancy in the scene. A multicast-based protocol is used to transmit the geometry, exploiting the spatial redundancy present especially on large tiled displays. A geometry-push philosophy from the server helps keep the clients in sync with one another. No node, including the server, needs to render the entire environment, making the system suitable for interactive rendering of massive models. Garuda is built on the OpenSceneGraph (OSG) system and can transparently render any OSG-based application to a tiled display wall without any modification by the user.
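The idea of culling an object hierarchy against per-tile frusta can be illustrated with a 2-D sketch; axis-aligned rectangles stand in for frusta and bounding volumes, and the names and tree layout are assumptions for illustration, not Garuda's actual data structures.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    box: tuple                                     # (xmin, ymin, xmax, ymax)
    objects: list = field(default_factory=list)    # payload at this node
    children: list = field(default_factory=list)

def overlaps(a, b):
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def contains(outer, inner):
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def collect(node, out):
    out.extend(node.objects)
    for c in node.children:
        collect(c, out)

def cull(node, frustum, out):
    """Gather objects whose hierarchy nodes touch the frustum, stopping
    early when a node is fully outside (prune) or fully inside (take all)."""
    if not overlaps(node.box, frustum):
        return                       # whole subtree culled, no further tests
    if contains(frustum, node.box):
        collect(node, out)           # whole subtree visible, no further tests
        return
    out.extend(node.objects)
    for c in node.children:
        cull(c, frustum, out)

# A tiny scene hierarchy and two display tiles splitting the screen.
root = Node((0, 0, 100, 100), children=[
    Node((0, 0, 50, 100), objects=['teapot']),
    Node((50, 0, 100, 100), objects=['bunny']),
])
tiles = [(0, 0, 50, 100), (50, 0, 100, 100)]
visible = []
for t in tiles:
    out = []
    cull(root, t, out)
    visible.append(out)
```

Each tile receives only the objects intersecting its own frustum, which is why no single node has to handle the full scene.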


The Garuda System provides ::

  • Cluster-based large display solution: Low in cost, easier to maintain and scale than monolithic solutions.
  • Focus and context: Renders large scenes with high detail; the user can see the entirety of the scene along with fine details.
  • Driven from graphics, not images: Supports interactive applications, with an increase not only in size but also in resolution.
  • Parallel rendering: The distributed rendering capabilities of a cluster are used for parallel rendering; no individual system has to render the entire scene.
  • Transparent rendering for OSG: Any OpenSceneGraph application can be rendered to the system without modification.
  • Capability to handle dynamic scenes: The system can render dynamic OSG environments at interactive frame rates.
  • Low network load: The system has very low network requirements owing to a server-push philosophy and caching at the clients.

Features of The Garuda System ::

  • Scalable to large tile configurations, up to 7x7 tiles on a single server. A hierarchy of servers can be used to support even larger tile sizes.
  • Caching at the clients and use of multicast keep the network requirements low; the system can handle huge tile configurations on a single 100 Mbps network.
  • Using a novel culling algorithm, the system scales sub-linearly to arbitrarily large tile configurations. Please see: Adaptive Culling Algorithm for details.
  • No recompilation or relinking of OSG code is necessary for rendering to a display wall.
  • Using distributed rendering, the system can render massive models that could not be rendered at interactive frame rates on a single machine.



Related Publications


  • Nirnimesh, Pawan Harish and P. J. Narayanan - Garuda: A Scalable, Tiled Display Wall Using Commodity PCs, IEEE Transactions on Visualization and Computer Graphics (TVCG), Vol. 13, No. 5, pp. 864-877, 2007. [PDF]

  • Nirnimesh, Pawan Harish and P. J. Narayanan - Culling an Object Hierarchy to a Frustum Hierarchy, 5th Indian Conference on Computer Vision, Graphics and Image Processing, Madurai, India, LNCS 4338, pp. 252-263, 2006. [PDF]

Associated People

Depth-Image Representations


Depth images are viable representations that can be computed from the real world using cameras and/or other scanning devices. The depth map provides a 2.5-D structure of the scene. It gives a visibility-limited model of the scene and can be rendered easily using graphics techniques. A set of depth images can provide hole-free rendering of the scene; multiple views need to be blended to provide smooth, hole-free rendering. Such a representation of the scene is bulky and needs good algorithms for real-time rendering and efficient representation. A GPU-based algorithm can render large models represented using depth images (DIs) in real time.
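The visibility-limited nature of a single depth image, and the holes it creates, can be sketched on the CPU by back-projecting a depth map and re-projecting it from a shifted camera; this is a minimal stand-in for the GPU splatting pass, and the camera parameters and scene below are illustrative assumptions.

```python
import numpy as np

# Pinhole camera over a 32x32 depth map.
H, W, f = 32, 32, 32.0
cx, cy = W / 2.0, H / 2.0

depth = np.full((H, W), 5.0)
depth[12:20, 12:20] = 3.0            # a nearer square in the middle

# Back-project every pixel to a 3-D point.
u, v = np.meshgrid(np.arange(W), np.arange(H))
X = (u - cx) * depth / f
Y = (v - cy) * depth / f
Z = depth

# Novel view: camera translated along +x by 0.5 units.
Xn, Yn, Zn = X - 0.5, Y, Z
un = np.round(Xn * f / Zn + cx).astype(int)
vn = np.round(Yn * f / Zn + cy).astype(int)

# Z-buffered splat of depth values into the novel view (far first).
novel = np.full((H, W), np.inf)
order = np.argsort(-Zn.ravel())
for uu, vv, zz in zip(un.ravel()[order], vn.ravel()[order], Zn.ravel()[order]):
    if 0 <= uu < W and 0 <= vv < H:
        novel[vv, uu] = zz

holes = int(np.isinf(novel).sum())   # disoccluded pixels with no data
```

The `inf` pixels are exactly the holes that a second depth image from another viewpoint would fill.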

The image representation of the depth map may not lend itself well to standard image compression techniques, which are psychovisually motivated. The scene representation using multiple depth images contains redundant descriptions of common parts and can be compressed together. Compressing these depth maps using standard techniques such as LZW and JPEG, and comparing the quality of rendered novel views while varying the JPEG quality factor, gives a good trade-off analysis between quality and compression ratio. Multiview compression of texture images can be performed by exploiting the constraints between views, such as disparity, the epipolar constraint, multilinear tensors, etc.

GPU Rendering & DI Compression


We aim at rendering big, complex scenes efficiently from their depth maps. Some of the features of the system are ::

  • The novel viewpoint is not restricted and can be anywhere in the scene, unlike view morphing.
  • The visibility-limited aspect of the representation provides several locality properties. A new view will be affected only by depths and textures in its vicinity.
  • Multiple depth images are used to fill the hole regions created by the lack of complete information in a single depth map.
  • Only valid views according to the thresholding angle are processed for rendering, thereby reducing computation time.
  • The GPU algorithm gives a several-times-higher frame rate than the CPU algorithm.
  • Frame buffer objects and vertex buffer objects improve the performance and memory management of the rendering.
  • Resolution can be changed by subsampling the grid, thus reducing the number of primitives to be drawn.

The scene representation using multiple depth images contains redundant descriptions of common parts. Our compression methods aim at exploiting this redundancy for a compact representation. The compression algorithms tried are ::

  • LZW Compression :: The lossless technique is applied to depth maps using gzip.
  • JPEG Compression :: Depth maps are compressed with various quality factors.
  • Quad Tree Based Compression :: If a block of an image/depth map holds one particular value, it is stored as a single node in the tree.
  • MPEG Compression :: All the frames are used to generate a movie sequence to get the encoded image.
  • Geometry Proxy Model :: An approximate description of the scene used to model the common, position-independent scene structure.
  • Progressive Compression :: Differences are added bit by bit progressively; this allows for smoother levels of detail.
  • Quality Levels :: Levels of detail (LODs) are varied to control the rendering time through the number of primitives or the size of the model and texture.
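The quad-tree scheme in the list above can be sketched for a lossless depth-map case; the block values and sizes are illustrative, and a real codec would also quantize and entropy-code the leaves.

```python
import numpy as np

def quad_encode(block):
    """Recursively split a square depth block; a uniform block is stored
    as a single leaf value, otherwise four sub-blocks are encoded."""
    if (block == block[0, 0]).all():
        return float(block[0, 0])            # leaf: one value for the block
    h = block.shape[0] // 2
    return [quad_encode(block[:h, :h]), quad_encode(block[:h, h:]),
            quad_encode(block[h:, :h]), quad_encode(block[h:, h:])]

def quad_decode(node, size):
    if not isinstance(node, list):
        return np.full((size, size), node)
    h = size // 2
    top = np.hstack([quad_decode(node[0], h), quad_decode(node[1], h)])
    bot = np.hstack([quad_decode(node[2], h), quad_decode(node[3], h)])
    return np.vstack([top, bot])

def count_leaves(node):
    return sum(count_leaves(c) for c in node) if isinstance(node, list) else 1

depth = np.full((16, 16), 9.0)
depth[4:8, 4:8] = 2.0                        # one nearer region

tree = quad_encode(depth)
restored = quad_decode(tree, 16)
n_leaves = count_leaves(tree)                # leaves stored vs 256 raw pixels
```

Depth maps compress well this way because large regions share a single depth value, unlike photographic texture.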

Related Publications

  • Pooja Verlani, Aditi Goswami, P.J. Narayanan, Shekhar Dwivedi and Sashi Kumar Penta - Depth Images: Representations and Real-Time Rendering, Third International Symposium on 3D Data Processing, Visualization and Transmission, Chapel Hill, North Carolina, June 14-16, 2006. [PDF]

  • Sashi Kumar Penta and P. J. Narayanan, - Compression of Multiple Depth-Maps for IBR, The Visual Computer, International Journal of Computer Graphics, Vol. 21, No.8-10, September 2005, pp. 611--618. [PDF]

  • P. J. Narayanan, Sashi Kumar P and Sireesh Reddy K, Depth+Texture Representation for Image Based Rendering, Proceedings of the Indian Conference on Vision, Graphics and Image Processing(ICVGIP), Dec. 2004, Calcutta, India, pp. 113--118. [PDF]

Associated People

  • Pooja Verlani
  • Aditi Goswami
  • Naveen Kumar
  • Saurabh Aggrawal
  • Shekhar Dwivedi
  • Sireesh Reddy K
  • Sashi Kumar Penta
  • Prof. P. J. Narayanan

Handwriting Analysis


The work in handwriting analysis at CVIT concentrates on recognition, synthesis, annotation, search, and classification of handwritten data. We primarily concentrate on online handwriting, where the temporal information of the writing process is available in the handwritten data, although many of the approaches we use are extensible to offline handwriting as well. Specifically, recognition of online handwriting in Indian languages has special significance, as it can form an effective mechanism of data input, as opposed to keyboards that need multiple keystrokes and control sequences to input many characters.

Handwriting Synthesis

Handwriting synthesis is the problem of generating data close to how a human would write the text. The characteristics of the generated data could be those of a specific writer or those of a generic model. Synthesis of handwriting poses a challenge, as writer-specific features need to be captured and preserved, yet at the same time the variability between handwriting samples should also be taken into account. Even with a given model, synthesis should not be deterministic, since the variations found in human handwriting are stochastic.

Applications of handwriting synthesis include the automatic creation of personalized handwritten documents, the generation of large amounts of annotated handwritten data for training recognition engines, and writer-independent matching and retrieval of handwritten documents.

For synthesis of Indic scripts, we model the handwriting at two levels. A stroke level model is used to capture the writing style and hand movements of the individual strokes. A space-time layout model is then used to arrange the synthesized strokes to form the words. Both the stroke model and the layout model can be learned from examples, and the method can learn from a single example, as well as a large collection to capture the variations. The model also allows us to synthesize the words in multiple Indic scripts through transliteration.

Annotation and Search of Handwritten Data

Annotation of handwriting is the process of labeling input data for training, for a variety of handwriting analysis problems such as handwriting recognition and writer identification. However, manual annotation of large datasets is a tedious, expensive, and error-prone process, especially at the character and stroke level. The lack of proper linguistic resources in the form of annotated datasets is a major hurdle in building recognizers for these languages.

In many practical situations, a plain transcript of the handwritten data is available, which can be used to make the annotation process easier. Data collection can be carried out in different settings: unrestricted data, designed text, dictation, and data generation (using handwriting synthesis). A parallel text is available in all the above cases except that of unrestricted data. For annotation, we use the model-based handwriting synthesis unit described above to map the text corpora to the handwriting space, and the annotation is propagated to word and character levels using elastic matching of handwriting. Stroke-level annotation for online handwriting recognition is currently done using semi-automatic tools.
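The elastic matching used for propagating labels can be illustrated with dynamic time warping (DTW) on 1-D toy "trajectories"; the features and sequences here are illustrative assumptions, not the actual handwriting representation.

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences: an
    elastic alignment that tolerates differences in writing speed."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# A time-warped copy of a pen-trajectory feature should match its source
# better than an unrelated stroke does.
stroke = np.sin(np.linspace(0, 3.1, 40))
warped = np.sin(np.linspace(0, 3.1, 55))    # same shape, different timing
other = np.cos(np.linspace(0, 3.1, 55))
```

Aligning synthesized words to real handwriting this way lets word-level labels flow down to characters along the warping path.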

Online Handwriting Recognition


We aim at building robust and accurate recognition engines for Indian languages, specifically Hindi, Telugu and Malayalam. Indian languages and their writing have some features which necessitate a different approach to recognition compared to English:

  • The primary unit of words is an akshara, which is a combination of multiple consonants ending in a vowel.
  • Each language has a very large set of aksharas, usually numbering in the thousands.
  • Each akshara is composed of one or more complete strokes, never partial strokes.

A robust and accurate recognition system poses a variety of research challenges, especially for Indian languages. We concentrate on problems such as building large-class hierarchical classifiers, specifically for handwriting recognition and OCR; discriminative classifiers for differentiating similar-looking time-series data (strokes); compact representation of class models; efficient spell checkers for languages with a large number of word-form variations; etc.

Writer Identification

Writer identification is the process of identifying the authorship of handwritten documents. The relevance of a document in civil and criminal litigation depends primarily on our ability to assign authorship to the particular document. For more information on writer identification, click here.

Related Publications

  • Anoop M. Namboodiri and Sachin Gupta - Text Independent Writer Identification from Online Handwriting, International Workshop on Frontiers in Handwriting Recognition (IWFHR'06), October 23-26, 2006, La Baule, Centre de Congrès Atlantia, France. [PDF]

  • Anand Kumar, A. Balasubramanian, Anoop M. Namboodiri and C.V. Jawahar - Model-Based Annotation of Online Handwritten Datasets, International Workshop on Frontiers in Handwriting Recognition (IWFHR'06), October 23-26, 2006, La Baule, Centre de Congrès Atlantia, France. [PDF]

  • Karteek Alahari, Satya Lahari Putrevu and C.V. Jawahar - Learning Mixtures of Offline and Online Features for Handwritten Stroke Recognition, Proc. 18th IEEE International Conference on Pattern Recognition(ICPR'06), Hong Kong, Aug 2006, Vol. III, pp.379-382. [PDF]

  • C. V. Jawahar and A. Balasubramanian - Synthesis of Online Handwriting in Indian Languages, International Workshop on Frontiers in Handwriting Recognition (IWFHR'06), October 23-26, 2006, La Baule, Centre de Congrès Atlantia, France. [PDF]

  • Karteek Alahari, Satya Lahari P and C. V. Jawahar - Discriminant Substrokes for Online Handwriting Recognition, Proceedings of Eighth International Conference on Document Analysis and Recognition(ICDAR), Seoul, Korea 2005, Vol 1, pp 499-503. [PDF]

  • A. Bhaskarbhatla, S. Madhavanath, M. Pavan Kumar, A. Balasubramanian, and C. V. Jawahar - Representation and Annotation of Online Handwritten Data, Proceedings of the International Workshop on Frontiers in Handwriting Recognition(IWFHR), Oct. 2004, Tokyo, Japan, pp. 136--141. [PDF]

  • Pranav Reddy and C. V. Jawahar, The Role of Online and Offline Features in the Development of a Handwritten Signature Verification System, Proceedings of the National Conference on Document Analysis and Recognition(NCDAR), Jul. 2001, Mandya, India, pp. 85--94. [PDF]


 Associated People

  • A. Balasubramanian
  • Naveen Chandra Tewari
  • Anurag Mangal
  • Anil Gavini
  • Karteek Alahari
  • Sachin Gupta
  • Geetika Katragadda
  • Anubhaw Srivastava
  • Anand Kumar
  • Amit Sangroya
  • Haritha Bellam
  • Rama Praveen