Projected Texture for 3D Object Recognition


Introduction

Three-dimensional objects are characterized by their shape, which can be thought of as the variation in depth over the object from a particular viewpoint. These variations could be deterministic, as in the case of rigid objects, or stochastic, for surfaces containing a 3D texture. The depth variations are lost during the process of imaging, and what remains are the intensity variations induced by shape and lighting, as well as focus variations. Algorithms that utilize 3D shape for classification try to recover the lost 3D information from the intensity or focus variations, or from additional cues such as multiple images or structured lighting. This process is computationally intensive and error prone. Once the depth information is estimated, one still needs to characterize the object using shape descriptors for the purpose of classification.

Image-based classification algorithms try to characterize the intensity variations of the image of the object for recognition. As noted above, these intensity variations are affected by the illumination and the pose of the object, and such algorithms attempt to derive descriptors that are invariant to changes in lighting and pose. Although image-based classification algorithms are more efficient and robust, their classification power is limited, as the 3D information is lost during the imaging process.

We propose the use of structured lighting patterns, which we refer to as projected texture, for the purpose of object recognition. The depth variations of the object induce deformations in the projected texture, and these deformations encode the shape information. The primary idea is to view the deformation pattern as a characteristic property of the object and to use it directly for classification, instead of trying to recover the shape explicitly. To achieve this, we need to use an appropriate projection pattern and derive features that sufficiently characterize the deformations. The patterns required can be quite different depending on the nature of the object's shape and its variation across objects.


3D Texture Classification

A feature "Normalized Histogram of Derivative of Gradients (NHoGD) is proposed to capture deformation statistic for parallel projection patterns.

Gradient directions in images are the directions of maximal intensity variation. In our scenario, the gradient directions indicate the direction of the projected lines. As the lines get deformed by surface height variations, we compute the derivative of the gradient directions along both the x and y axes to measure the rate at which the surface height varies. The derivatives of gradients are computed at each pixel in the image, and the texture is characterized by a Histogram of the Derivatives of Gradients (HoDG). The gradient-derivative histogram is a good indicator of the nature of surface undulations in a 3D texture. For classification, we treat the histogram as a feature vector to compare two 3D textures. As the distance computation involves comparing corresponding bins from different images, we normalize the counts in each bin of the histogram across all the samples in the training set. This normalization lets us weight the corresponding bins of two histograms equally, and hence employ the Euclidean distance for comparing them. The normalized histogram, or NHoDG, is a simple but extremely effective feature for discriminating between texture classes. The figure on the right illustrates the computation of the NHoDG feature from a simple image with a bell-shaped intensity variation.
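A minimal sketch of this computation is given below. The bin count, the use of finite-difference gradients, and the per-bin z-score normalisation are our assumptions for illustration; the published feature may differ in these details.

```python
import numpy as np

def nhodg(image, bins=36):
    """Histogram of Derivatives of Gradients for a single image (sketch)."""
    gy, gx = np.gradient(image.astype(float))
    theta = np.arctan2(gy, gx)        # gradient direction at each pixel
    dty, dtx = np.gradient(theta)     # rate of change of direction along y, x
    hx, _ = np.histogram(dtx, bins=bins, range=(-np.pi, np.pi))
    hy, _ = np.histogram(dty, bins=bins, range=(-np.pi, np.pi))
    return np.concatenate([hx, hy]).astype(float)

def normalize_bins(train_feats):
    """Normalize each bin across the training set so that corresponding
    bins contribute equally to the Euclidean distance."""
    F = np.asarray(train_feats, dtype=float)
    mu, sd = F.mean(axis=0), F.std(axis=0) + 1e-8
    return (F - mu) / sd, (mu, sd)
```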

Category Recognition for Rigid Objects

The primary concern in developing a representation for an object category is that the description should be invariant to both the shape and the pose of the object. Note that the use of projected patterns allows us to ignore object texture and concentrate only on shape. Approaches such as 'bag of words' computed from interest points have been successfully employed for image-based object category recognition.

Our approach is similar in spirit in how it achieves pose invariance. We learn the class of local deformations that are possible for each category of objects by creating a codebook of such deformations from a training set. Each object is then represented as a histogram of local deformations based on the codebook. The figure on the left illustrates the computation of the feature vector from a scene with projected texture.

There are two primary concerns to be addressed when developing a parts-based shape representation. First, the location of the points from which the local shape descriptor is computed is important for position invariance. In image-based algorithms, the patches are localized using an interest operator computed from object texture or edges. In our case, however, the primary objective is to avoid using texture information and to concentrate on the shape information provided by the projected texture. Hence we use a set of overlapping windows covering the whole scene for the computation of local deformations; the codebook-based representation then lets us concentrate on the object deformations for recognition. Second, the description of the local deformations should be sufficient to distinguish between the various local surface shapes within the class of objects. The feature vector used exploits the periodic nature of the projected patterns. Since the Fourier representation is an effective descriptor for periodic signals, and since we are interested in the nature of a deformation and not its exact location, we compute the magnitude, or absolute value, of the Fourier coefficients (AFC) of each window patch as our feature vector. To make comparisons in a Euclidean space more effective, we use a logarithmic representation of these coefficients (LAFC). We show that this simple Fourier-magnitude representation of the patches achieves the discriminative power that we seek.

The feature extraction process proceeds as follows. The images in the training set are divided into a set of overlapping windows of size 20×20 (decided experimentally). Each window is then represented by the magnitude of its Fourier representation on a logarithmic scale. This results in a 200-dimensional feature vector for each window (due to the symmetry of the Fourier representation). A k-means clustering of the windows in this feature space identifies the dominant pattern deformations, which form the codebook.
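The sketch below illustrates this pipeline under stated assumptions: the window step, the codebook size, the log scaling, and the crude "first half of the coefficients" handling of Fourier symmetry are our choices, not necessarily those of the original system.

```python
import numpy as np
from sklearn.cluster import KMeans

def lafc_windows(image, win=20, step=10):
    """Log Absolute Fourier Coefficients for overlapping windows."""
    feats = []
    H, W = image.shape
    for y in range(0, H - win + 1, step):
        for x in range(0, W - win + 1, step):
            patch = image[y:y + win, x:x + win].astype(float)
            mag = np.abs(np.fft.fft2(patch))
            half = mag.reshape(-1)[:win * win // 2]  # crude half: conjugate symmetry
            feats.append(np.log1p(half))             # 200-D for a 20x20 window
    return np.array(feats)

def build_codebook(training_images, k=128):
    """K-means over all training windows gives the deformation codebook."""
    all_feats = np.vstack([lafc_windows(im) for im in training_images])
    return KMeans(n_clusters=k, n_init=4).fit(all_feats)

def deformation_histogram(image, codebook):
    """Represent an object as a histogram of local deformation codewords."""
    labels = codebook.predict(lafc_windows(image))
    return np.bincount(labels, minlength=codebook.n_clusters) / len(labels)
```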



Recognition of Aligned Deterministic Shapes

We take the example of hand-geometry-based person authentication to demonstrate our approach, using a dataset collected from 181 users with peg-based alignment. We divide the hand image into a set of non-overlapping sub-windows and compute the local textural characteristics of each window using a bank of 24 Gabor filters with 8 orientations and 3 scales (or frequencies).
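A sketch of such a Gabor-bank feature extractor is shown below; the kernel size, the wavelengths, and the 8×8 sub-window grid are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

def gabor_bank(orientations=8, wavelengths=(4, 8, 16), ksize=31):
    """24 Gabor filters: 8 orientations x 3 scales (wavelengths assumed)."""
    bank = []
    for lambd in wavelengths:
        for i in range(orientations):
            theta = i * np.pi / orientations
            bank.append(cv2.getGaborKernel((ksize, ksize), sigma=0.56 * lambd,
                                           theta=theta, lambd=lambd,
                                           gamma=0.5, psi=0))
    return bank

def hand_features(image, bank, grid=(8, 8)):
    """Mean absolute filter response per non-overlapping sub-window."""
    h, w = image.shape[0] // grid[0], image.shape[1] // grid[1]
    feats = []
    for kern in bank:
        resp = np.abs(cv2.filter2D(image.astype(np.float32), cv2.CV_32F, kern))
        for r in range(grid[0]):
            for c in range(grid[1]):
                feats.append(resp[r * h:(r + 1) * h, c * w:(c + 1) * w].mean())
    return np.array(feats)
```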


Related Publications

  • Avinash Sharma, Nishant Shobhit and Anoop M. Namboodiri - Projected Texture for Hand Geometry based Authentication, Proceedings of CVPR Workshop on Biometrics, Anchorage, Alaska, USA, 28 June 2008. IEEE Computer Society. [PDF]


  • Avinash Sharma, Anoop Namboodiri - Projected Texture for classification of 3D Texture Surface, Submitted to ECCV 2008 (Results awaited)
  • Avinash Sharma, Anoop Namboodiri - Object Category Recognition with Projected Texture, Submitted to ICPR 2008 (Results awaited)

  • Avinash Sharma - A Technical Report on Projected Texture for Object Recognition

Associated People

Learning Appearance Models


Introduction

Our research focuses on learning appearance models from images and videos that can be used for a variety of tasks such as recognition, detection, and classification. Prior information, such as geometry and kinematics, is used to improve the quality of the learnt appearance models, enabling better performance at these tasks.



Dynamic Activity Recognition

Many human activities, such as jumping and squatting, have a correlated spatiotemporal structure: they are composed of homogeneous units. These units, which we refer to as actions, are often common to more than one activity. It is therefore essential to have a representation that can capture such activities effectively. To this end, we model the frames of an activity as a mixture of actions and employ a probabilistic approach to learn their low-dimensional representation. We present recognition results on seven activities performed by various individuals; the results demonstrate the versatility of the model and its ability to capture the ensemble of human activities.
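As a rough illustration of the idea, the sketch below uses a Gaussian mixture over per-frame features as a stand-in for the probabilistic model: each mixture component plays the role of an action, and a video is summarised by its averaged component responsibilities. The feature choice, the component count, and the use of a GMM are our assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def learn_actions(frame_feats_per_video, n_actions=10):
    """Fit a mixture model over per-frame features pooled from all videos;
    each mixture component stands in for one 'action' unit."""
    gmm = GaussianMixture(n_components=n_actions, covariance_type='diag',
                          random_state=0)
    gmm.fit(np.vstack(frame_feats_per_video))
    return gmm

def activity_descriptor(frame_feats, gmm):
    """Average the per-frame component responsibilities: a compact,
    low-dimensional description of an activity as a mixture of actions."""
    return gmm.predict_proba(frame_feats).mean(axis=0)
```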



Boosting Appearance Models using Geometry

We developed a novel method to construct an eigen space representation from a limited number of views that is equivalent to the one typically obtained from a large number of images. The procedure implicitly incorporates a novel-view synthesis algorithm in the eigen space construction process, so that the information inherent in an appearance representation is enhanced using geometric computations. We experimentally verify the performance for orthographic, affine and projective camera models. Recognition results on the COIL and SOIL image databases are promising.
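For reference, the sketch below shows only the standard eigen space construction by SVD on a stack of views; the geometric novel-view synthesis step that makes the limited-view case work (the contribution of this method) is omitted.

```python
import numpy as np

def build_eigenspace(views, n_components=20):
    """Eigen space from a stack of views (original plus any synthesized)."""
    X = np.stack([v.ravel().astype(float) for v in views])
    mean = X.mean(axis=0)
    # SVD of the centred data; rows of Vt are the eigen-images.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(view, mean, basis):
    """Coefficients of a view in the eigen space, used for recognition."""
    return basis @ (view.ravel().astype(float) - mean)
```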


Face Video Manipulation using Tensorial Factorization


We use tensor factorization for manipulating videos of human faces. Decomposing a video, represented as a tensor, into non-negative rank-1 factors results in sparse, separable factors equivalent to a local parts decomposition of the object in the video. Such a decomposition can be used for tasks like expression transfer and face morphing. For instance, a facial expression video can be represented as a tensor, which is then factorized; the factors that best represent the expression can be identified and transferred to another face video, thus transferring the expression. A good solution to the expression transfer problem would normally require explicit modeling of the expression and its interaction with the underlying face content. The method proposed here is instead purely appearance based, and the results demonstrate that it is a simple alternative to the popular, more complex solutions.
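The sketch below shows a generic non-negative CP decomposition (a sum of non-negative rank-1 terms) of a 3-way video tensor by multiplicative updates. The rank, iteration count, and initialisation are arbitrary choices, and the subsequent step of identifying and swapping expression factors between two videos is not shown.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product, shape (J*K, R)."""
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, B.shape[1])

def unfold(T, mode):
    """Mode-m matricization of a tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def nncp(T, rank=8, iters=200, eps=1e-9):
    """Non-negative CP decomposition of a 3-way (h x w x frames) tensor
    via multiplicative updates; returns one factor matrix per mode."""
    rng = np.random.default_rng(0)
    A = [rng.random((d, rank)) for d in T.shape]
    for _ in range(iters):
        for m in range(3):
            B, C = [A[i] for i in range(3) if i != m]
            num = unfold(T, m) @ khatri_rao(B, C)
            den = A[m] @ ((B.T @ B) * (C.T @ C))
            A[m] *= num / (den + eps)
    return A
```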


Related Publications

  • S. Manikandan, Ranjeeth Kumar and C.V. Jawahar - Tensorial Factorization Methods for Manipulation of Face Videos, The 3rd International Conference on Visual Information Engineering, 26-28 September 2006, Bangalore, India. [PDF]

  • Ranjeeth Kumar, S. Manikandan and C. V. Jawahar - Task Specific Factors for Video Characterization, 5th Indian Conference on Computer Vision, Graphics and Image Processing, Madurai, India, LNCS 4338 pp.376-387, 2006. [PDF]

  • Paresh K. Jain, Kartik Rao P. and C. V. Jawahar - Computing Eigen Space from Limited Number of Views for Recognition, 5th Indian Conference on Computer Vision, Graphics and Image Processing, Madurai, India, LNCS 4338 pp.662-673, 2006. [PDF]

  • S. S. Ravi Kiran, Karteek Alahari and C. V. Jawahar, Recognizing Human Activities from Constituent Actions, Proceedings of the National Conference on Communications (NCC), Jan. 2005, Kharagpur, India, pp. 351-355. [PDF]


Associated People

Depth-Image Representations


Introduction

Depth images are viable representations that can be computed from the real world using cameras and/or other scanning devices. A depth map provides a 2.5D structure of the scene: a visibility-limited model that can be rendered easily using graphics techniques. A set of depth images can provide hole-free rendering of the scene, with multiple views blended to render smoothly and without holes. Such a representation of the scene is bulky and needs good algorithms for efficient representation and real-time rendering. A GPU-based algorithm can render large models represented using depth images (DIs) in real time.

The image representation of the depth map may not lend itself well to standard image compression techniques, which are psychovisually motivated. The scene representation using multiple depth images contains redundant descriptions of common parts and can be compressed jointly. Compressing the depth maps with standard techniques such as LZW and JPEG, and comparing the quality of rendered novel views while varying the JPEG quality factor, gives a good trade-off analysis between quality and compression ratio. Multiview compression of the texture images can be performed by exploiting constraints between views such as disparity, the epipolar constraint, multilinear tensors, etc.


GPU Rendering & DI Compression


We aim at rendering big, complex scenes efficiently from their depth maps. Some of the features of the system are ::

  • The novel viewpoint is not restricted and can be anywhere in the scene, unlike view morphing.
  • The visibility-limited aspect of the representation provides several locality properties: a new view is affected only by the depths and textures in its vicinity.
  • Multiple depth images are used to fill in the hole regions created by the lack of complete information in a single depth map.
  • Only views that are valid according to the thresholding angle are processed for rendering, thereby reducing the computation time.
  • The GPU algorithm gives several times higher FPS than the CPU algorithm.
  • Frame buffer objects and vertex buffer objects improve the performance and memory management of the rendering.
  • Resolution can be changed by subsampling the grid, thus reducing the number of primitives to be drawn.

The scene representation using multiple depth images contains redundant descriptions of common parts. Our compression methods aim at exploiting this redundancy for a compact representation. The various kinds of compression algorithms tried are ::

  • LZW compression (lossless) :: applied to the depth maps using gzip.
  • JPEG compression :: depth maps are compressed with various quality factors.
  • Quad-tree based compression :: if a block of an image/depth map holds one particular value throughout, it is stored as a single node in the tree (see the sketch after this list).
  • MPEG compression :: the frames are assembled into a movie sequence to obtain the encoded images.
  • Geometry proxy model :: an approximate description of the scene used to model the common, position-independent scene structure.
  • Progressive compression :: differences are added bit by bit progressively, allowing smoother levels of detail.
  • Quality levels :: levels of detail (LODs) are varied to control the rendering time through the number of primitives or the size of the model and texture.
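As an illustration of the quad-tree idea mentioned above, a minimal encoder is sketched below; the tolerance parameter and the square power-of-two assumption are our simplifications.

```python
import numpy as np

def quadtree_encode(depth, tol=0):
    """Encode a square, power-of-two depth map: any block whose values
    agree (within tol) collapses to a single leaf node."""
    def build(y, x, size):
        block = depth[y:y + size, x:x + size]
        # int() casts avoid uint8 wrap-around in the range test.
        if size == 1 or int(block.max()) - int(block.min()) <= tol:
            return ('leaf', size, block.flat[0])
        h = size // 2
        return ('node', [build(y, x, h),     build(y, x + h, h),
                         build(y + h, x, h), build(y + h, x + h, h)])
    n = depth.shape[0]
    assert depth.shape == (n, n) and n & (n - 1) == 0, "square power-of-two map"
    return build(0, 0, n)
```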

Related Publications

  • Pooja Verlani, Aditi Goswami, P.J. Narayanan, Shekhar Dwivedi and Sashi Kumar Penta - Depth Images: Representations and Real-Time Rendering, Third International Symposium on 3D Data Processing, Visualization and Transmission, Chapel Hill, North Carolina, June 14-16, 2006. [PDF]

  • Sashi Kumar Penta and P. J. Narayanan - Compression of Multiple Depth-Maps for IBR, The Visual Computer, International Journal of Computer Graphics, Vol. 21, No. 8-10, September 2005, pp. 611--618. [PDF]

  • P. J. Narayanan, Sashi Kumar P and Sireesh Reddy K - Depth+Texture Representation for Image Based Rendering, Proceedings of the Indian Conference on Vision, Graphics and Image Processing (ICVGIP), Dec. 2004, Calcutta, India, pp. 113--118. [PDF]


Associated People

  • Pooja Verlani
  • Aditi Goswami
  • Naveen Kumar
  • Saurabh Aggrawal
  • Shekhar Dwivedi
  • Sireesh Reddy K
  • Sashi Kumar Penta
  • Prof. P. J. Narayanan

Biological Vision


Introduction

The perceptual mechanisms used by different organisms to negotiate the visual world are fascinatingly diverse. Even if we consider only the sensory organs of vertebrates, such as the eye, there is much variety. Several disciplines have approached the problem of investigating how sensory, motor and central visual systems function and are organised. The area of biological vision aims to build a computational understanding of various brain mechanisms. Synergy between biological and computer vision research can be found in low-level vision: substantial insights about the processes for extracting colour, edge, motion and spatial-frequency information from images have come from combining computational and neuro-physiological constraints. Understanding human perception and vision is considered an early step towards identifying objects and understanding scenes.


Work Undertaken


:: Towards Understanding Texture Processing :: 

A fundamental goal of texture research is to develop automated computational methods for retrieving visual information and understanding image content based on textural properties in images. A synergy between biological and computer vision research in low-level vision can give substantial insights into the processes for extracting color, edge, motion, and spatial-frequency information from images. In this thesis, we seek to understand the texture processing that takes place in low-level human vision in order to develop new and effective methods for texture analysis in computer vision. Of interest are the different representations formed by the early stages of the HVS and the visual computations they carry out to handle various texture patterns. Such information is needed to identify the mechanisms that can be used in texture analysis tasks.

:: Biologically Inspired Interest Point Operator ::
Interest point operators (IPOs) are used extensively for reducing computational time and improving the accuracy of several complex vision tasks such as object recognition and scene analysis; SIFT, SURF and Harris corners are popular examples. Though a large number of IPOs exist in the vision literature, most of them rely on low-level features such as color and edge orientation, making them sensitive to degradation in the images.

The human visual system (HVS) performs these tasks with seemingly little effort and is robust to such degradations, employing spatial attention mechanisms to reduce the computational burden. Extensive studies of these spatial attention mechanisms have led to several computational models (e.g. Itti and Koch). However, very few of these models have found successful application in computer vision tasks, partly owing to their prohibitive computational cost.

Computational attention systems have used either top-down or bottom-up information. Using both types of information is an attractive choice, for top-down knowledge is quite helpful, particularly when images are degraded [Antonio Torralba]. Our work focuses on developing a robust, biologically-inspired IPO capable of utilizing top-down knowledge. The operator will be tested as a feature detector/descriptor for monocular visual SLAM.

Antonio Torralba, Contextual Priming for Object Detection, IJCV, Vol. 53, No. 2, 2003, pp. 169--191.
Laurent Itti and Christof Koch, A saliency-based search mechanism for overt and covert shifts of visual attention, Vision Research, Vol. 40, 2000, pp. 1489--1506.
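For illustration, the sketch below computes a crude bottom-up saliency map from centre-surround differences, loosely in the spirit of the Itti-Koch model, and picks its local maxima as interest points. The scale pairs and the peak selection are our simplifications, and the top-down component discussed above is not included.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def saliency(gray):
    """Bottom-up saliency: summed centre-surround (DoG) responses
    over a few scales."""
    gray = gray.astype(float)
    s = np.zeros_like(gray)
    for centre, surround in [(1, 4), (2, 8), (4, 16)]:
        s += np.abs(gaussian_filter(gray, centre) -
                    gaussian_filter(gray, surround))
    return s / (s.max() + 1e-8)

def interest_points(gray, n=50, radius=9):
    """Pick the n strongest local maxima of the saliency map."""
    s = saliency(gray)
    peaks = (s == maximum_filter(s, size=radius)) & (s > 0)
    ys, xs = np.nonzero(peaks)
    keep = np.argsort(s[ys, xs])[::-1][:n]
    return list(zip(ys[keep], xs[keep]))
```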


Ongoing Projects

  • Medical Image Reconstruction on Hexagonal Grid
  • Computational Understanding of Medical Image Interpretation by Expert

Related Publications

  • N.V. Kartheek Medathati, Jayanthi Sivaswamy - Local Descriptor based on Texture of Projections, Proceedings of the Seventh Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP'10), 12-15 Dec. 2010, Chennai, India. [PDF]

  • Gopal Datt Joshi, Saurabh Garg and Jayanthi Sivaswamy - Script Identification from Indian Documents, Proceedings of IAPR Workshop on Document Analysis Systems (DAS 2006), Nelson, pp. 255-267. [PDF]

  • Gopal Datt Joshi, Saurabh Garg and Jayanthi Sivaswamy - A Generalised Framework for Script Identification, International Journal on Document Analysis and Recognition (IJDAR), 10(2), pp. 55-68, 2007. [PDF]

  • Gopal Datt Joshi and Jayanthi Sivaswamy - A Computational Model for Boundary Detection, 5th Indian Conference on Computer Vision, Graphics and Image Processing, Madurai, India, LNCS 4338 pp.172-183, 2006. [PDF]

  • Gopal Datt Joshi, and Jayanthi Sivaswamy - A Simple Scheme for Contour Detection, Proceedings of International Conference on Computer Vision and Applications (VISAP 2006), Setubal. [PDF]

  • L.Middleton and J. Sivaswamy, Hexagonal Image Processing, Springer Verlag, London, 2005, ISBN: 1-85233-914-4. [PDF]

  • Gopal Datt Joshi and Jayanthi Sivaswamy - A Multiscale Approach to Contour Detection, Proceedings of International Conference on Cognition and Recognition, pp. 183-193, Mysore, 2005. [PDF]

  • L. Middleton and J. Sivaswamy - A Framework for Practical Hexagonal-Image Processing, Journal of Electronic Imaging, Vol. 11, No. 1, January 2002, pp. 104--114. [PDF]

Associated People

The Garuda: A Scalable, Geometry Managed Display Wall


Introduction

Cluster-based tiled display walls simultaneously provide high resolution and a large display area (focus + context) and are suitable for many applications. They are also cost-effective and scalable, with low incremental costs. Garuda is a client-server display wall solution designed to use off-the-shelf graphics hardware and a standard Ethernet network. Garuda uses an object-based scene structure represented using a scene graph. The server determines the objects visible to each display tile using a novel adaptive algorithm that culls an object hierarchy to a frustum hierarchy. The required parts of the scene graph are transmitted to the clients, which cache them to exploit inter-frame redundancy in the scene. A multicast-based protocol is used to transmit the geometry, exploiting the spatial redundancy present especially on large tiled displays. A geometry-push philosophy from the server helps keep the clients in sync with one another. No node, including the server, needs to render the entire environment, making the system suitable for interactive rendering of massive models. Garuda is built on the Open Scene Graph (OSG) system and can transparently render any OSG-based application to a tiled display wall without any modification by the user.
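The sketch below conveys the flavour of culling an object hierarchy to a frustum hierarchy, using axis-aligned 2D rectangles as stand-ins for view frusta; all class names here are hypothetical placeholders, not Garuda's actual API, and the real algorithm's adaptive refinement details may differ.

```python
class Rect:
    def __init__(self, x0, y0, x1, y1):
        self.b = (x0, y0, x1, y1)
    def test(self, other):
        """Classify another Rect against this one."""
        ax0, ay0, ax1, ay1 = self.b
        bx0, by0, bx1, by1 = other.b
        if bx1 <= ax0 or bx0 >= ax1 or by1 <= ay0 or by0 >= ay1:
            return 'OUTSIDE'
        if bx0 >= ax0 and bx1 <= ax1 and by0 >= ay0 and by1 <= ay1:
            return 'INSIDE'
        return 'PARTIAL'

class FrustumNode:
    """Node of the tile-frustum hierarchy; leaves are display tiles."""
    def __init__(self, rect, children=None, tile_id=None):
        self.rect, self.children, self.tile_id = rect, children or [], tile_id
    def is_tile(self):
        return self.tile_id is not None

class ObjectNode:
    """Node of the object bounding-volume hierarchy."""
    def __init__(self, bounds, children=None, name=''):
        self.bounds, self.children, self.name = bounds, children or [], name
    def is_leaf(self):
        return not self.children

def cull(obj, fr, out):
    """Adaptively assign scene-graph nodes to the tiles that can see them."""
    hit = fr.rect.test(obj.bounds)
    if hit == 'OUTSIDE':
        return out
    if fr.is_tile():
        out.setdefault(fr.tile_id, []).append(obj.name)  # send subtree to tile
    elif hit == 'PARTIAL' and not obj.is_leaf():
        for part in obj.children:          # refine the object side
            cull(part, fr, out)
    else:
        for child in fr.children:          # descend to child frusta / tiles
            cull(obj, child, out)
    return out
```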


Features

The Garuda system provides ::

  • Cluster-based large display solution: low in cost, easier to maintain and scale than monolithic solutions.
  • Focus and context: renders large scenes with high detail; the user can see the entire scene along with its fine details.
  • Driven from graphics, not images: interactive applications gain not only in size but also in resolution.
  • Parallel rendering: the distributed rendering capability of a cluster is used for parallel rendering; no individual system has to render the entire scene.
  • Transparent rendering for OSG: any Open Scene Graph application can be rendered on the system without modification.
  • Capability to handle dynamic scenes: the system can render dynamic OSG environments at interactive frame rates.
  • Low network load: the system has very low network requirements owing to the server-push philosophy and caching at the clients.

Features of the Garuda system ::

  • Scalable to large tile configurations: up to 7×7 tiles on a single server; a hierarchy of servers can be used to support even larger tile configurations.
  • Caching at the clients and the use of multicast keep the network requirements low; the system can handle huge tile configurations on a single 100 Mbps network.
  • Using a novel culling algorithm, the system scales sub-linearly to arbitrarily large tile configurations (see the Adaptive Culling Algorithm for details).
  • No recompilation or relinking of OSG code is necessary for rendering to a display wall.
  • Using distributed rendering, the system can render massive models that could not be rendered at interactive frame rates on a single machine.


Related Publications


  • Nirnimesh, Pawan Harish and P. J. Narayanan - Garuda: A Scalable, Tiled Display Wall Using Commodity PCs, IEEE Transactions on Visualization and Computer Graphics (TVCG), Vol. 13, No. 5, pp. 864-877, 2007. [PDF]

  • Nirnimesh, Pawan Harish and P. J. Narayanan - Culling an Object Hierarchy to a Frustum Hierarchy, 5th Indian Conference on Computer Vision, Graphics and Image Processing, Madurai, India, LNCS 4338, pp. 252-263, 2006. [PDF]


Associated People