Efficient Texture Mapping by Homogeneous Patch Discovery
R. Vikram Pratap Singh
All visible objects have shape and texture. A central aim of computer graphics is to represent and render real-world objects efficiently and realistically. To make objects look realistic from a geometric point of view, both the shape and the texture of the object must be accurate. In practice, shape is either hand-crafted using 3D modeling tools such as Blender, or acquired from real-world objects using 3D reconstruction techniques. Texture is the second aspect of appearance that must be ensured to make rendered objects look real. The texture has to be pasted on the surface in such a manner that it perceptually corresponds to the correct part of the mesh. This process of pasting a texture onto the surface of a mesh model is called texture mapping. Texture mapping can be done in two ways. The first is to texture a surface by synthesizing the texture directly on the surface. The second is to wrap a synthesized texture around the surface and cut and merge the seams so that it fits correctly. To visualize this problem, we can think of the texture as a cloth: the first method is like weaving the cloth around the body so that it fits exactly, like a sweater, while the second is like cutting and stitching an already woven cloth according to the shape of the mesh model. In this thesis we propose a new method that follows the second approach. The primary goal of our method is to map a texture onto large mesh models at interactive rates, while maintaining the perceived quality.
The primary technique for mapping a flat texture (image) onto an arbitrarily shaped mesh model is to parameterize the shape, which defines a mapping from points on the mesh surface onto a 2D plane. When parameterizing a mesh model, we try to keep the geometric correspondence between the mesh vertices intact to reduce the distortion of the texture. Typically, parameterization involves solving a set of linear equations representing the geometric correspondence of the triangles. The approach defines an energy function for the mapping and searches for a global optimum that minimizes the distortion introduced by the mapping. Such methods are capable of achieving texture mappings of high perceptual quality. However, typical energy minimization procedures are computationally expensive and cannot be applied in real-time applications or to large mesh models.
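To make the "set of linear equations" concrete, the sketch below shows one classical instance of such a parameterization, Tutte's barycentric embedding: interior vertices are placed at the average of their neighbours while boundary vertices are pinned to a convex polygon, which reduces to a single linear solve. This is an illustrative example only, not the method proposed in the thesis; the tiny mesh, function name, and dense solver are made up for clarity.

```python
import numpy as np

def tutte_parameterize(n_vertices, edges, boundary):
    """Map mesh vertices to the 2D plane by solving the linear system A u = b."""
    # Pin the boundary loop to a unit circle (a convex polygon).
    t = np.linspace(0.0, 2.0 * np.pi, len(boundary), endpoint=False)
    pinned = {v: (np.cos(a), np.sin(a)) for v, a in zip(boundary, t)}

    # Build the uniform-Laplacian system (dense here, for clarity; real
    # meshes would use a sparse solver).
    A = np.zeros((n_vertices, n_vertices))
    b = np.zeros((n_vertices, 2))
    for i in range(n_vertices):
        if i in pinned:
            A[i, i] = 1.0          # boundary vertex: fixed position
            b[i] = pinned[i]
        else:
            nbrs = [v if u == i else u for (u, v) in edges if i in (u, v)]
            A[i, i] = len(nbrs)    # interior vertex: average of neighbours
            for j in nbrs:
                A[i, j] = -1.0

    return np.linalg.solve(A, b)   # rows are (u, v) texture coordinates

# A single triangle fan: vertices 0-3 on the boundary, vertex 4 interior.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 4), (2, 4), (3, 4)]
uv = tutte_parameterize(5, edges, boundary=[0, 1, 2, 3])
```

The interior vertex lands at the centroid of its pinned neighbours; energy-minimizing methods of the kind discussed above replace the uniform weights with distortion-aware ones, which is where the computational cost comes from.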
To complement the proposed texture mapping algorithm, we introduce a method to make a texture self-tileable. When the required texture is repetitive, this allows us to store only the texel structure. We present qualitative and quantitative results in comparison with several other texture mapping algorithms. The proposed algorithm is robust in terms of output quality and finds applications in scenarios such as rapid prototyping, which requires interactive texture mapping rates and the ability to deal with dynamic mesh topology. It can also be used in applications such as large monument visualization, where we need to deal with large and noisy mesh models generated using techniques such as multi-view stereo.
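As a rough illustration of what "self-tileable" means, a common simple trick is to blend the texture with a half-period rolled copy of itself, so the discontinuous borders are replaced by the seam-free centre of the rolled copy. This sketch is an assumption for illustration, not the thesis method; the function name and margin are invented.

```python
import numpy as np

def make_tileable(tex, margin):
    """Blend a texture with its half-period roll so its borders wrap seamlessly."""
    h, w = tex.shape[:2]
    # Roll so the (discontinuous) borders move to the centre of the copy.
    rolled = np.roll(np.roll(tex, h // 2, axis=0), w // 2, axis=1)
    # Weights ramp from 0 at the border to 1 once past the blend margin.
    ry = np.clip(np.minimum(np.arange(h), h - 1 - np.arange(h)) / margin, 0.0, 1.0)
    rx = np.clip(np.minimum(np.arange(w), w - 1 - np.arange(w)) / margin, 0.0, 1.0)
    wgt = np.minimum.outer(ry, rx)
    if tex.ndim == 3:              # broadcast over colour channels
        wgt = wgt[..., None]
    return wgt * tex + (1.0 - wgt) * rolled

# Toy grayscale texture: a 64x64 gradient, blended over an 8-pixel margin.
tex = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
tile = make_tileable(tex, margin=8)
```

Opposite borders of the result come from adjacent rows/columns of the original texture, so copies of the tile abut without visible seams.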
Some Results:
| Year of completion: | June 2014 |
| Advisor: | |
Related Publications
- R. Vikram Pratap Singh, Anoop M Namboodiri - Efficient texture mapping by homogeneous patch discovery, ICVGIP 2012 (Oral).
Relief carvings have certain specific attributes that make them different from regular sculptures, and these can be exploited in different computer vision tasks. Repetitive patterns are one such frequently occurring phenomenon in reliefs. Algorithms for detecting repeating patterns in images often assume that the repetition is regular and highly similar across instances. Approximate repetitions are also of interest in many domains, such as hand-carved sculptures, wall decorations, and groups of natural objects. Detection of such repetitive structures can help in applications such as image retrieval, image inpainting, and 3D reconstruction. In this work, we look at a specific class of approximate repetitions: those in images of hand-carved relief structures. We present a robust hierarchical method for detecting such repetitions. Given a single relief panel image, our algorithm finds dense matches of local features across the image at various scales. The matching features are then grouped based on their geometric configuration to find repeating elements. We also propose a method to group the repeating elements to segment the repetitive patterns in an image. In relief images, the foreground and background have nearly the same texture, and matching a single feature would not provide reliable evidence of repetition. Our grouping algorithm integrates multiple evidences of repetition to reliably find repeating patterns. The input image is processed on a scale-space pyramid to effectively detect all possible repetitions at different scales. Our method has been tested on images with a large variety of complex repetitive patterns, and the qualitative results show the robustness of our approach.

Point-based rendering suffers from the limited resolution of the fixed number of samples representing the model. At some distance, the screen-space resolution becomes high relative to the point samples, which causes under-sampling.
A better way of rendering a model is to re-sample the surface during rendering at the desired resolution in object space, guaranteeing a sampling density sufficient for the image resolution. Output-sensitive sampling samples objects at a resolution that matches the expected resolution of the output image; this is crucial for hole-free point-based rendering. Many technical issues in point-based graphics boil down to reconstruction and re-sampling. A point-based representation should be as small as possible while conveying the shape well.
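The "resolution that matches the expected resolution of the output image" can be made concrete with a back-of-the-envelope calculation: for a pinhole camera, samples must be no farther apart in object space than the width of one pixel's footprint at the surface's depth, or holes appear. The function below is a hypothetical sketch of this bound for a fronto-parallel surface; the camera model and parameter names are assumptions, not part of the work described above.

```python
import math

def required_sample_spacing(depth, fov_y_deg, image_height_px):
    """Object-space spacing between samples for ~1 px screen-space spacing."""
    # Height of the view frustum at this depth, in object-space units.
    frustum_height = 2.0 * depth * math.tan(math.radians(fov_y_deg) / 2.0)
    # One pixel row therefore covers this much of a fronto-parallel surface;
    # sampling at or below this spacing leaves no holes between splats.
    return frustum_height / image_height_px

# A surface 10 units from the camera, rendered at 1080 px with a 60 degree FOV:
spacing = required_sample_spacing(10.0, 60.0, 1080)
```

Output-sensitive sampling re-evaluates this bound per frame, so distant surfaces get sparse samples and close-ups get dense ones.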
Reconstructing geometric models of relief carvings is also of great importance in digitally preserving heritage artifacts. In the case of reliefs, laser scanners and structured lighting techniques are not always feasible, and can be very expensive, given the uncontrolled environment. Single-image shape from shading is an under-constrained problem that tries to solve for the surface normals given the intensity image; various constraints are used to make the problem tractable. To avoid the uncontrolled lighting, we use a pair of images taken with and without flash and compute an image under a known illumination. This image is used as the input to the shape reconstruction algorithms. We present techniques that reconstruct shape from relief images using prior information learned from examples. We learn the variations in geometric shape corresponding to image appearances under different lighting conditions using sparse representations. Given a new image, we estimate the most appropriate shape that will result in the given appearance under the specified lighting conditions. We integrate the prior with the normals computed from the reflectance equation in a MAP framework. We test our approach on relief images and compare with state-of-the-art shape from shading algorithms.
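The flash/no-flash step above relies on a simple linearity argument: in linear intensity, the no-flash image captures the ambient light alone, so subtracting it from the flash image leaves an image lit only by the flash, i.e. under a known illumination. A minimal sketch of that subtraction, with toy data (real photographs would first need to be linearised by inverting the camera's gamma curve):

```python
import numpy as np

def flash_only_image(flash, no_flash):
    """Subtract the ambient (no-flash) image to get a flash-lit-only image."""
    # Clip at zero: noise can make the difference slightly negative.
    return np.clip(flash.astype(np.float64) - no_flash.astype(np.float64),
                   0.0, None)

# Toy example: ambient light contributes 0.2 everywhere on a small patch,
# and the flash adds 0.5 on top of it.
ambient = np.full((4, 4), 0.2)
flash = ambient + 0.5
pure = flash_only_image(flash, ambient)   # illumination from the flash alone
```

Because the flash position and intensity are known, the resulting image satisfies a reflectance equation with a known light source, which is what the shape reconstruction step requires.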