Deep Learning Methods for 3D Garment Digitization


Astitva Srivastava

Abstract

The reconstruction of 3D objects from monocular images is an active field of research in 3D computer vision, further boosted by advances in deep learning. In the context of the human body, modeling realistic 3D virtual avatars from 2D images is a recent trend, driven by the advent of AR/VR and the metaverse. The problem is challenging owing to the non-rigid nature of the human body, and especially because of garments. Various attempts have been made to solve it, at least for relatively tight clothing styles, but loose clothing still poses a major challenge. The problem has also sparked considerable interest in the fashion e-commerce domain, where the objective is to model 3D garments independently of the underlying body in order to enable applications such as virtual try-on. 3D garment digitization has garnered a lot of interest in the past few years, as demand for online window-shopping and other e-commerce activities has grown, a trend accelerated by the COVID-19 crisis. Though the problem seems intriguing, solving it is not as straightforward as it looks. Most existing work in the field consists of deep-learning-based solutions. The majority of these methods rely on predefined garment templates, which make texture synthesis easier but restrict usage to the fixed set of garment styles for which templates are available. Additionally, these methods do not deal with issues such as complex poses and self-occlusions, which are very common in in-the-wild settings. Template-free methods have also been explored and enable modeling arbitrary clothing styles; however, they lack texture information, which is essential for a high-quality photorealistic appearance.

This thesis aims to resolve the aforementioned issues with novel solutions. The main objective is 3D digitization of garments from a monocular RGB image of a person wearing the garment, in both template-based and template-free settings. We first address challenges in existing state-of-the-art template-based methods, handling complex human poses, occlusions, etc. with a robust keypoint regressor that estimates keypoints on the input monocular image. These keypoints define a thin-plate-spline (TPS) based warping of texture from the input image to the UV space of a predefined template. We then use a deep inpainting network to handle missing texture information. To train these neural networks, we curate a synthetic dataset of garments with varying textures, draped on 3D human characters in various complex poses. This dataset enables robust training and generalization to real images, and we achieve state-of-the-art results for specific clothing styles (e.g. t-shirt and trouser).

However, template-based methods cannot model arbitrary garment styles, so we next target arbitrary styles in a template-free setting. Existing state-of-the-art template-free methods can model the geometric details of arbitrary garment styles to some extent, but fail to recover texture. To model arbitrary garment geometry, we adopt an explicit, sparse representation originally introduced for modeling the human body, which also handles self-occlusion and loose clothing. We extend this representation with semantic segmentation information to differentiate between clothing styles (top wear/bottom wear) and the human body present in the input image. Furthermore, the representation is exploited in a novel way to provide seams for texture mapping, retaining high-quality texture detail and enabling many useful applications such as texture editing, appearance manipulation, and texture super-resolution. The proposed method is the first to model arbitrary garment styles while also recovering texture. We evaluate our solutions on several publicly available datasets, outperforming existing state-of-the-art methods. We also discuss the limitations of the proposed methods, suggest potential solutions worth exploring, and outline future extensions. We believe this thesis significantly advances the research landscape in 3D garment digitization and accelerates progress in this direction.
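
The template-based pipeline described above hinges on a TPS warp of image texture into a template's UV space, driven by corresponding keypoints. The following is a minimal sketch of that warping step using OpenCV's shape module (requires opencv-contrib-python); the keypoint sets, UV resolution, and normalization convention are placeholder assumptions, not the thesis's actual implementation.

```python
import cv2  # requires opencv-contrib-python for the shape module
import numpy as np

def warp_texture_to_uv(image, image_keypoints, uv_keypoints, uv_size=512):
    """Warp garment texture from an input photo into a template's UV space via a
    thin-plate-spline (TPS) transform defined by keypoint correspondences.
    image_keypoints: (N, 2) pixel coordinates in the input image.
    uv_keypoints:    (N, 2) coordinates in normalized [0, 1] UV space (assumed)."""
    h, w = image.shape[:2]
    # Work at the UV-map resolution so the warped output is directly a UV texture.
    canvas = cv2.resize(image, (uv_size, uv_size))
    scale = np.float32([uv_size / w, uv_size / h])
    src = (np.float32(image_keypoints) * scale).reshape(1, -1, 2)
    dst = (np.float32(uv_keypoints) * uv_size).reshape(1, -1, 2)
    matches = [cv2.DMatch(i, i, 0) for i in range(src.shape[1])]

    tps = cv2.createThinPlateSplineShapeTransformer()
    # Note: depending on the OpenCV build, the two point sets may need to be
    # swapped to obtain the image -> UV direction (warpImage uses backward mapping).
    tps.estimateTransformation(dst, src, matches)
    return tps.warpImage(canvas)  # unobserved regions are later filled by inpainting
```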

Year of completion: August 2022
Advisor: Avinash Sharma

Skeleton-based Action Recognition in Non-contextual, In-the-wild and Dense Joint Scenarios


Neel Trivedi

Abstract

Human action recognition, with its irrefutable and varied use cases across surveillance, robotics, human-object interaction analysis and many other fields, has gained critical importance and attention in computer vision. Traditionally based entirely on RGB sequences, the action recognition domain has in recent years shifted its focus towards skeleton sequences, owing to the easy availability of skeleton-capture apparatus and the release of large-scale datasets. Skeleton-based human action recognition, which offers advantages in privacy, robustness and computational efficiency over traditional RGB-based action recognition, is the primary focus of this thesis.

Ever since the release of the large-scale skeleton action datasets NTURGB+D and NTURGB+D 120, the community has focused on developing increasingly complex approaches, ranging from CNNs to GCNs and, more recently, transformers, to achieve the best classification accuracy on these datasets. However, in this race for state-of-the-art performance, the community has overlooked a major drawback at the data level that bottlenecks even the most sophisticated approaches. This drawback is where we begin our exploration. The pose tree provided in the NTURGB+D datasets contains only 25 joints, of which only 6 (3 per hand) are finger joints. This is a major limitation, since 3 finger-level joints are not sufficient to distinguish between action categories such as "Thumbs up" and "Thumbs down", or "Make ok sign" and "Make victory sign". To address this bottleneck, we introduce two new pose-based human action datasets, NTU60-X and NTU120-X, which extend the largest existing action recognition dataset, NTURGB+D. In addition to the 25 body joints per skeleton in NTURGB+D, the NTU60-X and NTU120-X datasets include finger and facial joints, enabling a richer skeleton representation. We appropriately modify state-of-the-art approaches to enable training on the introduced datasets. Our results demonstrate the effectiveness of these NTU-X datasets in overcoming the aforementioned bottleneck and improving state-of-the-art performance, both overall and on previously worst-performing action categories.

Pose-based action recognition is predominantly tackled by approaches that treat the input skeleton in a monolithic fashion, i.e. the joints in the pose tree are processed as a whole. Such approaches ignore the fact that action categories are often characterized by localized dynamics involving only small subsets of joints, such as the hands (e.g. "Thumbs up") or legs (e.g. "Kicking"). Although part-grouping approaches exist, they do not consider each part group within the global pose frame, causing them to fall short. Further, conventional approaches employ independent modality streams (e.g. joint, bone, joint velocity, bone velocity) and train their networks multiple times on these streams, which massively increases the number of training parameters. To address these issues, we introduce PSUMNet, a novel approach for scalable and efficient pose-based action recognition. At the representation level, we propose a global-frame-based part-stream approach, as opposed to conventional modality-based streams. Within each part stream, the associated data from multiple modalities is unified and consumed by a single processing pipeline. Experimentally, PSUMNet achieves state-of-the-art performance on the widely used NTURGB+D 60/120 datasets and on the dense-joint skeleton datasets NTU60-X/120-X. PSUMNet is highly efficient and outperforms competing methods that use 100%-400% more parameters, and it also generalizes to the SHREC hand-gesture dataset with competitive performance. Overall, PSUMNet's scalability, performance and efficiency make it an attractive choice for action recognition and for deployment on compute-restricted embedded and edge devices.

Finally, we conclude this thesis by exploring new and more challenging frontiers under the umbrella of skeleton action recognition, namely "in-the-wild" and "non-contextual" skeleton action recognition. We introduce Skeletics-152, a curated 3D pose dataset derived from the RGB videos in the larger Kinetics-700 dataset, to explore in-the-wild skeleton action recognition. We further introduce Skeleton-Mimetics, a 3D pose dataset derived from the recently introduced non-contextual action dataset Mimetics. By benchmarking and analysing various approaches on these two new datasets, we lay the groundwork for future exploration of these challenging problems. Overall, this thesis draws attention to prevailing drawbacks in existing skeleton action datasets and introduces extensions to counter their shortcomings, presents a novel, efficient and highly reliable skeleton action recognition approach dubbed PSUMNet, and explores the more challenging tasks of in-the-wild and non-contextual action recognition.
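
To make the part-stream idea concrete, here is a minimal sketch of how a skeleton sequence could be split into global-frame part streams, with the joint and joint-velocity modalities unified per stream instead of being trained as separate networks. The joint groupings below are hypothetical placeholders, not PSUMNet's actual configuration.

```python
import numpy as np

# Hypothetical part groupings over a 25-joint NTU-style pose tree; the actual
# PSUMNet groupings (and the extra finger/face joints of NTU-X) will differ.
PART_GROUPS = {
    "body":  list(range(25)),                               # full skeleton
    "hands": [4, 5, 6, 7, 8, 9, 10, 11, 21, 22, 23, 24],    # arm/hand joints
    "legs":  [0, 12, 13, 14, 15, 16, 17, 18, 19],           # hip/leg joints
}

def build_part_streams(skeleton):
    """skeleton: (T, V, C) array -- T frames, V joints, C coordinates, all kept
    in the GLOBAL pose frame rather than re-centred per part.
    Returns one tensor per part stream with joint and joint-velocity modalities
    stacked along the channel axis, so a single pass per stream replaces
    separate per-modality training runs."""
    streams = {}
    for name, joints in PART_GROUPS.items():
        part = skeleton[:, joints, :]                        # global coordinates
        velocity = np.diff(part, axis=0, prepend=part[:1])   # temporal difference
        streams[name] = np.concatenate([part, velocity], axis=-1)  # (T, |joints|, 2C)
    return streams
```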

Year of completion: September 2022
Advisor: Ravi Kiran Sarvadevabhatla

Casual Scene Capture and Editing for AR/VR Applications


Pulkit Gera

Abstract

Augmented Reality and Virtual Reality (AR/VR) applications can become far more widespread if they can photo-realistically capture our surroundings and modify them in different ways. This could include editing the scene's lighting, changing the objects' materials, or augmenting virtual objects onto the scene. A significant amount of work has been done in this domain; however, most of it captures data in controlled settings using expensive setups such as light stages. These methods are impractical and cannot scale. We must therefore design solutions that capture scenes casually with off-the-shelf devices commonly available to the public. Further, the user should be able to interact with the captured scenes and modify them in interesting ways, such as editing materials or augmenting new objects into the scene. In this thesis, we study how to produce novel views of a casually captured scene and modify them in interesting ways.

First, we present a neural rendering framework for simultaneous novel view synthesis and appearance editing of a casually captured scene, using off-the-shelf smartphone cameras under known illumination. Existing approaches cannot perform novel view synthesis and edit the materials of scene objects at the same time. We propose a method that explicitly disentangles appearance from lighting while estimating radiance, and learns an independent lighting estimate of the scene. This allows us to generalize to arbitrary changes in the scene's materials while performing novel view synthesis. We demonstrate our results on synthetic and real scenes.

Next, we present PanoHDR-NeRF, a neural representation of an indoor scene's high dynamic range (HDR) radiance field that can be captured casually, without elaborate setups or complex capture protocols. First, a user captures a low dynamic range (LDR) omnidirectional video of the scene by freely waving an off-the-shelf camera around the scene. Then, an LDR2HDR network converts the captured LDR frames to HDR, which are subsequently used to train a modified NeRF++ model. The resulting PanoHDR-NeRF representation can synthesize full HDR panoramas from any location in the scene. We also show that the HDR images produced by PanoHDR-NeRF can synthesize correct lighting effects, enabling the augmentation of indoor scenes with synthetic objects that are lit correctly. Through these works, we demonstrate how scenes can be casually captured for AR/VR applications and further edited by the user.
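
As a rough illustration of why HDR panoramas matter when lighting augmented objects, the sketch below integrates an HDR equirectangular panorama into the diffuse irradiance seen by a surface normal. This is standard image-based-lighting math under an assumed coordinate convention, not code from PanoHDR-NeRF.

```python
import numpy as np

def diffuse_irradiance(hdr_pano, normal):
    """Approximate the diffuse irradiance received by a surface with the given
    unit normal by integrating an HDR equirectangular panorama of shape (H, W, 3).
    Assumes z-up with the polar angle measured from +z. Clipped LDR input would
    underestimate bright light sources, which is why HDR frames are needed."""
    H, W, _ = hdr_pano.shape
    theta = (np.arange(H) + 0.5) / H * np.pi            # polar angle per row
    phi = (np.arange(W) + 0.5) / W * 2.0 * np.pi        # azimuth per column
    phi, theta = np.meshgrid(phi, theta)
    dirs = np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)           # (H, W, 3) unit directions
    cosine = np.clip(dirs @ np.asarray(normal, float), 0.0, None)
    d_omega = (np.pi / H) * (2.0 * np.pi / W) * np.sin(theta)   # solid angle per pixel
    return (hdr_pano * (cosine * d_omega)[..., None]).sum(axis=(0, 1))
```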

Year of completion: August 2022
Advisors: P J Narayanan, Jean-François Lalonde

Towards Understanding Deep Saliency Prediction


Navyasri M

Abstract

Learning computational models for visual attention (saliency estimation) is an effort to inch machines and robots closer to human visual cognitive abilities. Data-driven efforts have dominated the landscape since the introduction of deep neural network architectures. In deep learning research, architecture design choices are often empirical and frequently lead to more complex models than necessary; this complexity, in turn, hinders meeting application requirements. In this work, we identify four key components of saliency models: input features, multi-level integration, readout architecture, and loss functions. We review existing state-of-the-art models with respect to these four components and propose novel, simpler alternatives. As a result, we present two novel end-to-end architectures, SimpleNet and MDNSal, which are neater, minimal and more interpretable, and achieve state-of-the-art performance on public saliency benchmarks. SimpleNet is an optimized encoder-decoder architecture that brings notable performance gains on the SALICON dataset (the largest saliency benchmark). MDNSal is a parametric model that directly predicts the parameters of a GMM distribution and aims to bring more interpretability to the prediction maps. The proposed saliency models run inference at 25 fps, making them suitable for real-time applications. We also explore the possibility of improving saliency prediction in videos by building on the image saliency models and existing work.
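
To illustrate the parametric readout idea behind a model like MDNSal, the sketch below renders a saliency map from a predicted 2D Gaussian mixture. The parameterization (normalized coordinates, full covariance matrices) is an assumption for illustration, not the paper's exact formulation.

```python
import numpy as np

def gmm_saliency_map(means, covs, weights, height, width):
    """Render a saliency map from 2D Gaussian-mixture parameters.
    means:   (K, 2) component means in normalized [0, 1] (x, y) coordinates
    covs:    (K, 2, 2) covariance matrices
    weights: (K,) mixture weights summing to 1."""
    ys, xs = np.mgrid[0:height, 0:width]
    grid = np.stack([(xs + 0.5) / width, (ys + 0.5) / height], axis=-1)  # (H, W, 2)
    sal = np.zeros((height, width))
    for mu, cov, w in zip(means, covs, weights):
        diff = grid - mu                                  # offset from the mean
        inv_cov = np.linalg.inv(cov)
        mahal = np.einsum("hwi,ij,hwj->hw", diff, inv_cov, diff)
        norm = 2.0 * np.pi * np.sqrt(np.linalg.det(cov))
        sal += w * np.exp(-0.5 * mahal) / norm            # mixture density
    return sal / sal.max()                                # normalize for display

# Example: one central blob and one off-centre blob.
smap = gmm_saliency_map(means=np.array([[0.5, 0.5], [0.2, 0.7]]),
                        covs=np.array([np.eye(2) * 0.01, np.eye(2) * 0.02]),
                        weights=np.array([0.7, 0.3]),
                        height=240, width=320)
```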

Year of completion: May 2021
Advisor: Vineet Gandhi

Retinal Image Synthesis


Anurag Anil Deshmukh

Abstract

Medical imaging has been aiding the diagnosis and treatment of diseases by creating visual representations of the interior of the human body. Experts hand-mark these images for abnormalities and diagnosis, and supplementing experts with such rich visualizations has enabled detailed clinical analysis and rapid medical intervention. However, deep learning-based methods rely on large volumes of training data. Procuring data for medical imaging applications is especially difficult because abnormal cases are, by definition, rare and the data generally requires expert labelling. For deep learning algorithms, data with high class imbalance or insufficient variability leads to poor classification performance. Thus, alternative approaches, such as using generative modelling to artificially generate more data, have been of interest. Most of these are GAN-based approaches [11]. While they can help with data imbalance, they still require a lot of data to generate realistic images. Additionally, many of these methods have been demonstrated on natural images, which are relatively noise-free and where small artifacts are less damaging. This thesis therefore aims to provide synthesis methods that overcome the limitations of small datasets and noisy image profiles. We do this for two modalities: fundus imaging and Optical Coherence Tomography (OCT).

Firstly, we present a fundus image synthesis method aimed at providing paired optic cup and image data for optic cup (OC) segmentation. The synthesis method works well on small datasets by minimising the information to be learnt: it leverages domain-specific knowledge and provides most of the structural information to the network. We demonstrate this method's advantages over a more direct synthesis method and show how leveraging domain-specific knowledge yields higher quality images and annotations. Including the generated images and their annotations in the training of an OC segmentation model significantly improved performance, demonstrating their reliability. Secondly, we present a novel unpaired image-to-image translation method that can introduce an abnormality (drusen) into OCT images while avoiding artifacts and preserving the noise profile. Comparison with other state-of-the-art image-to-image translation methods shows that our method is significantly better at preserving the noise profile and at generating morphologically accurate structures.
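
For context on the unpaired translation setting, the snippet below sketches the generic adversarial plus cycle-consistency objective used by standard unpaired image-to-image methods (CycleGAN-style). The thesis's noise-preserving approach adds its own constraints on top of this family of ideas; the generator and discriminator modules here are user-supplied placeholders.

```python
import torch
import torch.nn.functional as F

def unpaired_translation_losses(G_ab, G_ba, D_b, real_a, lam_cycle=10.0):
    """One direction (A -> B, e.g. healthy OCT -> OCT with drusen) of a generic
    CycleGAN-style objective: an LSGAN adversarial term plus a cycle-consistency
    term that encourages content (and, loosely, the noise profile) to survive the
    round trip. G_ab, G_ba, D_b are placeholder torch.nn.Module instances."""
    fake_b = G_ab(real_a)            # translate A -> B
    rec_a = G_ba(fake_b)             # map back to A
    d_out = D_b(fake_b)
    adv_loss = F.mse_loss(d_out, torch.ones_like(d_out))   # fool the B-domain critic
    cycle_loss = F.l1_loss(rec_a, real_a)                   # reconstruct the input
    return adv_loss + lam_cycle * cycle_loss
```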

Year of completion: April 2021
Advisor: Jayanthi Sivaswamy
