
Towards Understanding and Improving the Generalization Performance of Neural Networks


Sarath Sivaprasad

Abstract

The widespread popularity of over-parameterized deep neural networks (NNs) is backed by their ‘unreasonable’ performance on unseen data that is independent and identically distributed (IID) with respect to the train data. The generalization of NNs cannot be explained by the traditional machine learning wisdom that increasing the number of parameters leads to overfitting on train samples and, subsequently, reduced generalization. Various generalization measures have been proposed in recent times to explain the generalization of deep networks, but despite some promising investigations, there is little consensus on how the generalization of NNs can be explained. Furthermore, the ability of neural networks to fit any train data under any random configuration of labels makes their generalization performance even harder to explain. Despite this ability to completely fit any given data, neural networks seem to ‘cleverly’ learn a generalizing solution. We hypothesize that this ‘simple’ solution lies in a constrained subspace of the hypothesis space, and we propose a constrained formulation of neural networks to close the generalization gap. We show that, through a principled constraint, we can achieve comparable train and test performance for neural networks: we constrain each output of the neural network to be a convex function of its inputs, which ensures a desirable geometry of the decision boundaries. This document covers two major aspects. The first shows the improved generalization of neural networks under convexity constraints; the second goes beyond the IID setting and investigates the generalization of neural networks on out-of-distribution (OOD) test sets.

In the first section of the document, we investigate the constrained formulation of neural networks where the output is a convex function of the input.
We show that the convexity constraints can be enforced on both fully connected and convolutional layers, making them applicable to most architectures. The constraints amount to restricting the weights of all but the first layer to be non-negative and using a non-decreasing convex activation function. Albeit simple, these constraints have profound implications for the generalization abilities of the network. We draw three valuable insights: (a) Input Output Convex Neural Networks (IOC-NNs) self-regularize and significantly reduce the problem of overfitting; (b) although heavily constrained, they outperform the base multi-layer perceptrons and achieve performance similar to the base convolutional architectures; and (c) IOC-NNs show robustness to noise in train labels. We demonstrate the efficacy of the proposed idea through thorough experiments and ablation studies on six commonly used image classification datasets with three different neural network architectures.

In the second section, we revisit the ability of networks to completely fit any given data and yet ‘cleverly’ learn a generalizing hypothesis from the many sources of variance that can explain the train data. In accordance with concurrent findings, our explorations show that neural networks learn the most ‘low-lying’ variance in the data: they learn the features that most easily correlate with the label and do no further exploration after finding such a solution. With this insight, we re-examine the need to understand and improve the generalization of neural networks. We go beyond traditional IID and OOD evaluation benchmarks to further our understanding of learning in deep networks. Through our explorations, we offer a possible explanation of why neural networks do well on certain benchmarks and why other inventive methods fail to give any consistent improvement over a simple neural network.
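As a concrete illustration of the convexity recipe from the first part (non-negative weights in all but the first layer, plus a convex, non-decreasing activation such as ReLU), here is a minimal NumPy sketch of a toy input-output convex network. The layer sizes and the squaring reparameterization used to keep weights non-negative are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # ReLU is convex and non-decreasing, as the constraint requires.
    return np.maximum(z, 0.0)

# A toy 3-layer scalar-output MLP. Only W1 may have arbitrary sign;
# W2 and W3 are forced non-negative by squaring a free parameter.
W1 = rng.normal(size=(16, 2)); b1 = rng.normal(size=16)
W2 = rng.normal(size=(16, 16)) ** 2; b2 = rng.normal(size=16)
W3 = rng.normal(size=(1, 16)) ** 2; b3 = rng.normal(size=1)

def ioc_forward(x):
    h = relu(W1 @ x + b1)    # affine in x, then a convex activation
    h = relu(W2 @ h + b2)    # non-negative weights preserve convexity
    return (W3 @ h + b3)[0]  # non-negative combination of convex functions

# Sanity check: f(0.5*(x1+x2)) <= 0.5*(f(x1)+f(x2)) for random pairs.
for _ in range(100):
    x1, x2 = rng.normal(size=2), rng.normal(size=2)
    mid = ioc_forward(0.5 * (x1 + x2))
    assert mid <= 0.5 * (ioc_forward(x1) + ioc_forward(x2)) + 1e-9
```

The midpoint check passes for any such network: a non-negative combination of convex functions composed with a convex non-decreasing activation remains convex in the input.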
Domain Generalization (DG) requires a model to learn, from multiple distributions, a hypothesis that generalizes to an unseen distribution. DG has been perceived as a front face of OOD generalization. We present empirical evidence that the primary reason for generalization in DG is the presence of multiple domains during training. Furthermore, we show that methods for generalization in the IID setting are equally important for generalization in DG: tailored methods fail to add performance gains under the Traditional DG (TDG) evaluation. Our experiments prompt the question of whether TDG has outlived its usefulness in evaluating OOD generalization. To further strengthen our investigation, we propose a novel evaluation strategy, ClassWise DG (CWDG), where, for each class, we randomly select one of the domains and keep it aside for testing. We argue that this benchmarking is closer to human learning and relevant in real-world scenarios. Counter-intuitively, despite the model being exposed to all domains during training, CWDG is more challenging than TDG evaluation. While explaining these observations, our work makes a case for more fundamental analysis of the DG problem before exploring new ideas to tackle it.

Keywords – generalization of deep networks, constrained formulation, input-output-convex neural network, robust generalization bounds, explainable decision boundaries, mixture of experts
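The CWDG protocol described above (for every class, one randomly chosen domain is held out for testing) can be sketched as follows; the representation of data as (domain, label, sample) triples and the function name are hypothetical:

```python
import random

def classwise_dg_split(samples, seed=0):
    """Split (domain, label, x) triples per the CWDG protocol:
    for each class, one randomly chosen domain is held out for testing."""
    rng = random.Random(seed)
    domains = sorted({d for d, _, _ in samples})
    classes = sorted({y for _, y, _ in samples})
    held_out = {y: rng.choice(domains) for y in classes}  # class -> test domain
    train = [s for s in samples if s[0] != held_out[s[1]]]
    test = [s for s in samples if s[0] == held_out[s[1]]]
    return train, test, held_out
```

Unlike TDG, every domain may still appear during training (through the other classes), yet each class is evaluated on a domain never seen for that class.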

Year of completion: May 2022
Advisor: Vineet Gandhi

Related Publications


    Downloads

    thesis

Exploiting Cross-Modal Redundancy for Audio-Visual Generation


Sindhu B Hegde

Abstract

We interact with the world around us through multiple sensory streams of information such as audio, vision, and text (language). These streams complement each other, but also contain redundant information, albeit in different forms. For example, the content of a person speaking can be captured by listening to the sounds in the speech, partially understood by looking at the speaker’s lip movements, or read from a transcription of the vocal speech. This redundancy across modalities is exploited in human perceptual understanding and helps us solve various practical problems. In the real world, however, information in individual streams is often corrupted by various types of degradation, such as electronic transmission, background noise, and blurring, which deteriorate the content quality. In this work, we aim to recover the distorted signal in a given stream by exploiting the redundant information in another stream. Specifically, we deal with talking-face videos involving vision and speech signals. We propose two core ideas to exploit cross-modal redundancy: (i) denoising speech using visual assistance, and (ii) upsampling very low-resolution talking-face videos using audio assistance.

The first part focuses on the task of speech denoising. We show that the visual stream helps in distilling the clean speech from the corrupted signal by suppressing the background noise. We identify the key issues in existing state-of-the-art speech enhancement works: (i) most current works use only the audio stream and are limited in their performance across a wide range of real-world noises, and (ii) a few recent works use lip movements as additional cues to improve the quality of the generated speech over “audio-only” methods; however, they cannot be applied in several settings where the visual stream is unreliable or completely absent.
Thus, in this work, we propose a new paradigm for speech enhancement: a “pseudo-visual” approach, where the visual stream is synthetically generated from the noisy speech input. We demonstrate that the robustness and accuracy boost obtained from our model enable various real-world applications that were previously not possible.

In the second part, we explore an interesting question: what can be obtained from an 8 × 8 pixel video sequence by utilizing the corresponding speech of the person talking? Surprisingly, it turns out to be quite a lot. We show that, when processed with the right set of audio and image priors, we can obtain a full-length talking video sequence at a 32× scale factor. Even when the semantic information about the identity, including basic attributes like age and gender, is almost entirely lost in the low-resolution input, we show that the accompanying speech aids in recovering the key face attributes. Our proposed audio-visual upsampling network generates realistic, accurate, high-resolution (256 × 256 pixels) talking-face videos from an 8 × 8 input video. Finally, we demonstrate that our model can be utilized in video conferencing applications, where network bandwidth consumption can be drastically reduced. We hope that our work on cross-modal content recovery enables exciting applications such as smoother video calling, accessibility of video content in low-bandwidth situations, and restoration of old historical videos, and paves the way for future research on cross-modal enhancement of talking-face videos.

Year of completion: June 2022
Advisors: C V Jawahar, Vinay P Namboodiri

Related Publications


    Downloads

    thesis

Computer-Aided Diagnosis of Closely Related Diseases


Abhinav Dhere

Abstract

It is often observed that certain human diseases exhibit similarities in some form while having different prognoses and requiring different treatment strategies. These similarities may be in the risk factors for the diseases, the symptoms observed, visual similarity in imaging studies, or, in some cases, similarity in molecular associations. Computer-Aided Diagnosis (CAD) of closely related diseases is challenging and requires tailored approaches to discriminate between such diseases accurately. This thesis looks at two sets of closely related diseases of two different organs, identified from two different modalities, and develops novel approaches to achieve explainable and accurate CAD for each. The two problems are the discrimination of healthy, mild cognitive impairment (MCI), and Alzheimer’s Disease (AD) cases from brain MRI-derived surface meshes, and the classification of healthy, non-COVID pneumonia, and COVID cases from chest X-ray images.

In the first part of this thesis, we present a novel 2D image representation of the brain mesh surface, called a height map. Further, we explore the use of height maps for the hierarchical classification of healthy, MCI, and AD cases. We also compare different strategies for extracting features and regions of interest from height maps and their performance on healthy vs. MCI vs. AD classification. We demonstrate that the proposed method achieves fast classification of AD and MCI with a minor loss of accuracy compared to the state of the art.

In the second half of this thesis, we present a novel deep learning architecture called Multi-scale Attention Residual Learning (MARL) and a new conicity loss for training it. We utilize MARL and the conicity loss to achieve hierarchical classification of normal, non-COVID pneumonia, and COVID pneumonia from chest X-ray images.
We present classification results on three public datasets and demonstrate that the proposed method achieves comparable or marginally better performance than the state of the art in all cases. Further, we demonstrate through extensive experimentation that the proposed framework produces clinically consistent explanations. Qualitatively, this is shown by comparing GradCAM heatmaps for the proposed method to those for the state-of-the-art method: our heatmaps overlap better with the expert-marked bounding boxes for pneumonia than those of the state-of-the-art method do. Quantitatively, we show that the GradCAM heatmaps for the proposed method generally lie within the inner regions of the lung for non-COVID pneumonia, whereas they lie in the outer regions for COVID pneumonia. Thus, we establish the clinical consistency of the explanations provided by the proposed framework.
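One simple way to quantify the kind of heatmap localization described above is the fraction of total GradCAM activation that falls inside an expert-marked bounding box (or an inner/outer lung region). This helper is an illustrative assumption, not the exact metric used in the thesis:

```python
import numpy as np

def heatmap_mass_in_box(heatmap, box):
    """Fraction of total heatmap activation inside a (r0, r1, c0, c1) box."""
    r0, r1, c0, c1 = box
    total = heatmap.sum()
    if total == 0:
        return 0.0
    return float(heatmap[r0:r1, c0:c1].sum() / total)
```

A heatmap concentrated inside the expert's box scores near 1.0; one that fires elsewhere scores near 0.0, giving a simple scalar for the overlap comparisons above.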

Year of completion: June 2022
Advisor: Jayanthi Sivaswamy

Related Publications


    Downloads

    thesis

Deep Learning for Assisting Annotation and Studying Relationship between Cancers using Histopathology Whole Slide Images


Ashish Menon

Abstract

Cancer has been a leading cause of death across the globe, and there has been a constant push from the scientific community towards assisting its diagnosis. The last decade in particular has seen widespread use of computer vision and AI for cancer diagnosis using both radiological (non-invasive, e.g., X-ray, CT) and pathological (invasive, e.g., histopathology) modalities. A histopathology Whole Slide Image (WSI) is a digitized image of a tissue sample, characterized by a large size of up to 10^9 pixels at maximum resolution, and is considered the gold standard for cancer diagnosis. Routine diagnosis involves experts, called pathologists, analysing a slide containing the tissue sample under a microscope. The process is subject to cognitive load, and the diagnosis is prone to inter- and intra-pathologist errors. With the digitization of tissue samples as WSIs, computer-assisted diagnosis can address these issues, especially with the advent of deep learning. This, however, requires models trained on large amounts of annotated data, as well as an understanding of how cancer manifests across organs. In this thesis, we address two major issues with the help of deep learning techniques: (1) assisting whole slide image annotation with an expert in the loop, and (2) understanding the relationship between cancers and bringing to light commonalities in cancer patterns between certain pairs of organs.

A typical slide diagnosis under a microscope involves exhaustive analysis, scanning across the slide in search of anomalous or tumorous regions. Owing to the large dimensions of a histopathology WSI, visually searching for clinically significant regions (patches) is a tedious task for a medical expert, and sequential analysis of several such images further increases the workload, resulting in poor diagnosis.
A major impediment to automating this task with a deep learning model is the requirement of large amounts of annotated WSI patches, as annotation is laborious and involves an exhaustive search for anomalous regions. To tackle this, the first part of the thesis proposes a novel CNN-based, expert-feedback-driven interactive learning technique. The proposed method acquires labels for the most informative patches in small increments over multiple feedback rounds to maximize throughput. The expert queries a patch of interest from a slide and provides feedback on a set of unlabelled patches chosen from a ranked list using the proposed sampling strategy. The technique is applied in a setting that assumes a large cohort of unannotated slides, almost eliminating the need for annotated data upfront; instead, the model learns with expert involvement. We discuss several strategies for sampling the right set of patches to be labelled, so as to minimise the expert feedback and maximise the throughput. The proposed technique can also annotate multiple slides in parallel using a single slide under review (used to query anomalous patches), which further reduces the annotation effort.

The Cancer Genome Atlas (TCGA) contains large repositories of histopathology whole slide images spanning several organs and subtypes. However, little work has gone into analysing all the organs and subtypes and their similarities. Our work attempts to bridge this gap by training deep learning models to classify cancer vs. normal patches for 11 subtypes spanning 7 organs (9792 tissue slides), achieving near-perfect classification performance. We used these models to investigate their performance on the test sets of other organs (cross-organ inference) and found that every model had good cross-organ inference accuracy when tested on breast, colorectal, and liver cancers.
Further, high accuracy is observed between models trained on cancer subtypes originating from the same organ (kidney and lung). We also validated these results by showing the separability of cancer and normal samples in a high-dimensional feature space. We further hypothesized that the high cross-organ inference accuracies are due to tumor morphologies shared among organs, and validated this hypothesis by showing the overlap in Gradient-weighted Class Activation Mapping (GradCAM) visualizations and the similarity in the distributions of geometrical features of nuclei within the high-attention regions.
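The ranking step of the feedback loop described earlier (ordering unlabelled patches by similarity to the expert's queried patch) might be sketched with cosine similarity over CNN patch features; the function name and the choice of cosine similarity are illustrative assumptions, not the thesis's exact sampling strategy:

```python
import numpy as np

def rank_patches(query_feat, unlabeled_feats):
    """Rank unlabelled patch features by cosine similarity to the query patch."""
    q = query_feat / np.linalg.norm(query_feat)
    u = unlabeled_feats / np.linalg.norm(unlabeled_feats, axis=1, keepdims=True)
    sims = u @ q                # cosine similarity of each patch to the query
    order = np.argsort(-sims)   # indices sorted most-similar first
    return order, sims[order]
```

In each feedback round, the expert would label only the top of this ranked list, so that a few labels propagate to many visually similar patches.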

Year of completion: June 2022
Advisors: C V Jawahar, Vinod P K

Related Publications


    Downloads

    thesis

Bounds on Generalization Error of Variational Lower Bound Estimates of Mutual Information and Applications of Mutual Information


P Aditya Sreekar

Abstract

Machine learning techniques have found widespread usage in critical systems in the recent past. This rise of ML was mainly fueled by neural networks (NNs), which enabled end-to-end systems thanks to their feature learning and universal approximation properties. Recent research has extended the application of NNs to the estimation of mutual information (MI) from samples: a critic is constructed using a NN, and a variational lower bound of MI is maximized. These methods have alleviated many problems faced by previous MI estimation methods, such as the inability to scale and incompatibility with mini-batch-based optimization. While these estimators are reliable when the true MI is low, they tend to produce high-variance estimates when the true MI is high. We argue that the high-variance characteristics are due to the uncontrolled complexity of the critic’s hypothesis space. In support of this argument, we use the data-driven Rademacher complexity of the hypothesis space associated with the critic’s architecture to derive upper bounds on the generalization error of variational-lower-bound-based estimates of MI. Using the derived bounds, we show that the Rademacher complexity of the critic’s hypothesis space is a major contributor to the generalization error. Inspired by this, we propose to negate the high-variance characteristics of these estimators by constraining the critic’s hypothesis space to a Reproducing Kernel Hilbert Space (RKHS) corresponding to a kernel learned using Automated Spectral Kernel Learning (ASKL). Further, we propose two regularisation terms by examining the upper bound on the Rademacher complexity of the RKHS learned by ASKL. The weights of these regularization terms can be varied for a bias-variance tradeoff.
We empirically demonstrate the efficacy of this regularization in enforcing a proper bias-variance tradeoff on four different variational lower bounds of MI, namely NWJ, MINE, JS, and SMILE.

We also propose a novel application of MI estimation using variational lower bounds and NNs: learning disentangled representations of videos. The learned representations contain two sets of features, content and pose representations, which disjointly represent the content and the movement in the video. This is achieved by minimizing the mutual information between pose representations of different frames from the same video, in effect forcing the content representation to capture the information common between different frames. We qualitatively and quantitatively compare our method against DRNET [24] on two synthetic datasets and one real-world dataset.
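For reference, the Donsker-Varadhan bound underlying MINE-style estimators, E_p[T] - log E_q[e^T] <= MI, can be computed directly for a fixed hand-picked critic on correlated Gaussians; no critic training is done here, and the quadratic critic and its scale are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 20000, 0.9

# Correlated Gaussian pair; true MI = -0.5 * log(1 - rho^2) ≈ 0.83 nats.
x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)
y_shuf = rng.permutation(y)  # shuffled pairs approximate the product of marginals

def critic(x, y, alpha=0.5):
    # A fixed quadratic critic; in MINE a NN critic is trained to maximize the bound.
    return alpha * x * y

def dv_bound(t_joint, t_marg):
    # Donsker-Varadhan lower bound: E_p[T] - log E_q[exp(T)]
    return t_joint.mean() - np.log(np.mean(np.exp(t_marg)))

est = dv_bound(critic(x, y), critic(x, y_shuf))
# The bound is valid for any critic, so the estimate stays below the true MI.
assert 0.0 < est < -0.5 * np.log(1 - rho**2)
```

With a trained NN critic the bound tightens towards the true MI, which is exactly where the uncontrolled critic complexity discussed above produces high-variance estimates.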

Year of completion: June 2022
Advisor: Anoop M Namboodiri

Related Publications


    Downloads

    thesis
