Towards Label Free Few Shot Learning: How Far Can We Go?
Aditya Bharti
Abstract
Deep learning frameworks have consistently pushed the state of the art across problem domains such as computer vision and natural language processing. These performance gains have only been possible because of the increasing availability of labeled data and computational resources, which makes applying such systems in low-data regimes extremely challenging. Computationally simple systems that remain effective with limited data are essential to the continued proliferation of deep neural networks into new problem spaces. In addition, generalizing from limited data is a crucial step toward more human-like machine intelligence.

Reducing the label requirement is an active and worthwhile area of research, since collecting large amounts of high-quality annotated data is labor-intensive and, depending on the domain, often impossible. There are several approaches to this problem: artificially generating extra labeled data, using existing information (other than labels) as supervisory signals for training, and designing pipelines that learn from only a few samples. We focus our efforts on the last class of approaches, which aims to learn from limited labeled data and is known as Few-Shot Learning. Few-shot learning systems aim to generalize to novel classes given very few novel examples, usually one to five. Conventional few-shot pipelines use labeled data from the training set to guide training and then aim to generalize to the novel classes, which have limited samples. However, such approaches only shift the label requirement from the novel dataset to the training dataset. In low-data regimes, where labeled data is scarce, it may not be possible to obtain enough training samples.

Our work alleviates this label requirement by using no labels during training, and we examine how much performance is achievable overall with extremely simple pipelines. Our contributions are hence twofold. (i) We present a more challenging label-free few-shot learning setup and examine how much performance can be squeezed out of a system without labels. (ii) We propose a computationally and conceptually simple pipeline to tackle this setting. We address both the compute and data requirements by leveraging self-supervision for training and image similarity for testing.
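To make the high-level pipeline concrete, the following is a minimal sketch of the image-similarity testing step only, assuming an encoder already pretrained without labels (e.g., with a self-supervised objective). All names here (encoder, few_shot_predict, the tensor shapes) are illustrative assumptions and not the thesis implementation: queries in an N-way K-shot episode are assigned to the class whose support centroid is most similar under cosine similarity.

# Minimal sketch (assumption, not the thesis code): label-free few-shot
# evaluation by image similarity. A self-supervised encoder embeds images;
# each query gets the label of the most similar support-class centroid.
import torch
import torch.nn.functional as F

def embed(encoder: torch.nn.Module, images: torch.Tensor) -> torch.Tensor:
    """Embed images and L2-normalize so dot products are cosine similarities."""
    with torch.no_grad():
        feats = encoder(images)                       # (B, D)
    return F.normalize(feats, dim=-1)

def few_shot_predict(encoder, support_images, support_labels, query_images, n_way):
    """N-way K-shot episode: nearest class centroid in embedding space.

    support_images: (n_way * k_shot, C, H, W), support_labels: (n_way * k_shot,)
    query_images:   (Q, C, H, W). Returns predicted labels of shape (Q,).
    """
    support = embed(encoder, support_images)          # (N*K, D)
    queries = embed(encoder, query_images)            # (Q, D)
    # Average the support embeddings of each class into a centroid.
    centroids = torch.stack(
        [support[support_labels == c].mean(dim=0) for c in range(n_way)]
    )
    centroids = F.normalize(centroids, dim=-1)        # (N, D)
    sims = queries @ centroids.T                      # cosine similarities (Q, N)
    return sims.argmax(dim=-1)

# Toy usage with a random linear "encoder" standing in for a pretrained backbone.
if __name__ == "__main__":
    encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64))
    support_x = torch.randn(5 * 1, 3, 32, 32)         # 5-way 1-shot support set
    support_y = torch.arange(5)
    query_x = torch.randn(10, 3, 32, 32)
    print(few_shot_predict(encoder, support_x, support_y, query_x, n_way=5))

Because classification reduces to similarity search in a fixed embedding space, no labels are touched during training and no fine-tuning is needed at test time, which keeps the compute cost low.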
Year of completion: January 2024
Advisors: C V Jawahar, Vineeth Balasubramanian