Learning without exhaustive supervision
Abstract:
Recent progress in visual recognition can be attributed to large datasets and high-capacity learning models. Unfortunately, these data-hungry models tend to be supervision-hungry as well. In my research, I focus on algorithms that learn from large amounts of data without exhaustive supervision. The key to making algorithms "supervision efficient" is to exploit the structure or prior properties available in the data or labels, and to model them in the learning algorithm. In this talk, I will focus on three ways of getting around exhaustive annotation: 1) finding structure and natural supervision in data to reduce the need for manual labels: unsupervised and semi-supervised learning from video [ECCV'16, CVPR'15]; 2) sharing information across tasks so that tasks that are easier or "free" to label can help other tasks [CVPR'16]; and 3) finding structure in the labels and the labeling process so that one can utilize labels in the wild [CVPR'16].
Bio:
Ishan Misra is a PhD student at Carnegie Mellon University, working with Martial Hebert and Abhinav Gupta. His research interests are in Computer Vision and Machine Learning, particularly in visual recognition. Ishan received his BTech in Computer Science from IIIT-Hyderabad, where he worked with PJ Narayanan. He received the Siebel fellowship in 2014, and has spent two summers as an intern at Microsoft Research, Redmond.
