Building Maximal Vision Systems with Minimal Resources

Abstract:

Current vision and robotic systems are like the mainframe machines of the 1960s -- they require extensive resources: (1) dense data capture and massive human annotation, (2) large parametric models, and (3) intensive computational infrastructure. I build systems that learn directly from sparse and unconstrained real-world samples with minimal resources: they require limited or no supervision, use simple and efficient models, and run on everyday computational devices. Building systems with minimal resources allows us to democratize them for non-experts. My work has impacted important areas such as virtual reality, content creation, and audio-visual editing, and has helped provide a natural voice to speech-impaired individuals.

In my talk, I will present my efforts to build vision systems for novel view synthesis. I will discuss Neural Pixel Composition, a novel approach to continuous 3D-4D view synthesis that operates reliably on sparse, wide-baseline multi-view images and videos, and that can be trained within a few minutes for high-resolution (12 MP) content using 1 GB of GPU memory. I will then present my efforts to build vision systems for unsupervised audio-visual synthesis, focusing on Exemplar Autoencoders, which enable zero-shot audio-visual retargeting. Exemplar Autoencoders are built on two remarkably simple insights: (1) autoencoders project out-of-sample data onto the distribution of the training set; and (2) exemplar learning captures the voice, stylistic prosody (emotion and ambiance), and visual appearance of the target. These properties enable an autoencoder trained on a single individual's voice to generalize to unknown voices in different languages. Exemplar Autoencoders can synthesize natural voices for speech-impaired individuals and perform zero-shot multilingual translation.
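The first insight above can be illustrated in a toy setting: a linear autoencoder is equivalent to PCA, so decoding an encoded input projects any point, including an out-of-sample one, onto the principal subspace of the training data. The sketch below is purely illustrative (the data, the closed-form linear model, and all names are my own assumptions standing in for the deep exemplar autoencoders discussed in the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Training set: 2-D points lying (noisily) along the direction (1, 1).
direction = np.array([1.0, 1.0]) / np.sqrt(2.0)
train = rng.normal(size=(500, 1)) * direction + 0.01 * rng.normal(size=(500, 2))

# "Train" a linear autoencoder with a 1-D bottleneck in closed form:
# the optimal encoder/decoder weight is the top principal axis (via SVD).
_, _, vt = np.linalg.svd(train - train.mean(axis=0))
w = vt[0]

def autoencode(x):
    # Encode to the 1-D code, then decode back to 2-D:
    # this is exactly a projection onto the training subspace.
    return (x @ w) * w

# An out-of-sample input far from the training distribution...
x_out = np.array([3.0, -1.0])
x_rec = autoencode(x_out)

# ...is reconstructed onto the training subspace (the (1, 1) line).
dist_to_line = abs(x_rec[0] - x_rec[1]) / np.sqrt(2.0)
```

In the linear case the projection is exact; the abstract's claim is that deep autoencoders trained on one exemplar behave analogously, pulling unseen voices onto that exemplar's distribution.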

Bio:

Aayush Bansal is currently a short-term research scientist at Reality Labs Research, Meta Platforms, Inc. He received his Ph.D. in Robotics from Carnegie Mellon University under the supervision of Prof. Deva Ramanan and Prof. Yaser Sheikh. He was a Presidential Fellow at CMU and a recipient of the Uber Presidential Fellowship (2016-17), the Qualcomm Fellowship (2017-18), and the Snap Fellowship (2019-20). His research has been covered by national and international media outlets such as NBC, CBS, WQED, 90.5 WESA FM, France TV, and Journalist. He has also worked with production houses such as BBC Studios and Full Frontal with Samantha Bee (TBS). More details are available on his webpage: https://www.aayushbansal.xyz/