Tutorial I
Title: Computer Vision and Navigation
Sub Title: Learning-based techniques for visually-guided robotic manipulation and navigation

Saurabh Gupta
University of Illinois at Urbana–Champaign
18th December, 2018

Bio: Saurabh Gupta will start as an Assistant Professor at UIUC in Fall 2019. In the meantime, he is a Research Scientist at Facebook AI Research in Pittsburgh, working with Prof. Abhinav Gupta. Before that, he was a Computer Science graduate student at UC Berkeley, where he was advised by Prof. Jitendra Malik, and earlier still an undergraduate at IIT Delhi, where he majored in Computer Science and Engineering.

Abstract: In this tutorial, I will talk about recent advances in learning-based techniques for visually-guided robotic manipulation and navigation. The first part of the tutorial will present a general overview of three aspects: a) how to formulate robotic tasks as learning problems, b) how to gather data for such tasks, and c) how to improve the sample efficiency of algorithms for learning models. The second part of the tutorial will summarize recent papers on visual navigation to demonstrate how the presented ideas are used in practice.

Sub Title: Challenges and Advances in Vision-Based Self-Driving

Manmohan Chandraker
University of California San Diego
18th December, 2018

Bio: Manmohan Chandraker is an assistant professor in the CSE department of the University of California, San Diego and heads computer vision research at NEC Labs America. He received a PhD from UCSD and was a postdoctoral scholar at UC Berkeley. His research interests are in computer vision, machine learning and graphics-based vision, with applications to autonomous driving and human-computer interfaces. His work has received the Marr Prize Honorable Mention for Best Paper at ICCV 2007, the 2009 CSE Dissertation Award for Best Thesis at UCSD, a PAMI special issue on best papers of CVPR 2011, the Best Paper Award at CVPR 2014, the 2018 NSF CAREER Award and the 2018 Google Daydream Research Award. He has served as an Area Chair at CVPR, ICCV, ICVGIP and AAAI.

Abstract: Modern autonomous navigation systems rely on a range of sensors, including radar, ultrasound, cameras and LIDAR. Active sensors such as radar are primarily used for detecting traffic participants (TPs) and measuring their distance. More expensive LIDAR sensors are used for estimating both traffic participants and scene elements (SEs). However, camera-based systems have the potential to achieve the same capabilities at a much lower cost, while enabling new ones such as determining TP and SE types as well as their interactions in complex traffic scenes.

This tutorial will cover several technical challenges and advances in vision-based autonomous driving. Distance estimation is a key challenge when limited to passive sensors like cameras, which we overcome with deep metric learning-based approaches that yield highly accurate measurements. Object detection needs to be lightweight yet highly accurate, for which we present a knowledge distillation framework. Driving decisions are also influenced by fine details of object parts (such as whether the door of a parked car is open), which we obtain through a deep supervision framework that is trained purely on synthetic data and reasons even about invisible parts. To achieve scalable outcomes across diverse weather, lighting and geographies without the burden of additional data labeling, we propose novel unsupervised domain adaptation methods for semantic segmentation. Human drivers are adept at reasoning about occlusions, an ability we mimic through a deep learning framework that estimates the entire scene layout, including regions hidden behind foreground objects. Safe driving requires long-term prediction of diverse future paths given the same past while accounting for interactions, which we achieve through novel generative path prediction methods that yield both high precision and recall.
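As background for the knowledge distillation idea mentioned in the abstract, the sketch below shows the standard distillation loss (Hinton et al.): a temperature-softened teacher/student KL term blended with the usual hard-label cross-entropy. The function names, temperature T and weight alpha here are illustrative assumptions, not the tutorial's actual implementation.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T gives softer probabilities."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, true_label, T=4.0, alpha=0.7):
    # Soft term: KL divergence between softened teacher and student outputs,
    # scaled by T^2 so gradients stay comparable across temperatures.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = float(np.sum(p_t * (np.log(p_t) - np.log(p_s)))) * T * T
    # Hard term: standard cross-entropy against the ground-truth label.
    hard = -float(np.log(softmax(student_logits)[true_label]))
    return alpha * soft + (1 - alpha) * hard
```

A lightweight student trained with this combined objective can approach the accuracy of a larger teacher detector's backbone classifier at a fraction of the cost.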

Tutorial II
Title: Implementing DCNNs

Carlos Castillo
University of Maryland Institute for Advanced Computer Studies
21st December, 2018

Bio: Carlos D. Castillo (Ph.D., UMD, 2012) is a research scientist at the University of Maryland Institute for Advanced Computer Studies (UMIACS). He received his Ph.D. under the supervision of Dr. David Jacobs. He was the recipient of the Best Paper Award at the International Conference on Biometrics: Theory, Applications and Systems (BTAS) 2016. His current research interests include face detection and recognition, stereo matching and deep learning. His technologies provided the technical foundations for a startup company, Mukh Technologies LLC, which creates software for face detection, alignment and recognition.

Abstract: This will be a tutorial on deep convolutional neural networks (DCNNs) in computer vision. The core concepts of deep learning and backpropagation will be covered. It will be a hands-on tutorial (a GPU-equipped laptop is required) focusing on core computer vision tasks, and it will be taught in PyTorch. The following topics will be covered: (1) AlexNet, VGG and ResNets, (2) classification and regression tasks and examples, (3) detection tasks, (4) video activity detection and (5) segmentation, edge detection and other pixel-to-pixel tasks.
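As a preview of the backpropagation material, the sketch below trains a tiny two-layer network by applying the chain rule by hand on a toy regression task (learning y = 2x). The architecture, data and hyperparameters are illustrative; the tutorial itself uses PyTorch, whose autograd automates exactly these gradient computations.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two-layer network: 1 input -> 8 hidden (ReLU) -> 1 output.
W1 = rng.normal(0.0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

x = rng.uniform(-1.0, 1.0, (64, 1))
y = 2.0 * x  # target function

lr = 0.1
for _ in range(800):
    # Forward pass
    h_pre = x @ W1 + b1
    h = np.maximum(h_pre, 0.0)        # ReLU
    y_hat = h @ W2 + b2
    # Gradient of mean-squared-error loss w.r.t. y_hat
    g = 2.0 * (y_hat - y) / len(x)
    # Backward pass: chain rule, layer by layer
    dW2 = h.T @ g; db2 = g.sum(0)
    gh = (g @ W2.T) * (h_pre > 0)     # gradient through ReLU
    dW1 = x.T @ gh; db1 = gh.sum(0)
    # Gradient-descent parameter update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

pred = np.maximum(x @ W1 + b1, 0.0) @ W2 + b2
mse = float(np.mean((pred - y) ** 2))
```

In PyTorch, the whole backward pass above collapses to a single `loss.backward()` call, which is the jumping-off point for the tutorial's larger architectures such as AlexNet, VGG and ResNets.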

Tutorial III
Title: MATLAB for CV and ML

About the Tutorial: Researchers from MathWorks will give a detailed overview of implementing and deploying machine learning solutions using MATLAB.