Exploring Data Driven Graphics for Interactivity

Aakash KT


The goal of rendering in computer graphics is the synthesis of photorealistic images from abstract virtual scene descriptions. Among the many rendering methods, path tracing combined with physically accurate material modeling has emerged as the industry standard for photorealistic image synthesis. Path tracing is, however, a computationally expensive operation. Moreover, physically accurate material modeling requires iterative parameter tuning and visualization using path tracing. This quickly becomes a bottleneck for tuning the large number of parameters found in most physically accurate material models, and artists end up spending a lot of time waiting for path tracing to converge. This dissertation proposes to leverage the learning power of neural networks to plausibly approximate path tracing for: (a) quick material visualization while editing, and (b) out-of-the-box material recovery using inverse rendering. The traditional workflow for tuning material parameters and visualization is typically followed by modifications to lighting to better understand the behaviour of the material. We thus incorporate the ability to modify the environment lighting in our neural material visualization framework. This significantly aids in understanding the behaviour of the material, and thereby the process of parameter tuning, which we demonstrate with a user study. Most real-world materials are spatially varying, meaning that their behaviour varies across the surface of the geometry. While previous works in this area dealt with uniform materials, we propose improvements for handling spatially varying materials. We design our neural network architecture to be lightweight, with 10× fewer trainable parameters than the state of the art, while also producing better visualizations. We also build a tool to demonstrate real-time parameter tuning and visualization of materials. Finally, we extend the above work to handle general 3D scenes in specific lighting scenarios.
This is done by training a small fully connected neural network on a simple scene in the local tangent frame. Training in the local tangent frame decouples geometric complexity from training, which means the network generalizes to any geometry. We embed this network in a standard path tracer to evaluate direct illumination at each path vertex. We thus obtain not only noise-free direct illumination but also out-of-the-box gradients of the rendering process, which can be used for inverse rendering. We show that these gradients can be used to plausibly recover the material of a target scene. We conclude by discussing various practical aspects of our method.
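The idea of evaluating a small network per path vertex in the local tangent frame can be illustrated with a minimal sketch. The layer sizes, input encoding (outgoing and incoming directions plus a material parameter vector), and the `TangentFrameNet` / `to_tangent_frame` names below are illustrative assumptions, not the dissertation's actual architecture; the point is only that, because all directions are expressed relative to the shading frame, the network never sees world-space geometry and so generalizes across scenes.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class TangentFrameNet:
    """Tiny fully connected network evaluated at each path vertex.

    Inputs are expressed in the local tangent frame (shading normal = +z),
    decoupling the network from scene geometry. Weights here are random
    placeholders standing in for a trained model.
    """

    def __init__(self, in_dim=8, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, 3))  # RGB radiance output
        self.b2 = np.zeros(3)

    def forward(self, wo, wi, material):
        # wo, wi: unit directions in the tangent frame;
        # material: e.g. roughness/albedo parameters (assumed 2-vector here)
        x = np.concatenate([wo, wi, material])
        h = relu(x @ self.W1 + self.b1)
        return relu(h @ self.W2 + self.b2)  # non-negative radiance

def to_tangent_frame(v, n, t, b):
    """Project a world-space direction v into the frame (tangent, bitangent, normal)."""
    return np.array([v @ t, v @ b, v @ n])

# Pseudo-usage inside a path tracer's shading loop:
net = TangentFrameNet()
n = np.array([0.0, 0.0, 1.0])
t = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
wo = to_tangent_frame(np.array([0.0, 0.6, 0.8]), n, t, b)
wi = to_tangent_frame(np.array([0.6, 0.0, 0.8]), n, t, b)
L_direct = net.forward(wo, wi, np.array([0.5, 0.8]))  # direct-illumination estimate
```

Because every operation above is differentiable (apart from the kink in ReLU at zero), an automatic-differentiation framework could provide gradients of `L_direct` with respect to the material parameters, which is what makes this formulation usable for inverse rendering.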

Year of completion: April 2022
Advisor: P J Narayanan

Related Publications