INR-V: A Continuous Representation Space for Video-based Generative Tasks


Bipasha Sen*1, Aditya Agarwal*1, Vinay P Namboodiri2, and C.V. Jawahar1

1IIIT Hyderabad, India

2University of Bath, UK

* indicates equal contribution

TMLR, 2022

[ Paper ]   | [ Video ] | [ Inference Code ] | [ OpenReview ]

Abstract


Generating videos is a complex task that is typically accomplished by generating a set of temporally coherent images frame-by-frame. This limits the expressivity of videos to image-based operations on individual frames and requires network designs that enforce temporally coherent trajectories in the underlying image space. We propose INR-V, a video representation network that learns a continuous space for video-based generative tasks. INR-V parameterizes each video using an implicit neural representation (INR), a multi-layered perceptron that predicts an RGB value for each input pixel location of the video. The INR is predicted by a meta-network, a hypernetwork trained on the neural representations of multiple video instances. The meta-network can then be sampled to generate diverse novel videos, enabling many downstream video-based generative tasks. Interestingly, we find that conditional regularization and progressive weight initialization play a crucial role in obtaining INR-V. The representation space learned by INR-V is more expressive than an image space, showcasing many interesting properties not possible with existing works. For instance, INR-V can smoothly interpolate intermediate videos between known video instances (such as intermediate identities, expressions, and poses in face videos). It can also inpaint missing portions in videos to recover temporally coherent full videos. In this work, we evaluate the space learned by INR-V on diverse generative tasks such as video interpolation, novel video generation, video inversion, and video inpainting against existing baselines. INR-V significantly outperforms the baselines on several of these demonstrated tasks, clearly showing the potential of the proposed representation space.
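As a hedged illustration of the latent-space interpolation described above (the names below, such as hypernet and render_inr, are hypothetical placeholders and not the paper's API):

import torch

def interpolate_videos(z_a, z_b, hypernet, render_inr, steps=8):
    """Linearly interpolate two video latents and decode every point to a full video."""
    videos = []
    for alpha in torch.linspace(0.0, 1.0, steps):
        z = (1 - alpha) * z_a + alpha * z_b      # intermediate latent point
        theta = hypernet(z)                      # hypernetwork predicts INR weights
        videos.append(render_inr(theta))         # render the INR to a video tensor
    return videos

Because each intermediate latent decodes to a complete video, the interpolated identities, expressions, and poses stay consistent across frames rather than being interpolated frame-by-frame.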
 

Overview

architecture overview

We parameterize videos as functions of space and time using implicit neural representations (INRs). Any point V_{hwt} in a video can be represented by a function f_Θ : {h, w, t} → RGB_{hwt}, where t denotes the t-th frame of the video, h and w denote the spatial location within that frame, and RGB_{hwt} denotes the color at the pixel position {h, w, t}. Consequently, the variable dimensionality of a video (a few million pixels) is reduced to the fixed number of weights Θ (a few thousand) required for the parameterization. A network can then learn a prior over videos in this parameterized space. We obtain this prior through a meta-network that learns a continuous function from a latent space to the reduced parameter space, where each set of parameters maps to a video; a complete video is thus represented by a single latent point. Concretely, we use a hypernetwork as the meta-network and train it on multiple video instances with a distance loss so that it learns a continuous function over the INRs. However, hypernetworks are notoriously unstable to train, especially when parameterizing highly expressive signals such as videos. We therefore propose key prior regularization and a progressive weight initialization scheme that stabilize hypernetwork training and allow it to scale to more than 30,000 videos. The learned prior enables several downstream tasks, such as novel video generation, video inversion, future segment prediction, video inpainting, and smooth video interpolation, directly at the video level. A rough sketch of the two components is given below.
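A minimal PyTorch sketch of the pixel-wise INR and the hypernetwork (layer sizes, activations, and names are illustrative assumptions, not the released implementation):

import torch
import torch.nn as nn

class VideoINR(nn.Module):
    """f_Θ: normalized (h, w, t) coordinates -> RGB; one small MLP per video."""
    def __init__(self, hidden=64, depth=3):
        super().__init__()
        layers, in_dim = [], 3
        for _ in range(depth):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        layers.append(nn.Linear(in_dim, 3))
        self.net = nn.Sequential(*layers)

    def forward(self, coords):                  # coords: (N, 3) = {h, w, t}
        return torch.sigmoid(self.net(coords))  # RGB in [0, 1], shape (N, 3)

class HyperNetwork(nn.Module):
    """Meta-network: maps a per-video latent z to the flattened INR weights Θ."""
    def __init__(self, latent_dim=128, num_inr_params=10_000, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_inr_params),
        )

    def forward(self, z):                       # z: (latent_dim,)
        return self.mlp(z)                      # flattened Θ for one video

In this sketch, training would plug the hypernetwork's predicted Θ into the INR, query the INR at known pixel coordinates {h, w, t}, and back-propagate a distance loss between predicted and ground-truth RGB values through the hypernetwork and the per-video latents.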

 

Qualitative results: comparisons with baselines, video inversion, video inpainting, video super-resolution, and additional samples.

 

Additional Interpolation Results

Interpolation grids between video instances.

 

Citation

@article{sen2022inrv,
   title={{INR}-V: A Continuous Representation Space for Video-based Generative Tasks},
   author={Bipasha Sen and Aditya Agarwal and Vinay P Namboodiri and C.V. Jawahar},
   journal={Transactions on Machine Learning Research},
   year={2022},
   url={https://openreview.net/forum?id=aIoEkwc2oB},
   note={}
} 

 

This website is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Its source code is borrowed from this website.