Sparse Input View Synthesis: 3D Representations and Reliable Priors
Abstract
Novel view synthesis is the problem of synthesizing images of a scene from novel viewpoints, given images captured from a few known viewpoints. It is a fundamental problem in computer vision and graphics, and enables a wide variety of applications such as the metaverse, free-viewpoint viewing of events, video gaming, video stabilization and video compression. Recent 3D representations such as radiance fields and multi-plane images significantly improve the quality of images rendered from novel viewpoints. However, these models require a dense sampling of input views for high-quality renders, and their performance degrades significantly when only a few input views are available. In this thesis, we focus on the sparse input novel view synthesis problem for both static and dynamic scenes.
In the first part of this work, we focus on sparse input novel view synthesis of static scenes using neural radiance fields (NeRF). We study the design of reliable and dense priors to better regularize the NeRF in such situations. In particular, we propose a prior on the visibility of pixels in a pair of input views. We show that this visibility prior, which is related to the relative depth of objects, is dense and more reliable than existing priors on absolute depth. We compute the visibility prior using plane sweep volumes, without needing to train a neural network on large datasets. We evaluate our approach on multiple datasets and show that our model outperforms existing approaches for sparse input novel view synthesis.
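As a rough illustration of the idea (not the exact formulation in the thesis), the sketch below estimates a visibility prior between two calibrated views by sweeping fronto-parallel depth planes: the second image is warped into the first view through plane-induced homographies, and pixels whose best matching error across planes is low are deemed visible in both views. The function names, the nearest-neighbor warp and the error-to-visibility mapping exp(-err / tau) are illustrative choices.

```python
import numpy as np

def warp_with_homography(img, H, out_shape):
    # Backward warp: H maps output-view pixels to source-view pixels
    # (nearest-neighbor sampling for brevity).
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = H @ pts
    src = src[:2] / np.clip(src[2], 1e-8, None)
    sx = np.clip(np.round(src[0]).astype(int), 0, img.shape[1] - 1)
    sy = np.clip(np.round(src[1]).astype(int), 0, img.shape[0] - 1)
    return img[sy, sx].reshape(h, w, img.shape[-1])

def plane_sweep_visibility(img1, img2, K, R, t, depth_planes, tau=0.1):
    # Visibility prior for pixels of img1 w.r.t. img2, where (R, t) maps
    # view-1 coordinates to view-2 coordinates and K is the intrinsic matrix.
    h, w, _ = img1.shape
    n = np.array([0.0, 0.0, 1.0])  # fronto-parallel plane normal in view 1
    errors = []
    for d in depth_planes:
        # Plane-induced homography from view-1 pixels to view-2 pixels.
        H = K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)
        warped = warp_with_homography(img2, H, (h, w))
        errors.append(np.abs(img1 - warped).mean(axis=-1))
    best_err = np.min(np.stack(errors), axis=0)  # per-pixel min over planes
    # Low matching error at some depth plane => pixel likely visible in img2.
    return np.exp(-best_err / tau)
```

Note that this only needs the two input images and camera parameters, which is why such a prior can be computed without training a network on large datasets.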
In the second part, we aim to further improve the regularization by learning a scene-specific prior that does not suffer from generalization issues. We achieve this by learning the prior on the given scene alone, without pre-training on large datasets. In particular, we design augmented NeRFs that provide better depth supervision to the main NeRF in certain regions of the scene. Further, we extend this framework to newer and faster radiance field models such as TensoRF and ZipNeRF. Through extensive experiments on multiple datasets, we show the superiority of our approach in sparse input novel view synthesis.
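To make the notion of scene-specific depth supervision concrete, here is a hypothetical loss sketch: an augmented model trained on the same scene renders depth that is trusted only where a reliability test passes, and the main model's depth is pulled toward it there. The names depth_main and depth_aug and the reprojection-error test are assumptions for illustration, not the thesis formulation.

```python
import torch

def augmented_depth_loss(depth_main, depth_aug, reproj_err, err_thresh=0.05):
    # depth_main: depth rendered by the main NeRF for a batch of rays.
    # depth_aug:  depth from the augmented model on the same rays.
    # reproj_err: photometric reprojection error of the augmented depth,
    #             used to decide where the augmented depth is reliable.
    mask = (reproj_err < err_thresh).float()
    # Supervise the main model only in reliable regions; detach so the
    # augmented model is not dragged toward the main model's errors.
    return (mask * (depth_main - depth_aug.detach()).abs()).mean()
```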
In the third part, we address fast sparse input dynamic radiance fields, whose design is severely constrained by the lack of suitable representations and reliable priors for motion. We address the first challenge by designing an explicit motion model based on factorized volumes that is compact and optimizes quickly. We also introduce reliable sparse flow priors to constrain the motion field, since we find that the commonly employed dense optical flow priors are unreliable. We show the benefits of our motion representation and reliable priors on multiple datasets.
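A minimal sketch of what a factorized-volume motion field could look like, using a CP-style factorization along the spatial axes (in the spirit of TensoRF) with per-frame coefficients; the class name, resolutions and the exact factorization are illustrative assumptions rather than the thesis architecture.

```python
import torch
import torch.nn as nn

class FactorizedMotionField(nn.Module):
    # A compact motion model: per-axis 1-D feature lines replace a dense 3-D
    # flow volume, and per-frame coefficients turn the shared spatial basis
    # into a time-varying scene flow.
    def __init__(self, res=64, n_comp=8, n_frames=30):
        super().__init__()
        self.lines = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(n_comp, res)) for _ in range(3)])
        self.time_coef = nn.Parameter(0.01 * torch.randn(n_frames, n_comp, 3))

    def forward(self, xyz, t_idx):
        # xyz: (N, 3) points in [0, 1]^3; t_idx: (N,) frame indices.
        feats = []
        for axis in range(3):
            res = self.lines[axis].shape[1]
            pos = xyz[:, axis] * (res - 1)
            lo = pos.long().clamp(0, res - 2)
            frac = (pos - lo.float()).unsqueeze(0)
            line = self.lines[axis]
            # Linear interpolation along this axis: (n_comp, N).
            feats.append(line[:, lo] * (1 - frac) + line[:, lo + 1] * frac)
        basis = feats[0] * feats[1] * feats[2]          # (n_comp, N)
        coef = self.time_coef[t_idx]                    # (N, n_comp, 3)
        return torch.einsum('cn,ncd->nd', basis, coef)  # (N, 3) scene flow
```

The point of the factorization is parameter count: three 1-D lines of length res replace a dense res^3 grid, which is what makes such a motion model compact and fast to optimize.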
In the final part of this thesis, we study the application of view synthesis to frame rate upsampling in video gaming. Specifically, we consider the problem of temporal view synthesis, where the goal is to predict future frames given the past frames and the camera motion. The key challenge lies in predicting the future motion of objects by estimating their past motion and extrapolating it. We explore the use of multi-plane image representations and scene depth to reliably estimate object motion, particularly in occluded regions. We design a new database to effectively evaluate our approach for temporal view synthesis of dynamic scenes, and show that we achieve state-of-the-art performance.
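As a toy illustration of the core difficulty (not the thesis method), the snippet below extrapolates past per-pixel motion under a constant-velocity assumption and forward-splats the current frame; the holes it leaves are exactly the disoccluded regions where multi-plane image representations and scene depth become necessary.

```python
import numpy as np

def predict_next_frame(frame_t, flow_prev_to_t):
    # Predict frame t+1 by assuming flow(t -> t+1) ~= flow(t-1 -> t) and
    # forward-splatting frame t along the extrapolated motion.
    h, w, _ = frame_t.shape
    pred = np.zeros_like(frame_t)
    ys, xs = np.mgrid[0:h, 0:w]
    nx = np.clip(np.round(xs + flow_prev_to_t[..., 0]).astype(int), 0, w - 1)
    ny = np.clip(np.round(ys + flow_prev_to_t[..., 1]).astype(int), 0, h - 1)
    pred[ny, nx] = frame_t[ys, xs]
    # Pixels that receive no splat (disocclusions) remain black; colliding
    # splats overwrite arbitrarily, which depth ordering would resolve.
    return pred
```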