
dc.contributor.advisor        Soundararajan, Rajiv
dc.contributor.author         Nagabhushan, S N
dc.date.accessioned           2024-09-30T04:55:01Z
dc.date.available             2024-09-30T04:55:01Z
dc.date.submitted             2024
dc.identifier.uri             https://etd.iisc.ac.in/handle/2005/6640
dc.description.abstract       Novel view synthesis refers to the problem of synthesizing novel viewpoints of a scene given images from a few viewpoints. This is a fundamental problem in computer vision and graphics, and it enables a wide variety of applications such as the metaverse, free-viewpoint viewing of events, video gaming, video stabilization and video compression. Recent 3D representations such as radiance fields and multi-plane images significantly improve the quality of images rendered from novel viewpoints. However, these models require a dense sampling of input views for high-quality renders, and their performance degrades significantly when only a few input views are available. In this thesis, we focus on the sparse input novel view synthesis problem for both static and dynamic scenes.

In the first part of this work, we mainly focus on sparse input novel view synthesis of static scenes using neural radiance fields (NeRF). We study the design of reliable and dense priors to better regularize the NeRF in such situations. In particular, we propose a prior on the visibility of the pixels in a pair of input views. We show that this visibility prior, which is related to the relative depth of objects, is dense and more reliable than existing priors on absolute depth. We compute the visibility prior using plane sweep volumes, without the need to train a neural network on large datasets. We evaluate our approach on multiple datasets and show that our model outperforms existing approaches for sparse input novel view synthesis.

In the second part, we aim to further improve the regularization by learning a scene-specific prior that does not suffer from generalization issues. We achieve this by learning the prior on the given scene alone, without pre-training on large datasets. In particular, we design augmented NeRFs to obtain better depth supervision in certain regions of the scene for the main NeRF. Further, we extend this framework to newer and faster radiance field models such as TensoRF and ZipNeRF. Through extensive experiments on multiple datasets, we show the superiority of our approach in sparse input novel view synthesis.

In the third part, we turn to sparse input fast dynamic radiance fields, whose design is severely constrained by the lack of suitable representations and reliable priors for motion. We address the first challenge by designing an explicit motion model based on factorized volumes that is compact and optimizes quickly. We also introduce reliable sparse flow priors to constrain the motion field, since we find that the commonly employed dense optical flow priors are unreliable. We show the benefits of our motion representation and reliable priors on multiple datasets.

In the final part of this thesis, we study the application of view synthesis for frame rate upsampling in video gaming. Specifically, we consider the problem of temporal view synthesis, where the goal is to predict the future frames given the past frames and the camera motion. The key challenge is predicting the future motion of objects by estimating their past motion and extrapolating it. We explore the use of multi-plane image representations and scene depth to reliably estimate object motion, particularly in occluded regions. We design a new database to effectively evaluate our approach for temporal view synthesis of dynamic scenes and show that we achieve state-of-the-art performance.    en_US
dc.description.sponsorship    Ministry of Education, Qualcomm    en_US
dc.language.iso               en_US    en_US
dc.relation.ispartofseries    ;ET00650
dc.rights                     I grant Indian Institute of Science the right to archive and to make available my thesis or dissertation in whole or in part in all forms of media, now or hereafter known. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation.    en_US
dc.subject                    Computer Vision    en_US
dc.subject                    Computer Graphics    en_US
dc.subject                    Machine Learning    en_US
dc.subject                    Deep Learning    en_US
dc.subject                    Radiance Fields    en_US
dc.subject                    3D Representations    en_US
dc.subject                    Novel View Synthesis    en_US
dc.subject                    Temporal View Synthesis    en_US
dc.subject.classification    Research Subject Categories::TECHNOLOGY::Information technology::Computer science::Computer science    en_US
dc.title                      Sparse Input View Synthesis: 3D Representations and Reliable Priors    en_US
dc.type                       Thesis    en_US
dc.degree.name                PhD    en_US
dc.degree.level               Doctoral    en_US
dc.degree.grantor             Indian Institute of Science    en_US
dc.degree.discipline          Engineering    en_US
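
Background note for the abstract above: the NeRF-style radiance fields it refers to render the colour of a camera ray r by sampling N points along the ray, with predicted densities \sigma_i, colours \mathbf{c}_i and inter-sample distances \delta_i, and compositing them with the standard volume rendering approximation (this is the general NeRF formulation, not a result specific to this thesis):

    \hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \bigl(1 - \exp(-\sigma_i \delta_i)\bigr)\, \mathbf{c}_i,
    \qquad
    T_i = \exp\!\Bigl(-\sum_{j=1}^{i-1} \sigma_j \delta_j\Bigr).

With only a few input views, the densities and colours in this optimization are under-constrained; the priors summarized in the abstract (visibility, augmented-NeRF depth supervision, sparse flow) act as additional regularizers on them.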

