Show simple item record

dc.contributor.advisor      Soundararajan, Rajiv
dc.contributor.author       Choudhary, Kapil
dc.date.accessioned         2025-07-01T06:17:53Z
dc.date.available           2025-07-01T06:17:53Z
dc.date.submitted           2025
dc.identifier.uri           https://etd.iisc.ac.in/handle/2005/6977
dc.description.abstract     Novel view synthesis involves generating unseen perspectives of a scene from videos captured at limited viewpoints. It is an interesting problem in computer graphics and computer vision, with many applications, such as virtual and augmented reality (VR/AR), film production, autonomous driving, and sports streaming. Methods that model static radiance fields, such as neural radiance fields and 3D Gaussian splatting, have achieved remarkable results in synthesizing photo-realistic renderings of novel views. However, learning the representation of a dynamic scene introduces additional challenges in modeling the motion within the scene. Further, existing models require dense viewpoints to produce good-quality renderings, and their performance degrades significantly as the number of viewpoints is reduced. This thesis focuses on the problem of dynamic view synthesis from sparse input views. In the first part of the thesis, we study reliable and dense flow priors to constrain the motion in dynamic radiance fields. We propose an efficient selection of dense flow priors, since naively obtained dense flow leads to unreliable priors. In the second part of the thesis, we study the challenges introduced by volumetric motion modeling. Specifically, we address the limitations of unidirectional motion models, which lead to many-to-one mappings of points. We enforce cyclic motion consistency with the help of bidirectional motion fields to achieve superior reconstruction of novel views of dynamic scenes. Further, the design of the bidirectional motion field allows us to track object motion in the synthesized views.  en_US
dc.language.iso             en_US  en_US
dc.relation.ispartofseries  ;ET00987
dc.rights                   I grant Indian Institute of Science the right to archive and to make available my thesis or dissertation in whole or in part in all forms of media, now or hereafter known. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation.  en_US
dc.subject                  computer graphics  en_US
dc.subject                  computer vision  en_US
dc.subject                  augmented reality  en_US
dc.subject                  virtual reality  en_US
dc.subject                  dynamic view synthesis  en_US
dc.subject                  sparse input views  en_US
dc.subject                  motion models  en_US
dc.subject                  bi-directional motion field  en_US
dc.subject                  novel view synthesis  en_US
dc.subject.classification   Research Subject Categories::TECHNOLOGY::Electrical engineering, electronics and photonics::Electronics  en_US
dc.title                    Sparse Input Novel View Synthesis of Dynamic Scenes  en_US
dc.type                     Thesis  en_US
dc.degree.name              MTech (Res)  en_US
dc.degree.level             Masters  en_US
dc.degree.grantor           Indian Institute of Science  en_US
dc.degree.discipline        Engineering  en_US
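
A minimal sketch of the cyclic motion consistency described in the abstract, using assumed notation that is not taken from the thesis: let F_{t \to t+1} denote a forward motion field that warps a 3D point x from time t to t+1, and B_{t+1 \to t} a backward field that warps it back; the round-trip error is then penalized.

\[
\mathbf{x}' = \mathbf{x} + F_{t \to t+1}(\mathbf{x}), \qquad
\mathbf{x}'' = \mathbf{x}' + B_{t+1 \to t}(\mathbf{x}'), \qquad
\mathcal{L}_{\mathrm{cyc}} = \lVert \mathbf{x}'' - \mathbf{x} \rVert_2^2 .
\]

Driving this loss toward zero encourages every point to return to itself after a forward-backward round trip, discouraging the many-to-one point mappings that a purely unidirectional motion model permits.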

