dc.contributor.advisor | Soundararajan, Rajiv | |
dc.contributor.author | Choudhary, Kapil | |
dc.date.accessioned | 2025-07-01T06:17:53Z | |
dc.date.available | 2025-07-01T06:17:53Z | |
dc.date.submitted | 2025 | |
dc.identifier.uri | https://etd.iisc.ac.in/handle/2005/6977 | |
dc.description.abstract | Novel view synthesis involves generating unseen perspectives of a scene based on videos
captured from limited viewpoints. It is an interesting problem in computer graphics and
computer vision, with many applications such as virtual and augmented reality (AR),
film production, autonomous driving, and sports streaming. Methods to model static
radiance fields, such as neural radiance fields and 3D Gaussian splatting, have achieved
remarkable results in synthesizing photo-realistic renderings of novel views. However,
learning scene representations for dynamic scenes introduces several challenges in
modeling scene motion. Further, existing models require dense viewpoints to
generate good-quality renderings, and their performance degrades significantly
as the number of viewpoints is reduced. This thesis focuses on the problem of
dynamic view synthesis for sparse input views.
In the first part of this thesis, we study reliable and dense flow priors
to constrain the motion in dynamic radiance fields. We propose an efficient selection
of dense flow priors, since naively obtaining dense flow leads to unreliable priors. In the
second part of this thesis, we study the challenges introduced by volumetric motion modeling.
Specifically, we address a limitation of unidirectional motion models: they permit
many-to-one mappings of points. We enforce cyclic motion consistency with the help of
bidirectional motion fields to achieve superior reconstruction of novel views of dynamic
scenes. Further, the design of the bidirectional motion field allows us to track object
motion in synthesized views. | en_US |
dc.language.iso | en_US | en_US |
dc.relation.ispartofseries | ;ET00987 | |
dc.rights | I grant Indian Institute of Science the right to archive and to make available my thesis or dissertation in whole or in part in all forms of media, now or hereafter known. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part
of this thesis or dissertation. | en_US |
dc.subject | computer graphics | en_US |
dc.subject | computer vision | en_US |
dc.subject | augmented reality | en_US |
dc.subject | virtual reality | en_US |
dc.subject | dynamic view synthesis | en_US |
dc.subject | sparse input views | en_US |
dc.subject | motion models | en_US |
dc.subject | bidirectional motion field | en_US |
dc.subject | novel view synthesis | en_US |
dc.subject.classification | Research Subject Categories::TECHNOLOGY::Electrical engineering, electronics and photonics::Electronics | en_US |
dc.title | Sparse Input Novel View Synthesis of Dynamic Scenes | en_US |
dc.type | Thesis | en_US |
dc.degree.name | MTech (Res) | en_US |
dc.degree.level | Masters | en_US |
dc.degree.grantor | Indian Institute of Science | en_US |
dc.degree.discipline | Engineering | en_US |