Show simple item record

dc.contributor.advisor Soundararajan, Rajiv
dc.contributor.author Vijayalakshmi, K
dc.date.accessioned 2022-03-04T04:36:14Z
dc.date.available 2022-03-04T04:36:14Z
dc.date.submitted 2021
dc.identifier.uri https://etd.iisc.ac.in/handle/2005/5650
dc.description.abstract We consider the problem of temporal view synthesis, where the goal is to predict a future video frame from the past frames using knowledge of the depth and relative camera motion. The problem has applications in frame rate upsampling for virtual reality and gaming on low-compute devices. One of the major challenges in predicting future frames is infilling the regions that get disoccluded or uncovered owing to the camera motion. In contrast to revealing the disoccluded regions through intensity-based infilling, we study the idea of an infilling vector that infills by pointing to a non-disoccluded region in the synthesized view. To exploit the structure of disocclusions created by camera motion during their infilling, we rely on two important cues: the temporal correlation of infilling directions and depth. We design a learning framework to predict the infilling vector by computing a temporal prior that reflects past infilling directions and providing it, along with a normalized depth map, as input to the network. We conduct extensive experiments on a large-scale dataset we build for evaluating temporal view synthesis, in addition to the SceneNet RGB-D dataset. Our experiments demonstrate that our infilling vector prediction approach achieves superior quantitative and qualitative infilling performance compared to other approaches in the literature. Finally, we also study infilling vector prediction and temporal view synthesis when the scene depth may not be available. We show that the estimated depth can be effectively used for frame warping and disocclusion infilling to predict the next frame.
dc.language.iso en_US
dc.rights I grant Indian Institute of Science the right to archive and to make available my thesis or dissertation in whole or in part in all forms of media, now or hereafter known. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation.
dc.subject View synthesis
dc.subject Disocclusion infilling
dc.subject Splatting
dc.subject Video frame
dc.subject.classification Research Subject Categories::TECHNOLOGY::Electrical engineering, electronics and photonics
dc.title Revealing Disocclusions in Temporal View Synthesis
dc.type Thesis
dc.degree.name MTech (Res)
dc.degree.level Masters
dc.degree.grantor Indian Institute of Science
dc.degree.discipline Engineering
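
Illustrative note: the abstract above describes revealing disocclusions by pointing to non-disoccluded regions rather than synthesizing intensities directly. The following is a minimal NumPy sketch of that infilling-vector idea, not the thesis implementation; the function name, array layouts, and toy inputs are assumptions for illustration, and in the thesis the vectors are predicted by a network from a temporal prior of past infilling directions and a normalized depth map.

    import numpy as np

    def infill_with_vectors(warped_frame, disocclusion_mask, infilling_vectors):
        """Fill disoccluded pixels by copying colours from where their vectors point.

        warped_frame:      (H, W, 3) frame warped to the target view, with holes.
        disocclusion_mask: (H, W) bool array, True where a pixel is disoccluded.
        infilling_vectors: (H, W, 2) per-pixel (dy, dx) offsets from a disoccluded
                           pixel to a non-disoccluded source pixel.
        """
        h, w, _ = warped_frame.shape
        filled = warped_frame.copy()

        ys, xs = np.nonzero(disocclusion_mask)
        # Locations the infilling vectors point to, rounded and clamped to the image.
        src_y = np.clip(ys + np.rint(infilling_vectors[ys, xs, 0]).astype(int), 0, h - 1)
        src_x = np.clip(xs + np.rint(infilling_vectors[ys, xs, 1]).astype(int), 0, w - 1)

        # Reveal the disocclusion by copying from the pointed-to (visible) pixels.
        filled[ys, xs] = warped_frame[src_y, src_x]
        return filled

    # Toy usage: a 4x4 white frame with a 2x2 hole whose vectors point 2 pixels right.
    frame = np.ones((4, 4, 3), dtype=np.float32)
    mask = np.zeros((4, 4), dtype=bool)
    mask[1:3, 0:2] = True
    frame[mask] = 0.0                    # black out the disoccluded region
    vectors = np.zeros((4, 4, 2), dtype=np.float32)
    vectors[mask] = [0.0, 2.0]           # hypothetical vectors; normally network-predicted
    print(infill_with_vectors(frame, mask, vectors)[1:3, 0:2])  # hole filled with 1.0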

