Revealing Disocclusions in Temporal View Synthesis
Abstract
We consider the problem of temporal view synthesis, where the goal is to predict a future video frame from past frames using knowledge of the scene depth and the relative camera motion. The problem has applications in frame-rate upsampling for virtual reality and gaming on low-compute devices. One of the major challenges in predicting future frames is infilling the regions that become disoccluded, or uncovered, owing to camera motion. In contrast to revealing disoccluded regions through intensity-based infilling, we study the idea of an infilling vector that infills by pointing to a non-disoccluded region in the synthesized view. To exploit the structure of disocclusions created by camera motion during their infilling, we rely on two important cues: the temporal correlation of infilling directions and depth. We design a learning framework to predict the infilling vector by computing a temporal prior that reflects past infilling directions, and by providing a normalized depth map as input to the network. We conduct extensive experiments on a large-scale dataset that we build for evaluating temporal view synthesis, in addition to the SceneNet RGB-D dataset. Our experiments demonstrate that our infilling vector prediction approach achieves superior quantitative and qualitative infilling performance compared to other approaches in the literature. Finally, we also study infilling vector prediction and temporal view synthesis when the scene depth may not be available, and show that estimated depth can be effectively used for frame warping and disocclusion infilling to predict the next frame.
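To make the infilling-vector idea concrete, the following is a minimal NumPy sketch, not the paper's implementation: the function name, array layout, and nearest-neighbor sampling are illustrative assumptions. It shows how a predicted per-pixel vector can fill a disoccluded pixel by copying from a non-disoccluded location in the warped frame, rather than by hallucinating intensities.

```python
import numpy as np

def infill_with_vectors(warped, mask, vectors):
    """Fill disoccluded pixels by copying from the location each
    infilling vector points to (hypothetical helper, for illustration).

    warped  : (H, W, 3) float array, the depth-warped next frame
    mask    : (H, W) bool array, True where a pixel is disoccluded
    vectors : (H, W, 2) float array, per-pixel (dy, dx) infilling vector
              pointing toward a non-disoccluded source pixel
    """
    H, W, _ = warped.shape
    ys, xs = np.where(mask)
    # Source location each disoccluded pixel samples from
    # (rounded to the nearest pixel and clamped to the image bounds).
    src_y = np.clip(np.round(ys + vectors[ys, xs, 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + vectors[ys, xs, 1]).astype(int), 0, W - 1)
    out = warped.copy()
    out[ys, xs] = warped[src_y, src_x]
    return out
```

In the paper's framework, the vector field would come from the learned network conditioned on the temporal prior and the normalized depth map; the sketch above only illustrates how such a field, once predicted, reveals disocclusions by pointing into visible content.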