Speaker: Vijayalakshmi K

Affiliation: M.Tech. (Res), ECE Dept, IISc Bangalore

Date and Time: September 24, 2021 (Friday), 5:45 PM to 6:30 PM

Talk Recording YouTube Link: https://youtu.be/qG0x-eeg79I

Abstract: Suppose a user is exploring a virtual environment on a head-mounted display. Given the past rendered video frame and the most recent head position, our goal is to directly predict a future video frame without having to graphically render it. We refer to this problem as egomotion-aware temporal view synthesis. The biggest challenge in such view synthesis through warping is the disocclusion of the background. In contrast to revealing the disoccluded regions through intensity-based infilling, we study the idea of an infilling vector, which infills by pointing to a non-disoccluded region in the synthesized view. To exploit the structure of disocclusions created by camera motion, we rely on two important cues during infilling: the temporal correlation of infilling directions and depth. Our extensive experiments on a large-scale dataset that we build for evaluating temporal view synthesis, in addition to datasets from the literature, demonstrate that our infilling vector prediction approach achieves superior quantitative and qualitative infilling performance compared to other approaches in the literature.
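To make the infilling-vector idea concrete, below is a minimal NumPy sketch of the copy step: each disoccluded pixel is filled from the location its predicted vector points to. The function name, array shapes, and the (dy, dx) offset parameterization are illustrative assumptions for this sketch, not the speaker's implementation; in the talk's setting the vectors come from a learned predictor guided by temporal correlation and depth.

```python
import numpy as np

def infill_with_vectors(warped_frame, disocclusion_mask, infilling_vectors):
    """Fill disoccluded pixels by copying from non-disoccluded locations.

    warped_frame:       (H, W, 3) frame obtained by warping the past view
    disocclusion_mask:  (H, W) boolean, True where the background is disoccluded
    infilling_vectors:  (H, W, 2) per-pixel (dy, dx) offsets, each assumed to
                        point to a non-disoccluded pixel in the synthesized view
    """
    H, W, _ = warped_frame.shape
    filled = warped_frame.copy()
    ys, xs = np.nonzero(disocclusion_mask)
    # Source coordinate = disoccluded pixel + its predicted infilling vector,
    # rounded to the nearest pixel and clipped to the frame boundary.
    src_y = np.clip(ys + infilling_vectors[ys, xs, 0].round().astype(int), 0, H - 1)
    src_x = np.clip(xs + infilling_vectors[ys, xs, 1].round().astype(int), 0, W - 1)
    filled[ys, xs] = warped_frame[src_y, src_x]
    return filled
```

In contrast to intensity-based infilling, which hallucinates new pixel values, this formulation only relocates existing content, which is why the structure of camera-motion disocclusions (directions correlated over time, background lying behind the foreground in depth) can be exploited when predicting the vectors.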

Biography: Vijayalakshmi received her B.Tech. degree in Electronics and Communication Engineering from Sri Siva Subramaniya Nadar (SSN) College of Engineering, India, in 2018, and is currently pursuing the M.Tech. (Research) degree at the Department of Electrical Communication Engineering, Indian Institute of Science, Bengaluru. Her research interests lie at the intersection of Computer Vision, Image and Video Processing, and Deep Learning.