Talk Titles: “An energy-efficient probabilistic broadcast mechanism for ad-hoc networks” and “Revealing Disocclusions in Temporal View Synthesis through Infilling Vector Prediction”

Speakers: Vinay Kumar and Vijayalakshmi K

Talk 1 Recording YouTube Link: https://youtu.be/qG0x-eeg79I

Talk 2 Recording YouTube Link: https://youtu.be/qG0x-eeg79I

Talk 1 Abstract:

Motivated by applications in wireless sensor networks and the Internet of Things, in this talk we consider the problem of energy-efficient broadcasting on random geometric graphs (RGGs). A source node at the origin encodes k data packets of information into n > k coded packets and transmits them to all its one-hop neighbors. The encoding is such that any node that receives at least k of the n coded packets can retrieve the original k data packets. Every other node in the network follows a probabilistic forwarding protocol: upon reception of a previously unreceived packet, the node forwards it with probability p and does nothing with probability 1-p. We are interested in the minimum forwarding probability that ensures that a large fraction of nodes can decode the information from the source. We deem this a near-broadcast. The performance metric of interest is the expected total number of transmissions at this minimum forwarding probability, where the expectation is over both the forwarding protocol and the realization of the RGG. In comparison with probabilistic forwarding without coding, our treatment of the problem indicates that, with a judicious choice of n, it is possible to reduce the expected total number of transmissions while still ensuring a near-broadcast. Techniques from continuum percolation and ergodic theory are used to characterize the probabilistic broadcast algorithm.
Joint work with Navin Kashyap and D. Yogeshwaran.
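
As an illustration of the protocol described in the abstract, the following minimal Python sketch simulates probabilistic forwarding of coded packets on a random geometric graph. The parameter values (number of nodes, connection radius, k, n, p) and the choice of node 0 as the source are hypothetical placeholders for illustration, not the settings analysed in the talk.

import math
import random
from collections import deque

def random_geometric_graph(num_nodes, radius, seed=0):
    # Drop nodes uniformly in the unit square and connect pairs within `radius`.
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(num_nodes)]
    adj = {i: [] for i in range(num_nodes)}
    for i in range(num_nodes):
        for j in range(i + 1, num_nodes):
            if math.dist(pos[i], pos[j]) <= radius:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def probabilistic_broadcast(adj, source, n_packets, p, rng):
    # The source transmits each of the n coded packets to its one-hop neighbours.
    # Every other node, on first reception of a packet, forwards it with
    # probability p and stays silent with probability 1 - p.
    received = {node: set() for node in adj}
    received[source] = set(range(n_packets))
    transmissions = 0
    queue = deque((source, pkt) for pkt in range(n_packets))
    while queue:
        node, pkt = queue.popleft()
        transmissions += 1                      # one broadcast to all neighbours
        for nb in adj[node]:
            if pkt not in received[nb]:
                received[nb].add(pkt)
                if rng.random() < p:
                    queue.append((nb, pkt))
    return received, transmissions

if __name__ == "__main__":
    k, n, p = 10, 15, 0.35                      # hypothetical parameters
    adj = random_geometric_graph(num_nodes=500, radius=0.08, seed=1)
    received, tx = probabilistic_broadcast(adj, source=0, n_packets=n, p=p,
                                           rng=random.Random(2))
    decoders = sum(1 for node in adj if len(received[node]) >= k)
    print(f"fraction decoding: {decoders / len(adj):.2f}, transmissions: {tx}")

Sweeping p in such a simulation gives a rough empirical view of the minimum forwarding probability needed for a near-broadcast and of the transmission count at that point; the talk characterizes these quantities analytically via continuum percolation and ergodic theory.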

Speaker 1 Biography:

Vinay Kumar received a B.E. (Hons.) in Electrical and Electronics Engineering and an M.Sc. (Hons.) in Mathematics from the Birla Institute of Technology and Science, Pilani. He is currently pursuing a Ph.D. in the Department of Electrical Communication Engineering at the Indian Institute of Science, Bengaluru. He was a recipient of the Cisco Ph.D. research fellowship from 2015 to 2020. His research interests lie in the areas of random graphs, percolation theory, and distributed algorithms on networks.

Talk 2 Abstract:

Suppose a user is exploring a virtual environment on a head-mounted display device. Given a past rendered video frame and the most recent head position, our goal is to directly predict a future video frame without having to graphically render it. We refer to this problem as egomotion-aware temporal view synthesis. The biggest challenge in such view synthesis through warping is the disocclusion of the background. In contrast to revealing the disoccluded regions through intensity-based infilling, we study the idea of an infilling vector, which infills by pointing to a non-disoccluded region in the synthesized view. To exploit the structure of disocclusions created by camera motion during their infilling, we rely on two important cues: the temporal correlation of infilling directions and depth. Our extensive experiments on a large-scale dataset that we build for evaluating temporal view synthesis, in addition to datasets from the literature, demonstrate that our infilling vector prediction approach achieves superior quantitative and qualitative infilling performance compared to other approaches in the literature.
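
To make the infilling-vector idea concrete, here is a minimal NumPy sketch (with hypothetical names such as infill_with_vectors) that fills each disoccluded pixel by copying the colour of the pixel its vector points to. The constant offset used in the demo is only a stand-in: the approach presented in the talk predicts per-pixel infilling vectors, guided by the temporal correlation of infilling directions and by depth.

import numpy as np

def infill_with_vectors(warped, disocclusion_mask, infill_vectors):
    # warped:            (H, W, 3) frame obtained by warping the past view
    # disocclusion_mask: (H, W) bool, True where no source pixel mapped (holes)
    # infill_vectors:    (H, W, 2) integer (dy, dx) offsets pointing to a
    #                    non-disoccluded pixel whose colour is copied into the hole
    H, W, _ = warped.shape
    ys, xs = np.nonzero(disocclusion_mask)
    src_y = np.clip(ys + infill_vectors[ys, xs, 0], 0, H - 1)
    src_x = np.clip(xs + infill_vectors[ys, xs, 1], 0, W - 1)
    out = warped.copy()
    out[ys, xs] = warped[src_y, src_x]
    return out

if __name__ == "__main__":
    H, W = 4, 6
    warped = np.arange(H * W * 3, dtype=float).reshape(H, W, 3)
    mask = np.zeros((H, W), dtype=bool)
    mask[:, :2] = True                          # pretend the left strip was disoccluded
    vectors = np.zeros((H, W, 2), dtype=int)
    vectors[..., 1] = 2                         # every hole points two pixels to the right
    filled = infill_with_vectors(warped, mask, vectors)
    print(filled[:, :3, 0])                     # holes now carry copied background values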

Speaker 2 Biography:

Vijayalakshmi received her B.Tech. degree in Electronics and Communication Engineering from Sri Siva Subramaniya Nadar (SSN) College of Engineering, India, in 2018, and is currently pursuing an M.Tech. (Research) degree in the Department of Electrical Communication Engineering at the Indian Institute of Science, Bengaluru. Her research interests lie at the intersection of Computer Vision, Image and Video Processing, and Deep Learning.
