Talk Titles: “Distributed Correlation Testing” and “Deep Learning Methods for Audio-EEG Analysis”
Speakers: K.R.Sahasranand, Jaswanth Reddy Katthi
Talk 1 Recording YouTube Link: https://youtu.be/RfG2odFleuc
Talk 2 Recording YouTube Link: https://youtu.be/qG0x-eeg79I
Date and Time: August 27, 2021 (Friday), 5:00 PM to 6:30 PM
Talk 1 Abstract:
Applications in the Internet of Things (IoT) often require low-compute devices to perform distributed inference and testing by communicating over a low-bandwidth link. This gives rise to a plethora of new problems that may broadly be termed resource-constrained statistical inference problems. In this talk, we will discuss the following distributed hypothesis testing problem. Two parties observing sequences of uniformly distributed bits want to determine whether their bits were generated independently. To that end, the first party sends a message to the second. A simple communication scheme is for the first party to take only as many sample bits as the sample complexity of independence testing dictates and send them to the second party. But is there a scheme that uses fewer bits of communication than the sample complexity, perhaps by observing more sample bits? We show that the answer to this question is in the affirmative. Furthermore, we provide lower bounds that use hypercontractivity and reverse hypercontractivity to obtain a change-of-measure bound between the joint and the independent distributions. The proposed scheme is then extended to handle composite correlation testing and high-dimensional correlation testing.
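As a rough illustration of the baseline scheme mentioned above (the first party transmits its sample bits and the second party tests for independence), here is a minimal Python sketch. The agreement-counting statistic, the threshold, and the 90%-agreement correlation model are illustrative assumptions only; they are not the scheme proposed in the talk, which communicates fewer bits than the sample complexity.

```python
import numpy as np

def baseline_independence_test(x_bits, y_bits, threshold=0.6):
    """Naive baseline: party 1 transmits its sampled bits to party 2,
    which counts agreements. Under independence of uniform bits the
    agreement fraction concentrates around 1/2; a large deviation
    suggests correlation. (Illustrative only: the statistic and the
    threshold are assumptions, not the scheme from the talk.)"""
    agreement = np.mean(x_bits == y_bits)
    return abs(agreement - 0.5) > (threshold - 0.5)

rng = np.random.default_rng(0)
n = 2000                                   # bits transmitted = sample complexity of the naive scheme
x = rng.integers(0, 2, size=n)             # party 1's uniform bits
y_indep = rng.integers(0, 2, size=n)       # independent bits at party 2
flip = rng.random(n) < 0.1                 # correlated case: y agrees with x 90% of the time
y_corr = np.where(flip, 1 - x, x)

print(baseline_independence_test(x, y_indep))  # expected: False (declare independent)
print(baseline_independence_test(x, y_corr))   # expected: True  (declare correlated)
```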
Speaker 1 Biography:
K.R.Sahasranand received his B.Tech in Computer Science and Engineering from Amrita University, Kerala, in 2009 and worked as a Scientist/Engineer at the Indian Space Research Organisation, Bangalore, during 2010-'11. He completed his Master's in Electrical Communication Engineering at the Indian Institute of Science, Bangalore, in 2015, where he is currently a PhD student. His research interests include detection and estimation, information theory, and signal processing.
Talk 2 Abstract:
The perception of speech and audio is one of the defining features of humans. Much of the brain's underlying processing as we listen to acoustic signals is unknown, and significant research efforts are needed to unravel it. Non-invasive recording techniques such as EEG and MEG are commonly deployed to capture the brain responses to auditory stimuli. However, these recordings also capture artifacts that distort the stimulus-response analysis, and their effect becomes more evident for naturalistic stimuli. To reduce the noise, pre-processing methods such as canonical correlation analysis (CCA) are widely used. However, these methods assume a simplistic linear relationship between the audio features and the EEG responses and therefore may not alleviate the effect of recorded artifacts. In this talk, we discuss novel methods that use advances in machine learning to improve audio-EEG analysis. We present a deep learning framework for audio-EEG analysis in intra-subject and inter-subject settings. The deep learning models are trained with a Pearson correlation-based cost function between the stimuli and responses, and are regularized to alleviate the effect of the abundant noise present in the EEG recordings. The framework allows us to obtain the optimal audio and EEG representations that are maximally correlated. We discuss their application to speech and music listening tasks, and conclude with a discussion on future directions in audio-EEG analysis.
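To make the Pearson correlation-based cost concrete, the following is a minimal Python (PyTorch) sketch of training two small encoders so that their audio and EEG representations become maximally correlated. The encoder architectures, feature shapes, and hyperparameters are assumptions chosen for illustration and do not reproduce the models or regularization discussed in the talk.

```python
import torch

def pearson_loss(audio_repr, eeg_repr, eps=1e-8):
    """Negative Pearson correlation between learned audio and EEG
    representations (averaged over representation dimensions), so that
    maximising correlation minimises the loss. Inputs are assumed to be
    (time, dims); illustrative sketch, not the exact cost from the talk."""
    a = audio_repr - audio_repr.mean(dim=0, keepdim=True)
    e = eeg_repr - eeg_repr.mean(dim=0, keepdim=True)
    corr = (a * e).sum(dim=0) / (a.norm(dim=0) * e.norm(dim=0) + eps)
    return -corr.mean()

# Hypothetical toy data and linear encoders, for illustration only
audio = torch.randn(1000, 64)    # e.g., audio envelope features over time
eeg = torch.randn(1000, 128)     # e.g., multi-channel EEG over the same window
audio_net = torch.nn.Linear(64, 4)
eeg_net = torch.nn.Linear(128, 4)
opt = torch.optim.Adam(list(audio_net.parameters()) + list(eeg_net.parameters()), lr=1e-3)

for _ in range(10):
    opt.zero_grad()
    loss = pearson_loss(audio_net(audio), eeg_net(eeg))
    loss.backward()
    opt.step()
```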
Speaker 2 Biography:
Jaswanth Reddy Katthi completed his undergraduate degree in Electronics and Communication Engineering from Jawaharlal Nehru Technological University, Anantapur, in 2017. He is an M. Tech (Research) student in the EE department under the System Science and Signal Processing programme, and has recently presented his thesis colloquium talk. As part of the LEAP lab, he pursued his research under the guidance of Dr. Sriram Ganapathy. Currently, he is a camera systems engineer in Qualcomm Hyderabad's Multimedia Systems division. His research interests include machine learning, signal processing, and cognitive neuroscience.