Ajay Srinivasamurthy is an Applied Scientist with Amazon Alexa in Bangalore, working on large-scale automatic speech recognition problems. Prior to joining Amazon, he was a postdoctoral researcher at the Idiap Research Institute, Switzerland. He received his PhD in Music Technology from Universitat Pompeu Fabra, where he was part of the CompMusic project and worked on several automatic rhythm analysis problems in Indian art music. He holds a master's degree in Signal Processing from the Indian Institute of Science, Bangalore. His research interests span speech, audio, and music signal processing, as well as machine learning.
Ravi Kiran Sarvadevabhatla is an Assistant Professor at IIIT Hyderabad. He obtained his Ph.D. from the Indian Institute of Science (IISc), Bangalore, in 2018. He has industrial R&D experience in India (Qualcomm, 2018) and at US-based R&D companies (2008-2013). Ravi obtained his MS in Computer Science from the University of Washington, Seattle, USA, in 2008. His research interests include applied machine learning, deep learning, and analytics for multi-modal multimedia data. For additional information, visit his webpage at https://ravika.github.io/.
Suryakanth Gangashetty obtained his PhD from the Indian Institute of Technology Madras. He is presently an Assistant Professor at the International Institute of Information Technology (IIIT) Hyderabad, Gachibowli, Telangana. His research interests include speech processing, neural networks, signal processing, pattern recognition, soft computing, machine learning, image processing, natural language processing, artificial intelligence, and fuzzy logic. He was the Local Organizing Committee Chair of Interspeech 2018, which was held in India for the first time.
Anbumani (Anbu) Subramanian is a Principal Engineer at Intel. His recent work includes the creation of IDD, an open dataset of Indian driving conditions for research towards autonomous and assisted driving. Prior to Intel, he was with Hewlett-Packard Labs, where he worked on large-scale crowd-cloud systems for visual understanding and on depth-camera-based multimodal interaction systems. In 2005, as a postdoc at Virginia Tech, he developed an omnidirectional vision system for an autonomous surface vehicle, the first of its kind in the world. He started his career as a Software Engineer at IBM in 1996. He received his M.S. and Ph.D. from Virginia Tech, specializing in computer vision, and a B.E. from Bharathiyar University. He has several peer-reviewed publications and 10 patents, and is a Senior Member of the IEEE. His expertise and interests include computer vision, machine learning, and human-computer interaction.
Harshavardhan Sundar is a research scientist with Amazon Alexa in Cambridge, USA, as part of the Acoustic Event Detection team. Before joining Amazon, he was a visiting research scholar at the Language Technologies Institute, Carnegie Mellon University, Pittsburgh, and later a postdoctoral fellow in the Dept. of Electrical and Computer Engineering, Johns Hopkins University, Baltimore. He completed his Ph.D. in the Dept. of Electrical Communication Engineering at the Indian Institute of Science, Bangalore, India. His research interests lie broadly in applying signal processing and machine learning techniques to extract information from speech and audio signals, and more specifically in acoustic scene analysis, focusing on the problems of source localization and event detection.