Soma Biswas is an Associate Professor in the Department of Electrical Engineering at the Indian Institute of Science (IISc). She received her Ph.D. degree in Electrical and Computer Engineering from the University of Maryland, College Park, in 2009. She then worked as a Research Faculty member at the University of Notre Dame and as a Research Scientist at GE Research before joining IISc. Her research interests include computer vision, pattern recognition, and machine learning. She serves as an Associate Editor for the IEEE Transactions on Image Processing and is a Senior Member of the IEEE.
Vineeth Balasubramanian is an Associate Professor in the Department of Computer Science and Engineering at the Indian Institute of Technology, Hyderabad (IIT-H), and also serves as Head of the Department of Artificial Intelligence at IIT-H. His research interests include deep learning, machine learning, and computer vision. His research has resulted in ~100 peer-reviewed publications at various international venues, including top-tier ones such as ICML, CVPR, ICCV, KDD, ICDM, and IEEE TPAMI. His PhD dissertation at Arizona State University on the Conformal Predictions framework was nominated for the Outstanding PhD Dissertation award at the Department of Computer Science. He is an active reviewer/contributor at many conferences, such as NeurIPS, CVPR, ICCV, AAAI, and IJCAI, with a recent Outstanding Reviewer award at CVPR 2019, as well as at journals including IEEE TPAMI, IEEE TNNLS, JMLR, and Machine Learning. He currently serves as the Secretary of the AAAI India Chapter. For more details, please see https://iith.ac.in/~vineethnb/.
Lalitha Vadlamani received her B.E. degree in Electronics and Communication Engineering from Osmania University, Hyderabad, in 2003, and her M.E. and Ph.D. degrees from the Indian Institute of Science (IISc), Bangalore, in 2005 and 2015, respectively. Since May 2015, she has been working as an Assistant Professor at IIIT Hyderabad, where she is affiliated with the Signal Processing and Communications Research Center. Prior to joining IIIT Hyderabad, she worked as a research intern at Microsoft Research, Bangalore. Her research interests include coding for distributed storage and computing, index coding, polar codes, and learning-based codes.
Emanuele Viterbo (F’2011) received his degree (Laurea) in Electrical Engineering in 1989 and his Ph.D. in Electrical Engineering in 1995, both from the Politecnico di Torino, Torino, Italy. From 1990 to 1992 he was with the European Patent Office, The Hague, The Netherlands, as a patent examiner in the field of dynamic recording and error-control coding. Between 1995 and 1997 he held a post-doctoral position in the Dipartimento di Elettronica of the Politecnico di Torino. In 1997-98 he was a post-doctoral research fellow at the Information Sciences Research Center of AT&T Research, Florham Park, NJ, USA. He became first an Assistant Professor (1998) and then an Associate Professor (2005) in the Dipartimento di Elettronica at the Politecnico di Torino. In 2006 he became a Full Professor in DEIS at the University of Calabria, Italy. Since September 2010 he has been a Professor in the ECSE Department and Associate Dean of Graduate Research of the Faculty of Engineering at Monash University, Melbourne, Australia. Emanuele Viterbo is a 2011 Fellow of the IEEE, an ISI Highly Cited Researcher, and a Member of the Board of Governors of the IEEE Information Theory Society (2011-2013 and 2014-2018). He has served as Associate Editor of the IEEE Transactions on Information Theory, European Transactions on Telecommunications, and the Journal of Communications and Networks. His main research interests are in lattice codes for Gaussian and fading channels, algebraic coding theory, algebraic space-time coding, digital terrestrial television broadcasting, and digital magnetic recording.
Dr. Yi Hong received the Ph.D. degree in Electrical Engineering and Telecommunications from the University of New South Wales (Sydney, Australia). She is a Senior Lecturer at Monash University, where she was Director of Graduate Research (2016-2/18) of the ECSE Department. She currently serves on the Australian Research Council College of Experts. She is a Fellow of the IET, a Senior Member of the IEEE, and a member of the IEEE Communications Society, the IEEE Information Theory Society, and the IEEE Vehicular Technology Society. She will be the Tutorial Chair of the 2021 IEEE International Symposium on Information Theory, Melbourne. She was the General Co-Chair of the 2014 IEEE Information Theory Workshop in Hobart, Tasmania; the Technical Program Committee Chair of the 2011 Australian Communication Theory Workshop, Melbourne; the Publicity Chair of the 2009 IEEE Information Theory Workshop in Taormina, Sicily; and a Technical Program Committee member of various leading IEEE conferences. She was an Associate Editor of the IEEE Wireless Communications Letters (WCL) and Transactions on Emerging Telecommunications Technologies (ETT). She received the NICTA-ACoRN Early Career Researcher Award at AUSCTW Adelaide 2007. She leads a group of research students and postdocs working on communications technology, wireless security, and coding techniques for wireless communications and networking, as well as SSD storage.
Raviteja Patchava received the M.E. degree in telecommunications from the Indian Institute of Science, Bangalore, India, in 2014, and the Ph.D. degree from Monash University, Australia, in 2019. From 2014 to 2015, he worked at Qualcomm India Private Limited, Bangalore, on WLAN systems design. He is currently a postdoctoral research fellow at Monash University, Australia. His current research interests include new waveform designs for next-generation wireless systems. He was a recipient of the Prof. S. V. C. Aiya Medal from the Indian Institute of Science, Bangalore, in 2014. He also received the Best Student Paper Award for his publication at the SPAWC 2018 conference in Greece.
Abstract: Emerging mass transportation systems – such as self-driving cars, high-speed trains, drones, flying cars, and supersonic flight – will challenge the design of future wireless networks due to high-mobility environments: a large number of high-mobility users requiring high data rates and low latency. The physical-layer modulation technique is a key design component for meeting these high-mobility requirements. Currently, orthogonal frequency division multiplexing (OFDM) is the modulation scheme deployed in 4G long term evolution (LTE) and 5G cellular systems, where the wireless channel typically exhibits time-varying multipath fading. OFDM can achieve near-capacity performance over a doubly dispersive channel only when the Doppler effect is low, and it suffers heavy degradation under the high Doppler conditions typically found in high-mobility environments. Orthogonal time frequency space (OTFS) modulation was recently proposed by Hadani et al. at WCNC’17, San Francisco, and was shown to provide significant advantages over OFDM in doubly dispersive channels. OTFS multiplexes each information symbol over 2D orthogonal basis functions, specifically designed to combat the dynamics of time-varying multipath channels. As a result, all information symbols experience a constant, flat-fading equivalent channel. OTFS is only in its infancy, leaving many opportunities for significant developments on both practical and theoretical fronts.
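As a sketch of the underlying idea (the delay-Doppler notation below is our addition, not taken from the abstract itself): OTFS places the information symbols $x[k,l]$ on an $N \times M$ delay-Doppler grid and maps them to the time-frequency domain via the inverse symplectic finite Fourier transform (ISFFT),

```latex
X[n,m] = \frac{1}{\sqrt{NM}} \sum_{k=0}^{N-1} \sum_{l=0}^{M-1}
         x[k,l]\, e^{j2\pi\left(\frac{nk}{N} - \frac{ml}{M}\right)} .
```

A conventional multicarrier modulator then transmits $X[n,m]$, and the receiver applies the forward SFFT, so that every $x[k,l]$ experiences an approximately constant equivalent channel.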
Sreeram Kannan is an Assistant Professor at the University of Washington, Seattle, where he runs the information theory lab, focusing on information theory and its applications in communication networks, machine learning, and blockchain systems. He was a postdoctoral scholar at the University of California, Berkeley, from 2012 to 2014; before that, he received his Ph.D. in Electrical and Computer Engineering and M.S. in Mathematics from the University of Illinois at Urbana-Champaign.
Hyeji Kim is currently a researcher at Samsung AI Research Cambridge and will join the Electrical and Computer Engineering department of the University of Texas at Austin as an Assistant Professor in Fall 2020. She was a postdoctoral research associate at the University of Illinois at Urbana-Champaign from 2016 to 2018. She received her Ph.D. and M.S. degrees in Electrical Engineering from Stanford University in 2016 and 2013, respectively, and her B.S. degree with honors in Electrical Engineering from KAIST in 2011. She is a recipient of the Stanford Graduate Fellowship and was a participant in the Rising Stars in EECS Workshop in 2015.
Abstract: Coding theory is a central discipline underpinning the wireline and wireless modems that are the workhorses of the information age. Progress in coding theory has largely been driven by individual human ingenuity, with sporadic breakthroughs over the past century. In this tutorial we survey recent approaches to automating the discovery of codes and corresponding decoding algorithms via deep learning. (1) We study a family of sequential codes parameterized by recurrent neural network (RNN) architectures. We show that creatively designed and trained RNN architectures can decode well-known sequential codes, such as convolutional and turbo codes, with close-to-optimal performance on the additive white Gaussian noise (AWGN) channel – performance otherwise achieved by breakthrough algorithms of our times (the Viterbi and BCJR decoders, representing dynamic programming and forward-backward algorithms). We show strong generalization: we train at a specific signal-to-noise ratio and block length but test over a wide range of these quantities, and demonstrate robustness and adaptivity to deviations from the AWGN setting. (2) We show that deep learning can be used to design both novel codes and decoders for canonical channels, such as the AWGN channel with and without feedback. (3) We discuss how meta-learning can be used to adapt the learnt decoders and encoders to varying channel conditions.
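For a concrete feel for the AWGN channel on which these decoders are evaluated, here is a minimal, self-contained sketch (our own illustration, not code from the tutorial): it estimates the uncoded BPSK bit error rate by Monte Carlo simulation and checks it against the closed-form expression. Coded systems and learned decoders are benchmarked with exactly this style of simulation, swept over a range of signal-to-noise ratios.

```python
import math
import random

def ber_bpsk_awgn(snr_db, n_bits=100_000, seed=0):
    """Monte-Carlo bit error rate of uncoded BPSK over AWGN at a given Eb/N0 (dB)."""
    rng = random.Random(seed)
    ebn0 = 10 ** (snr_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))      # noise std for unit-energy symbols
    errors = 0
    for _ in range(n_bits):
        bit = rng.randrange(2)
        tx = 1.0 if bit else -1.0          # BPSK mapping: 0 -> -1, 1 -> +1
        rx = tx + rng.gauss(0, sigma)      # AWGN channel
        errors += (rx > 0) != bool(bit)    # hard decision at the receiver
    return errors / n_bits

def ber_theory(snr_db):
    """Closed form for BPSK over AWGN: Q(sqrt(2 Eb/N0)) = erfc(sqrt(Eb/N0)) / 2."""
    ebn0 = 10 ** (snr_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebn0))
```

At 0 dB the simulated BER lands near the theoretical value of about 0.079, and it drops steeply as the SNR increases.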
Alessandro Foi received the M.Sc. degree in Mathematics from the Università degli Studi di Milano, Italy, in 2001, the Ph.D. degree in Mathematics from the Politecnico di Milano in 2005, and the D.Sc.Tech. degree in Signal Processing from Tampere University of Technology, Finland, in 2007. He is Professor of Signal Processing at Tampere University, Finland.
His research interests include mathematical and statistical methods for signal processing, functional and harmonic analysis, and computational modeling of the human visual system. His work focuses on spatially adaptive (anisotropic, nonlocal) algorithms for the restoration and enhancement of digital images, on noise modeling for imaging devices, and on the optimal design of statistical transformations for the stabilization, normalization, and analysis of random data.
He is a Senior Member of the IEEE, Member of the Image, Video, and Multidimensional Signal Processing Technical Committee of the IEEE Signal Processing Society, an Associate Editor for the SIAM Journal on Imaging Sciences, and a Senior Area Editor for the IEEE Transactions on Computational Imaging.
Abstract: The additive white Gaussian noise (AWGN) model is ubiquitous in signal processing. This model is often justified by central-limit theorem (CLT) arguments. However, whereas the CLT may support a Gaussian distribution for the errors, it does not provide any justification for the assumed additivity and whiteness. As a matter of fact, data acquired in real applications can seldom be described with good approximation by the AWGN model, especially because errors are typically correlated and not additive. Failure to model the noise accurately leads to misleading analysis, ineffective filtering, and distortion or even failure in the estimation.
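As one standard illustration of what replaces the AWGN assumption (the notation below is ours, chosen to match common raw-sensor-data models): instead of $z = y + \eta$ with $\eta \sim \mathcal{N}(0, \sigma^2)$ independent of the signal, one can use a signal-dependent model

```latex
z(x) = y(x) + \sigma\big(y(x)\big)\,\xi(x), \qquad
\sigma^2\big(y(x)\big) = a\,y(x) + b ,
```

where $x$ is the pixel position, $\xi$ is zero-mean unit-variance noise, and the affine variance law captures a Poissonian (photon-counting) component of strength $a$ plus a Gaussian (readout) component of strength $b$. Correlated noise further replaces a white $\xi$ with a process having a nontrivial power spectral density.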
This tutorial provides an introduction to both signal-dependent and correlated noise and to the relevant models and methods for the analysis and practical processing of signals corrupted by these types of noise. The distribution families covered as leading examples in the tutorial include Poisson, Rayleigh, Rice, multiplicative families, as well as doubly censored distributions. We consider various forms of noise correlation, encompassing pixel and read-out cross-talk, fixed-pattern noise, column noise, etc., as well as related issues like photo-response and gain non-uniformities, and processing-induced noise correlation. Consequently, the introduced models and techniques are applicable to several important imaging scenarios and technologies, such as raw data from digital camera sensors, various types of radiation imaging relevant to security and to biomedical imaging, ultrasound and seismic sensing, magnetic resonance imaging, synthetic aperture radar imaging, photon-limited imaging of faint astronomical sources, microbolometer arrays for long-wavelength infrared imaging in thermography, etc.
The tutorial is accompanied by numerous experimental examples in which the presented methods are applied to challenging signal processing problems, often achieving state-of-the-art results in image and multidimensional data restoration.
Srikanth obtained his B.E. degree from the College of Engineering, Anna University, Chennai, and his M.A.Sc. and Ph.D. degrees from the University of Victoria, British Columbia, Canada. He is currently the Chief Knowledge Officer for Nanocell Networks Pvt. Ltd. and also consults for various companies. Srikanth began his career as a research associate at the University of Victoria, working in the area of DSL and CDMA systems. After his Ph.D., he joined Harris Corporation and worked on baseband algorithms for various wireless standards, including IS-136 and IS-95 systems. He then joined the KBC Research Foundation/AU-KBC Research Centre, where he has focused on OFDM systems for powerline and wireless applications. From 2004 to 2007 he held a Young Scientist Fellowship from the Government of India to work on technologies related to upgrades of the 802.11 and 802.16 standards. He has been following the 802.11 and 4G/5G standards and delivers several courses to corporate clients in this area. He has consulted on various aspects of OFDM systems and has also been involved in setting up a test lab for 802.11. He is also involved in organizing WiFi and 5G knowledge summits to help develop a community for the exchange of technical information in these areas.
Abstract: 5G wireless and IEEE 802.11ax-based WiFi 6 are the two most important technical standards for consumers worldwide. We shall start with a brief outline of the evolution of both the cellular wireless and WLAN standards. The initial goals of the two wireless access mechanisms, and their possible collision/convergence, will be highlighted. We shall then summarize the key technologies included in these standards by 3GPP and the IEEE; for instance, the two standards have several elements in common, such as OFDMA and MIMO. The relevance of the two standards to different use cases, and the question of whether they are competitive or cooperative with one another, will be addressed. Future trends of convergence discussed in the literature, together with related standardization efforts, will be covered. We shall conclude with future trajectories of both the cellular and WLAN evolutions.
Jean-Francois Chamberland is a professor in the Department of Electrical and Computer Engineering at Texas A&M University. He completed a Bachelor of Engineering degree at McGill University, a Master of Science degree at Cornell University, and a Doctorate at the University of Illinois, Urbana-Champaign. His research interests are in the areas of communication and information theory, decision and statistical inference, computer systems and networks, applied probability, and learning. Recently, he has been studying the efficient design of sensing systems and the fundamental limits of wireless communication networks. Applications for his work range from autonomous and connected vehicles to precision agriculture and smart infrastructures. His contributions have been recognized through an IEEE Young Author Best Paper Award from the IEEE Signal Processing Society and a Faculty Early Career Development (CAREER) Award from the National Science Foundation (NSF). He was also invited to present educational innovations at the Frontiers of Engineering Education Symposium. He served as associate department head at Texas A&M University, and he is currently an associate editor for the IEEE Transactions on Information Theory.
Krishna Narayanan received the B.E. degree from Coimbatore Institute of Technology in 1992, the M.S. degree from Iowa State University in 1994, and the Ph.D. degree in Electrical Engineering from the Georgia Institute of Technology in 1998. Since 1998, he has been with the Department of Electrical and Computer Engineering at Texas A&M University, where he is currently the Eric D. Rubin Professor. His research interests are broadly in coding theory, information theory, and signal processing, with applications to wireless networks, data storage, and data science. His current research interests are in the design of uncoordinated multiple access schemes, coding for distributed computing, exploring connections between sparse signal recovery and coding theory, and analyzing data defined on graphs. He is passionate about technology-enabled teaching and innovative pedagogical approaches. He was the recipient of the NSF CAREER Award in 2001. He also received a 2006 best paper award from the IEEE technical committee for signal processing and a 2018 best paper award at the Conference on Vehicular Electronics and Safety. He served as an Associate Editor for coding techniques for the IEEE Transactions on Information Theory from 2016 to 2019. He served as the Area Editor (and as an Editor) for the coding theory and applications area of the IEEE Transactions on Communications from 2007 until 2011. In 2014, he received the Professional Progress in Engineering Award, given to an outstanding alumnus of Iowa State University. He was elected a Fellow of the IEEE for contributions to coding for wireless communications and data storage.
Abstract: Sparsity has played a key role in the design and analysis of several new algorithms in the recent past, with applications ranging from statistical signal processing to wireless communication systems and networks. This is evinced by a growing collection of related computer programs available through software libraries. Yet, the sheer size of some engineering problems renders the straightforward application of standard solvers impractical. This shortcoming forms the impetus behind the development of coded compressed sensing (CCS), an algorithmic framework tailored to sparse recovery in exceedingly large dimensional spaces. This novel paradigm adopts a divide-and-conquer approach to break large sparse recovery tasks into sub-components whose dimensions are amenable to standard compressed sensing solvers and related iterative methods. The tutorial will establish CCS as a conceptual architecture for problems in very large spaces and survey promising extensions and alternatives. The presentation will include pertinent notions from signal processing and coding theory, with a historical perspective on the rapid evolution of this topic. The discussion will include tree codes, iterative methods, and the role of approximate message passing in producing pragmatic solutions with manageable complexity. It will also cover the state-of-the-art in this research area, ongoing developments from various groups, and open challenges.
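To make the divide-and-conquer idea concrete, here is a toy, purely illustrative sketch (the function names and the 8-bit parity size are our own choices, not fixed by the CCS literature): a long message is split into fragments, each later slot carries parity checks of the previous fragment (a minimal "tree code"), and the decoder stitches the per-slot candidate lists back together by parity consistency.

```python
import random

R = 8  # number of parity bits linking consecutive fragments (illustrative choice)

def parity_bits(frag, seed=1):
    """R pseudo-random linear parity checks of a fragment (a toy tree code)."""
    rng = random.Random(seed)
    return tuple(sum(b for b in frag if rng.random() < 0.5) % 2
                 for _ in range(R))

def encode(fragments):
    """Slot t carries (data_t, parity of data_{t-1}); slot 0 carries no parity."""
    tx = [(fragments[0], ())]
    for prev, cur in zip(fragments, fragments[1:]):
        tx.append((cur, parity_bits(prev)))
    return tx

def stitch(slots):
    """Chain fragments across slots by parity consistency.
    slots[t] is the unordered candidate list a per-slot CS solver would return;
    for simplicity the first slot is assumed already resolved."""
    chain = [slots[0][0][0]]
    for cands in slots[1:]:
        want = parity_bits(chain[-1])          # parity the next slot must carry
        chain.append(next(d for d, p in cands if p == want))
    return chain
```

In the real CCS framework each slot's candidate list comes from a compressed sensing solver over a much smaller space, and the parity structure is designed so that wrong stitchings are eliminated with high probability.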
Mohammad Emtiyaz Khan (also known as Emti) is a team leader at the RIKEN center for Advanced Intelligence Project (AIP) in Tokyo where he leads the Approximate Bayesian Inference Team. He is also a visiting professor at the Tokyo University of Agriculture and Technology (TUAT). Previously, he was a postdoc and then a scientist at Ecole Polytechnique Fédérale de Lausanne (EPFL), where he also taught two large machine learning courses and received a teaching award. He finished his PhD in machine learning from University of British Columbia in 2012. The main goal of Emti’s research is to understand the principles of learning from data and use them to develop algorithms that can learn like living beings. For the past 10 years, his work has focused on developing Bayesian methods that could lead to such fundamental principles. The approximate Bayesian inference team now continues to use these principles, as well as derive new ones, to solve real-world problems.
Abstract: Deep learning and Bayesian learning are considered two entirely different fields often used in complementary settings. It is clear that combining ideas from the two fields would be beneficial, but how can we achieve this given their fundamental differences?
This tutorial will introduce modern Bayesian principles to bridge this gap. Using these principles, we can derive a range of learning algorithms as special cases, e.g., from classical algorithms such as linear regression and forward-backward algorithms, to modern deep-learning algorithms such as SGD, RMSprop, and Adam. This view then enables new ways to improve aspects of deep learning, e.g., with uncertainty, robustness, and interpretation. It also enables the design of new methods to tackle challenging problems, such as those arising in active learning, continual learning, reinforcement learning, etc.
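One compact way to state such a principle (our paraphrase, with symbols that are assumptions rather than definitions from the abstract): pick an exponential-family posterior approximation $q_\lambda(\theta)$ with natural parameter $\lambda$ and update it by a natural-gradient step on a variational objective,

```latex
\lambda_{t+1} = \lambda_t - \rho_t\, \widetilde{\nabla}_{\lambda}
\Big( \mathbb{E}_{q_\lambda}\!\big[\bar{\ell}(\theta)\big] - \mathcal{H}(q_\lambda) \Big),
```

where $\bar{\ell}$ is the training loss, $\mathcal{H}$ is the entropy of $q_\lambda$, $\rho_t$ is a step size, and $\widetilde{\nabla}$ denotes the natural gradient. Restricting $q_\lambda$ to a Gaussian with fixed covariance recovers SGD-like updates, while learning a diagonal covariance yields RMSprop/Adam-like updates.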
Overall, our goal is to bring Bayesians and deep-learners closer than ever before, and motivate them to work together to solve challenging real-world problems by combining their strengths.
Geert Leus received the M.Sc. and Ph.D. degrees in Electrical Engineering from KU Leuven, Belgium, in June 1996 and May 2000, respectively. He is now an "Antoni van Leeuwenhoek" Full Professor at the Faculty of Electrical Engineering, Mathematics and Computer Science of the Delft University of Technology, The Netherlands. His research interests are in the broad area of signal processing, with a specific focus on wireless communications, array processing, sensor networks, and graph signal processing. Geert Leus received a 2002 IEEE Signal Processing Society Young Author Best Paper Award and a 2005 IEEE Signal Processing Society Best Paper Award. He is a Fellow of the IEEE and a Fellow of EURASIP. He was a Member-at-Large of the Board of Governors of the IEEE Signal Processing Society, the Chair of the IEEE Signal Processing for Communications and Networking Technical Committee, a Member of the IEEE Sensor Array and Multichannel Technical Committee, and the Editor-in-Chief of the EURASIP Journal on Advances in Signal Processing. He was also on the Editorial Boards of the IEEE Transactions on Signal Processing, the IEEE Transactions on Wireless Communications, the IEEE Signal Processing Letters, and the EURASIP Journal on Advances in Signal Processing. Currently, he is the Chair of the EURASIP Technical Area Committee on Signal Processing for Multisensor Systems, a Member of the IEEE Signal Processing Theory and Methods Technical Committee, a Member of the IEEE Big Data Special Interest Group, an Associate Editor of Foundations and Trends in Signal Processing, and the Editor-in-Chief of EURASIP Signal Processing.
Elvin Isufi received the Ph.D. degree in Signal Processing from the Delft University of Technology, The Netherlands, in 2019. He received the B.Sc. and M.Sc. (cum laude) degrees in Electrical Engineering from the University of Perugia, Italy, in 2012 and 2014, respectively. Since July 2019, he has been an Assistant Professor in the Multimedia Computing Group at the Faculty of Electrical Engineering, Mathematics and Computer Science of the Delft University of Technology, The Netherlands. From December 2018 to July 2019, he was a Postdoctoral Researcher in the Department of Electrical and Systems Engineering of the University of Pennsylvania, USA. He was also a Visiting Researcher in the LTS2 laboratory of EPFL in 2018; in the Department of Engineering of the University of Perugia in 2017; and in the Circuits and Systems Group of the Delft University of Technology in 2013. His research interests lie at the intersection of signal processing, mathematical modeling, machine learning, and network theory, with applications in recommender systems, biological networks, and sensor networks. Elvin Isufi held the AdisuPG scholarship for an uninterrupted period of five years (2008-2013) and received the Student of Excellence Award 2013 from the University of Perugia, Italy. He is also a co-recipient of the IEEE CAMSAP 2017 Best Student Paper Award. He is currently on the organizing committee of EURASIP EUSIPCO 2020 (Publicity Chair).
Mario Coutino received the M.Sc. (cum laude) degree in Electrical Engineering from the Delft University of Technology, Delft, The Netherlands, in 2016. Since October 2016, he has been working toward the Ph.D. degree in signal processing at the Delft University of Technology. He was with the R&D groups of Thales Nederland (NL) during 2015 and Bang & Olufsen (DK) during 2015-2016, working on radar and room acoustics, respectively. He was a guest researcher at RIKEN AIP (JP) during 2018, working on theoretical computer science and graph theory. In 2019, he was a visiting researcher at the Digital Technology Center of the University of Minnesota (US), working on network data modelling. His research interests include signal processing on networks, submodular and convex optimization, combinatorics, and numerical linear algebra. He was the recipient of the Best Student Paper Award for his publication at the CAMSAP 2017 conference in Curacao. Mario Coutino is a CONACYT (MX) researcher.
Abstract: Graph filters, direct analogues of time-domain filters but intended for signals defined on graphs, are one of the cornerstones of the field of graph signal processing. In this talk, we give an overview of the graph filtering problem. More specifically, we look at the families of finite impulse response (FIR) and infinite impulse response (IIR) graph filters and show how they can be implemented in a distributed manner. To further limit the communication and computational complexity of such a distributed implementation, we also generalize state-of-the-art distributed graph filters to filters whose weights depend on the nodes sharing information. These so-called edge-variant graph filters yield significant benefits in terms of filter order reduction and can be used to solve specific distributed optimization problems with extremely fast convergence. Finally, we will review how graph filters can be used in deep learning applications involving data sets with an irregular structure. Different types of graph filters can be used in the convolution step of graph convolutional networks, leading to different trade-offs between performance and complexity. The numerical results presented in this talk illustrate the potential of graph filters in distributed optimization and deep learning.
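A minimal sketch of the FIR case (our own toy code; the choice of the adjacency matrix as the graph shift operator S is an assumption): the filter output y = Σ_k h_k S^k x is built by repeatedly applying S, and since each multiplication by S only mixes values of neighboring nodes, the same recursion runs distributedly with one neighbor-exchange round per filter tap.

```python
def shift(S, z):
    """One graph shift z -> S z: each node combines its neighbors' values."""
    n = len(z)
    return [sum(S[i][j] * z[j] for j in range(n)) for i in range(n)]

def fir_graph_filter(S, x, h):
    """FIR graph filter y = sum_k h[k] * S^k x, computed tap by tap.
    Each iteration costs exactly one round of neighbor communication."""
    y = [h[0] * xi for xi in x]     # k = 0 term: no communication needed
    z = list(x)
    for hk in h[1:]:
        z = shift(S, z)             # one neighbor-exchange round
        y = [yi + hk * zi for yi, zi in zip(y, z)]
    return y
```

For example, on a 3-node path graph with taps h = [0.5, 0.5], each node outputs the average of its own value and the sum of its neighbors' values, using a single communication round.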
Vinod Sharma received his B.Tech. in Electrical Engineering in 1978 from the Indian Institute of Technology, Delhi, India, and his Ph.D. in ECE from Carnegie Mellon University in 1984. He worked at Northeastern University, Boston, and the University of California, Los Angeles, before joining the Indian Institute of Science, Bangalore, in 1988, where he is currently a TATACHEM Chair Professor. He won the Prof. Rustom Choksi Award for excellence in research in 2017 and the ACCS-CDAC Award in 2012. He is an elected Fellow of the Indian National Academy of Engineering (INAE). His research interests are in wireless communications, information theory, and communication networks.
Arun Padakandla received the M.Sc. degree in Electrical Communication Engineering from the Indian Institute of Science, Bengaluru, in 2008. Following a brief stint as a Research Engineer at Ericsson Research, San Jose, he joined the NSF Center for Science of Information as a Post-Doctoral Research Fellow in 2015. Since 2018, he has been on the faculty of the Department of Electrical Engineering and Computer Science at the University of Tennessee, Knoxville. His research interests lie in classical and quantum information theory.
Abstract: Our current design of efficient computing devices is based on the Church-Turing thesis, which states that a probabilistic Turing machine serves as a universal model of computation. Enquiring whether the laws of physics can be used to argue the universality of a model of computation, Deutsch studied computing devices based on the axioms of quantum mechanics. His ingenious algorithm, followed by research in the ensuing decade, culminated in Shor's 1994 demonstration that two enormously important problems – factoring an integer, and the discrete logarithm problem – can be solved efficiently on a quantum computer, i.e., a computing device based on the axioms of quantum mechanics. Fuelled by these fundamental results, subsequent research indicates that we may be at the cusp of a new age in computing and information processing. This tutorial aims to introduce the basics of quantum computing and information to a broad audience. The tutorial is self-contained, and no prerequisites in physics are necessary to follow the contents.
The tutorial is divided into three parts. In the first part, we introduce the axioms of quantum mechanics and present simple protocols and quantum algorithms that demonstrate the power of quantum computers. The second part of the tutorial focuses on quantum error correction. The elements involved in quantum computation are prone to errors and faults, and fault-tolerant quantum computation relies on robust quantum error correction techniques. In this second part, we introduce a few quantum error correction techniques and provide pointers to work in this active research area. In the third part, we present the basic elements of quantum information theory. We introduce the notions of von Neumann entropy and quantum typicality. We leverage these tools to illustrate two fundamental coding theorems: Schumacher compression and the Holevo-Schumacher-Westmoreland capacity of classical-quantum channels.
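As a small, self-written illustration of the von Neumann entropy mentioned above (the function name and the single-qubit restriction are ours): S(rho) = -sum_i p_i log2 p_i, where the p_i are the eigenvalues of the density matrix rho. For a single qubit the eigenvalues have a closed form, so no linear algebra library is needed.

```python
import math

def qubit_entropy(rho):
    """Von Neumann entropy (in bits) of a 2x2 density matrix rho.
    Uses the closed-form eigenvalues of a Hermitian 2x2 matrix:
    (a + d)/2 +/- sqrt(((a - d)/2)^2 + |b|^2)."""
    a, b = rho[0][0], rho[0][1]
    d = rho[1][1]                      # rho[1][0] = conj(b) by Hermiticity
    r = math.sqrt(((a - d) / 2) ** 2 + abs(b) ** 2)
    eigs = [(a + d) / 2 + r, (a + d) / 2 - r]
    # -sum p log2 p, with the convention 0 log 0 = 0
    return -sum(p * math.log2(p) for p in eigs if p > 1e-12)
```

A pure state such as |0><0| has zero entropy, the maximally mixed state I/2 has exactly one bit, and the pure superposition |+><+| again has zero entropy, since purity, not classical uncertainty of outcomes, is what the von Neumann entropy measures.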