Tutorials are free for all registered participants.
VENUE | SESSION CHAIR | TITLE | SPEAKER(S) |
---|---|---|---|
Golden Jubilee Hall, ECE | Navin Kashyap | T1: Approximate Message Passing Algorithms for High-Dimensional Estimation | Cynthia Rush (Columbia University) |
MP 20 | Kunal Narayan Chaudhury | T3: The Unlimited Sensing Framework: Theory, Algorithms, Hardware and Applications | Ayush Bhandari (Imperial College London) |
VENUE | SESSION CHAIR | TITLE | SPEAKER(S) |
---|---|---|---|
Golden Jubilee Hall, ECE | Gugan Thoppe | T2: Theory and Experiments for Peer Review and Other Distributed Human Evaluations | Nihar Shah (Carnegie Mellon University) |
MP 20 | Neelesh Mehta | T4: IEEE 802.11ad Based Intelligent Joint Radar Communication Transceiver: Design, Prototype and Performance Analysis | Shobha Sundar Ram (Indraprastha Institute of Information Technology Delhi), Sumit J. Darak (Indraprastha Institute of Information Technology Delhi) |
ECE 1.08 | E S Shivaleela | T5: Applying Artificial Intelligence Effectively to Signal Processing Applications | Anand Mukhopadhyay (MathWorks), Nikita Pinto (MathWorks), Praful Pai (MathWorks) |
Cynthia Rush is an Associate Professor of Statistics at Columbia University. She received a Ph.D. and an M.A. in Statistics from Yale University in 2016 and 2011, respectively, and completed her undergraduate studies at the University of North Carolina at Chapel Hill, where she obtained a B.S. in Mathematics in 2010. She received an NSF CRII award in 2019, was a finalist for the 2016 IEEE Jack K. Wolf ISIT Student Paper Award, was an NTT Research Fellow at the Simons Institute for the Theory of Computing for the program on Probability, Geometry, and Computation in High Dimensions in Fall 2020, and was a Google Research Fellow at the Simons Institute for the Theory of Computing for the program on Computational Complexity of Statistical Inference in Fall 2021. Her research focuses on message passing algorithms, statistical robustness, and applications to wireless communications.
Abstract: Approximate Message Passing (AMP) refers to a class of iterative algorithms that have been successfully applied to a number of high-dimensional statistical estimation problems like linear regression, generalized linear models, and low-rank matrix estimation, and a variety of engineering and computer science applications such as imaging, communications, and deep learning. AMP algorithms have two features that make them particularly attractive: they can easily be tailored to take advantage of prior information on the structure of the signal, such as sparsity, and under suitable assumptions on a design matrix, AMP theory provides precise asymptotic guarantees for statistical procedures in the high-dimensional regime. In this tutorial, I will present the main ideas of AMP from a statistical perspective to illustrate the power and flexibility of the AMP framework. We will discuss the algorithms' use for characterizing the exact performance of a large class of statistical estimators in the high-dimensional proportional asymptotic regime and we will discuss the algorithms' applications in wireless communication.
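As a hedged illustration of the kind of iteration the abstract describes (this sketch is mine, not part of the tutorial materials), here is a minimal NumPy version of AMP for noiseless sparse linear regression with a soft-thresholding denoiser. The function names, the adaptive threshold rule `alpha * ||z|| / sqrt(n)`, and all parameter values below are illustrative choices, not prescriptions from the speaker:

```python
import numpy as np

def soft_threshold(v, theta):
    """Componentwise soft-thresholding denoiser eta(v; theta)."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def amp_sparse_regression(y, A, num_iters=50, alpha=2.0):
    """AMP for y = A x with sparse x; alpha scales the adaptive threshold."""
    n, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(num_iters):
        tau = alpha * np.linalg.norm(z) / np.sqrt(n)  # estimate of effective noise level
        pseudo = x + A.T @ z                          # behaves like x plus ~Gaussian noise
        x = soft_threshold(pseudo, tau)
        # Onsager correction term: keeps the effective noise in `pseudo` Gaussian
        z = y - A @ x + (z / n) * np.count_nonzero(x)
    return x

# Illustrative use: recover a 25-sparse signal from 250 Gaussian measurements.
rng = np.random.default_rng(0)
n, N, k = 250, 500, 25
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.choice([-1.0, 1.0], k)
y = A @ x0
x_hat = amp_sparse_regression(y, A)
```

The Onsager term `(z / n) * count_nonzero(x)` is the average derivative of the soft-thresholding denoiser; it is exactly this correction that distinguishes AMP from plain iterative thresholding and makes the precise asymptotic analysis mentioned above possible.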
Nihar B. Shah is an Associate Professor in the Machine Learning and Computer Science departments at Carnegie Mellon University (CMU) and a proud alumnus of the Indian Institute of Science (IISc). His research broadly lies in the fields of statistics, machine learning, information theory, and game theory, with a focus on applications to learning from people. He is a recipient of a JP Morgan Faculty Award 2022, a Google Research Scholar Award 2021, an NSF CAREER Award 2020-25, the Best Paper runner-up at HCOMP 2022, the 2017 David J. Sakrison memorial prize at UC Berkeley for a "truly outstanding and innovative PhD thesis", the Microsoft Research PhD Fellowship 2014-16, the Berkeley Fellowship 2011-13, the IEEE Data Storage Best Paper and Best Student Paper Awards for the years 2011/2012, and the SVC Aiya Medal 2010 from IISc. He supervised the Best Student Paper at AAMAS 2019 and has also received multiple workshop paper awards.
Abstract: Peer review is the backbone of scientific research. It has a strong influence on the progress of research, the careers of researchers, the public perception of science, and the allocation of billions of dollars of grants. Despite its importance, peer review faces numerous challenges, including inappropriately chosen reviewers, fraud, miscalibration, and subjectivity. Similar problems arise in many other applications involving distributed evaluations, such as admissions and hiring. In this tutorial, we will discuss this important application domain from two perspectives. We will first obtain insights into the process via a number of carefully designed experiments that are fun to know about. We will then look at it through an algorithmic and theoretical lens, discussing mathematical formulations of several of these problems, algorithms that are deployed today for the review of thousands of papers, and associated theoretical guarantees. We will also spend time discussing open problems that, if solved, could be highly impactful. Much of the content presented will also carry over to other applications involving distributed human evaluations, such as hiring, admissions, and crowdsourcing. No prior background will be assumed other than basic mathematical maturity. The material presented is based on the survey http://www.cs.cmu.edu/~nihars/preprints/SurveyPeerReview.pdf
Suggested Reading Material and other Resources
The tutorial will be entirely self-contained and requires no advance reading. That said, here are two resources that participants may consult before, during, or after the tutorial:
1. Survey article: https://www.cs.cmu.edu/~nihars/preprints/SurveyPeerReview.pdf
2. Slides: https://cs.cmu.edu/~nihars/tutorials/SPCOM2024/SPCOMtutorial2024slides.pdf
Ayush Bhandari received the Ph.D. degree from the Massachusetts Institute of Technology (MIT), Cambridge, MA, USA, in 2018, for his work on computational sensing and imaging, which, in part, led to the co-authored open-access book Computational Imaging (MIT Press). He is currently a faculty member in the Department of Electrical and Electronic Engineering, Imperial College London, U.K. He has held research positions at INRIA (Rennes), France; Nanyang Technological University, Singapore; the Chinese University of Hong Kong; and École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, among other institutions. He was appointed the August-Wilhelm Scheer Visiting Professor (Department of Mathematics) by the Technical University of Munich in 2019.
He has served as an invited/keynote speaker at several venues, in addition to leading tutorials at flagship conferences such as ACM Siggraph (2014, 2015) and IEEE ICCV (2015). Some aspects of his work have led to new sensing and imaging modalities which have been widely covered in press and media (e.g. BBC news). Applied aspects of his research have led to more than 10 US patents. His scientific contributions have led to numerous prizes, most recently, the Best Paper Award at IEEE ICCP 2020 (Intl. Conf. on Computational Photography) and the Best Student Paper Award (senior co-author) at IEEE ICASSP 2019 (Intl. Conf. on Acoustics, Speech and Signal Processing). In 2020, his doctoral work was awarded the IEEE Best PhD Dissertation Award from the Signal Processing Society. In 2021, he received the President's Medal for Outstanding Early Career Researcher at Imperial College London. In 2023, he was the recipient of the Frontiers in Science Award at the International Congress of Basic Science (ICBS) for his work on Unlimited Sensing.
Abstract: Digital data capture is the backbone of all modern-day systems, and the "Digital Revolution" has aptly been termed the Third Industrial Revolution. Underpinning digital representation are the Shannon-Nyquist sampling theorem and more recent developments such as compressive sensing approaches. The fact that there is a physical limit to the amplitudes sensors can measure poses a fundamental bottleneck when it comes to leveraging the performance guaranteed by recovery algorithms. In practice, whenever a physical signal exceeds the maximum recordable range, the sensor saturates, resulting in permanent information loss. Notable instances include dosimeter saturation during the Chernobyl reactor accident, which led to underreported radiation levels, and self-driving cars losing visual cues when emerging from tunnels into bright light.
To reconcile this gap between theory and practice, a computational sensing approach was recently introduced—the Unlimited Sensing framework (USF)—that is based on a co-design of hardware and algorithms. On the hardware front, the USF uses a radically different analog-to-digital converter (ADC) design, which allows for the ADCs to produce modulo or folded samples. On the algorithms front, new, mathematically guaranteed recovery strategies perform reconstruction.
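The folding idea behind the modulo ADC can be illustrated with a deliberately simplified sketch (the assumptions here are mine, and this is not the USF hardware pipeline or its guaranteed recovery algorithms): an ideal modulo ADC folds amplitudes into [-λ, λ), and if consecutive true samples differ by less than λ and the first sample is unfolded, then folding the first-order differences of the folded samples recovers the true differences, which can be summed back up:

```python
import numpy as np

def fold(x, lam):
    """Ideal modulo ADC: fold amplitudes into [-lam, lam)."""
    return np.mod(x + lam, 2.0 * lam) - lam

def unfold(y, lam):
    """Recover the signal from folded samples, assuming consecutive true
    samples differ by less than lam and the first sample lies in range."""
    dy = np.diff(y)
    # Differences of the folded samples equal the true differences plus an
    # integer multiple of 2*lam, so folding them again recovers the truth.
    d_true = fold(dy, lam)
    return np.concatenate(([y[0]], y[0] + np.cumsum(d_true)))

# Illustrative use: a signal exceeding the ADC range [-1, 1) by a factor of 3.
t = np.linspace(0.0, 1.0, 400)
g = 3.0 * np.sin(2.0 * np.pi * t)   # true signal, amplitude 3
y = fold(g, 1.0)                    # modulo (folded) samples, all within range
g_hat = unfold(y, 1.0)              # reconstruction from folded samples
```

A conventional ADC with the same [-1, 1) range would simply clip this signal; the folded samples retain the information, at the price of the unfolding computation. The actual USF results use higher-order differences and give sampling-rate guarantees well beyond this toy condition.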
This tutorial aims to introduce this emerging field to both beginners and seasoned experts. We start by laying down the fundamentals of time- and frequency-domain USF, establishing a sampling theorem parallel to the Shannon-Nyquist criterion. Despite the non-linear nature of the sensing pipeline, the required sampling rate remains dictated by the signal bandwidth, laying a solid foundation for the discussion. We enhance the theoretical insights with a hardware demonstration of the modulo ADC in practice, motivating new problems.
Expanding on the foundational sampling theory, we explore variations applicable to different signal types—smooth, sparse, and parametric functions—and sampling architectures, including One-Bit, Event-Triggered, and sub-Nyquist approaches.
By integrating USF with practical, interdisciplinary applications, we survey its impact across several domains. This encompasses computational imaging, where USF enables breakthroughs such as single-shot HDR imaging, time-of-flight imaging, and HDR computed tomography using the modulo Radon transform. Furthermore, we delve into USF's role in enhancing sensor array processing, including direction of arrival (DoA) estimation and radar systems, as well as its importance in digital communications, exemplified by Massive Multiple Input Multiple Output (MIMO) systems.
This exploration underscores USF's versatility and its potential to revolutionize various aspects of technology and research, while also providing clear directions for new research questions and applications.
Shobha Sundar Ram received her Bachelor of Technology in ECE from the University of Madras, India in 2004 and then her Master of Science and Ph.D. in electrical engineering from the University of Texas at Austin, USA in 2006 and 2009 respectively. She worked as a research and development electrical engineer at Baker Hughes Inc. USA from 2009 to 2013. She joined Indraprastha Institute of Information Technology as an Assistant Professor in 2013 where she is currently an Associate Professor in the Dept. of Electronics and Communications Engineering.
She is primarily engaged in research and education in the areas of radar signal processing and electromagnetic sensor design and modeling. She is a Senior Member of IEEE, an Associate Editor for the IEEE Transactions on Aerospace and Electronic Systems, a member of the IEEE Radar Systems Panel of the AESS, a member of the Synthetic Aperture Technical Working Group of the IEEE Signal Processing Society, a Guest Editor for IET Radar, Sonar & Navigation, and a Topics Editor for Frontiers in Signal Processing. She has served as a Technical Program Committee member for the 2016 IEEE Asia Pacific Microwave Conference, New Delhi, India; the 2015 IEEE Radar Conference; the 2020 IEEE Advanced Networks and Telecommunications Systems conference; and the 2017 IEEE MTT-S International Microwave & RF Conference, Ahmedabad, India. Over the last decade, she has received research grants from Bharat Electronics Ltd., Continental Inc., the Air Force Research Lab USA, and the Ministry of Electronics and Information Technology of the Government of India. She won student paper awards at the IEEE Radar Conference in 2008 and 2009, and best paper awards for co-authored papers at the International Workshop on Non-intrusive Load Monitoring 2016, the IEEE International Microwave and RF Conference 2019, and the IEEE Radar Conference 2020. She won the 2022 Qualcomm Innovation Fellowship India award, and Teaching Excellence Awards, based on student feedback, in 2016 and 2018.
Sumit J. Darak received a bachelor's degree in Electronics and Communication Engineering from Pune University, India, in 2007 and a Ph.D. from the School of Computer Engineering, Nanyang Technological University (NTU), Singapore, in 2013. He is currently an Associate Professor and Dean of Academic Affairs (DoAA) at IIIT-Delhi, India. Before joining IIIT-Delhi, he worked as a Post-Doctoral Researcher at CentraleSupélec, France, from March 2013 to November 2014. He worked as a 5G Consultant with VVDN Technologies, Chennai, India, from March 2019 to September 2021, and has been working as an SoC Consultant with Apexplus Technologies, Hyderabad, India, since June 2022. His current research interests include the design of efficient algorithms and their mapping to reconfigurable and intelligent architectures for wireless, radar, and artificial intelligence (AI) applications.
Dr. Darak is a recipient of the DST Inspire Faculty Award (2015-2020), the Best Demo Award at CROWNCOM 2016, the Second-Best Student Paper Award at the IEEE Digital Avionics Systems Conference (DASC) 2017, the Young Scientist Paper Award at URSI 2014 and 2017, the Second-Best Poster Award at COMSNETS 2019, the Best Paper Award at AIML Systems 2021, Design Contest Awards at VLSID 2022 and 2023, the IEEE APCCAS 2024 Design Contest Award, and the Best Paper Award at IET NCSIPA 2009. From industry, he has received the National Instruments (NI) Academic Research Grant in 2017 and 2018 and the Qualcomm Innovation Fellowship in 2022 and 2023. From the government, he has received various research grants from DST, DRDO, and MeitY. At IIIT-Delhi, Dr. Darak received Teaching Excellence Awards in 2018, 2019, and 2020, along with Research Excellence Awards in 2021 and 2023.
Abstract: Millimeter wave communication will potentially form the backbone of vehicle-to-X (V2X) communications in next-generation intelligent transportation systems due to its high bandwidth. Existing sub-6 GHz V2X communication strategies, such as dedicated short-range communication services on IEEE 802.11p-based wireless technology, device-to-device (D2D-LTE V2X) communications, and cellular LTE-V2X communication modes, operate below 6 GHz and are hence restricted to data rates of tens of megabits per second with latencies of the order of a few milliseconds. Due to the high propagation loss at millimeter wave carrier frequencies, millimeter wave systems are meant to operate in short-range line-of-sight conditions with highly directional beams realized through beamforming. In high-mobility environments, rapid beam training results in considerable overhead and significant deterioration of latency. Alternatively, auxiliary sensors such as GPS or standalone radars can aid beam alignment of the communication system. However, the deployment of auxiliary sensors increases cost and complexity in terms of synchronization and data processing, and also poses challenges in terms of interference.
This talk presents an integrated sensing and communications (ISAC) framework based on the IEEE 802.11ad protocol to overcome these limitations. Augmenting radar functionality within the existing communication framework allows both systems to cohabit a common spectrum and share hardware resources, which reduces cost and complexity as well as mitigates interference. The talk will present recommendations for enabling the IEEE 802.11ad framework to support radar-based localization of mobile users while maintaining compatibility with existing communication protocol requirements, along with signal processing and online learning based algorithms for radar-based detection and localization of mobile users and for estimation of channel conditions in both static and quasi-static scenarios. Finally, the complete end-to-end software prototype of the baseband, digital, and analog front-end of the dual-function transceiver at the base station; the hardware prototype of the ISAC system via hardware-software co-design and fixed-point analysis; and the evaluation of the improvement in communication link metrics (throughput, bit error rate) due to ISAC will be demonstrated.
Anand Mukhopadhyay is a Senior Engineer on the Education Team, based out of MathWorks Bangalore. He works with institutes and educators across India to support their curriculum and research activities. Prior to joining MathWorks, he received his Ph.D. on neuromorphic computing applications from IIT Kharagpur in 2021, an M.Tech. in VLSI design and embedded systems from NIT Rourkela in 2014, and a B.Tech. in Electronics and Communication Engineering from Techno India College of Technology in 2012. He has authored and co-authored more than 15 research articles in journals and conferences. His interests revolve around digital VLSI and artificial intelligence applications.
Nikita Pinto is the Senior Artificial Intelligence Academic Liaison for Asia-Pacific at MathWorks. She is interested in understanding AI techniques and applying them to scientific and engineering problems. Nikita has a deep-seated fascination with the rapidly evolving landscape of Generative AI and its potential to revolutionise research in engineering and science. She has a Master's degree in Ocean Engineering from IIT Madras, where she worked on applying statistical signal processing to underwater acoustics. She has worked on interdisciplinary projects with international collaborators, Government research labs, and independent researchers.
Praful Pai is a Manager, EECS, Signal Processing and Communications at MathWorks, makers of MATLAB & Simulink. He collaborates with educators and researchers at universities worldwide to make engineering education engaging using computational thinking, and accelerate research and development in signal processing and wireless communication with model-based design.
Abstract: Artificial Intelligence (AI) offers new opportunities to improve signal processing systems for various real-world signals, such as audio, biomedical, radar, and wireless. Building AI systems poses challenges that range from managing large multimodal datasets, preprocessing data, and extracting features, through creating models, comparing performance across multiple models, and fine-tuning the chosen model, to deploying models in real-life applications.
MATLAB helps address these issues through interactive data preprocessing, feature engineering, AI model building, and deployment of AI systems on the edge or in the cloud. In this workshop, we will discuss how to address these challenges effectively using MATLAB and Simulink.
The session will comprise four sections, each with a hands-on component:
1. Feature Engineering and Extraction: Exploring wavelet and time-frequency transformations and their effective use in AI applications.
2. Application of Long Short-Term Memory (LSTM) Networks: Solving multi-class signal classification problems using LSTM networks.
3. Integration of MATLAB/Simulink and Python-based Frameworks: Combining MATLAB/Simulink with frameworks such as TensorFlow and PyTorch.
4. Open Discussion: Addressing research challenges and discussing topics raised by the audience.
At the end of this workshop, participants can expect to be familiar with AI workflows for signal processing applications. The MathWorks team will also provide participants with cloud-based GPU access for the duration of the workshop.
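Although the workshop itself uses MATLAB and Simulink, the kind of time-frequency feature extraction covered in Section 1 can be sketched in plain NumPy for illustration; `stft_features` below is a hypothetical helper of my own, not a MathWorks API:

```python
import numpy as np

def stft_features(x, win_len=64, hop=32):
    """Magnitude short-time Fourier transform: a basic time-frequency
    feature map of shape (num_frames, win_len // 2 + 1)."""
    win = np.hanning(win_len)
    starts = range(0, len(x) - win_len + 1, hop)
    frames = np.stack([x[s:s + win_len] * win for s in starts])
    return np.abs(np.fft.rfft(frames, axis=1))

# Illustrative use: a tone with exactly 8 cycles per 64-sample window
# concentrates its energy in frequency bin 8 of every frame.
n = np.arange(512)
tone = np.sin(2.0 * np.pi * 8.0 * n / 64.0)
F = stft_features(tone)
```

Feature maps of this kind (or their wavelet counterparts) are what get fed to the classification networks in Section 2, whether built in MATLAB or in the Python-based frameworks of Section 3.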