SPCOM2020

Welcome to SPCOM 2020!

INSTITUTIONAL SPONSOR

IISc

TECHNICAL CO-SPONSORS

IEEE

DIAMOND SPONSORS

Qualcomm
Cisco

GOLD SPONSORS

Texas Instruments
Amazon

SILVER SPONSOR

Tejas Networks

AWARDS SPONSOR

Springer Nature

AUTONOMOUS NAVIGATION

Organizers: Suresh Sundaram, Department of Aerospace Engineering, Indian Institute of Science and
Raghu Krishnapuram, Robert Bosch Centre for Cyber-Physical Systems, Indian Institute of Science
 Suresh Sundaram

Suresh Sundaram has been an Associate Professor in the Department of Aerospace Engineering, Indian Institute of Science, Bangalore, India, since 2018. He leads the WIPRO-IISc joint research initiative on Artificial Intelligence-driven Autonomous Systems. Prior to his current appointment, he was a faculty member in the School of Computer Science and Engineering, Nanyang Technological University (NTU), Singapore, from 2010 to 2018. During that period, he served as a Cluster Director at the Energy Research Institute @ NTU and as Deputy Director of the Robotics Research Centre, NTU.

Dr. Suresh received his Bachelor of Engineering from Bharathiar University in 1995, and his Master of Engineering (2001) and Ph.D. (2005) from the Indian Institute of Science. He has served as an Associate Editor of IEEE Transactions on Neural Networks and Learning Systems, Swarm and Evolutionary Computation, and Evolving Systems. He was the General Chair of the IEEE Symposium Series on Computational Intelligence in 2018 and its Publication Chair in 2013. He has published a book and more than 300 articles. His research interests include intelligent flight control, autonomous systems, applied game theory, and artificial intelligence.

 Raghu Krishnapuram

Raghu Krishnapuram is currently a Distinguished Member of Technical Staff at the Robert Bosch Centre for Cyber-Physical Systems, IISc, Bangalore. He received his Ph.D. in computer engineering from Carnegie Mellon University in 1987. Raghu worked in academia in the US until 2000, initially at the University of Missouri, Columbia, and later at the Colorado School of Mines, Golden. From 2000 to 2013, he held various technical leadership positions at IBM Research India, and during 2014-15 he was at the IBM T. J. Watson Center, New York, where he was a technical leader for cognitive computing research. Raghu was Program Manager, Financial Services, at Xerox Research Centre – India during 2015-16, and was Head of the R&D and IP Cell at Ramaiah Institute of Technology, Bangalore, during 2016-17.

Raghu’s research encompasses many aspects of machine learning, computer vision, text analytics, artificial intelligence, and data mining. He has published about 170 papers in journals and conferences. He has been recognized as a Master Inventor by IBM and has filed over 40 patent disclosures with the US Patent Office. He has also served on the Technology Council of the IBM Academy of Technology. Raghu is a Fellow of the IEEE and of the Indian National Academy of Engineering (INAE).

SPEAKER | TITLE OF THE TALK
P. B. Sujit | Cooperative UAV-UGV Team for Coverage and Monitoring Applications
Savitha Ramasamy | Continual Learning for Autonomous Vehicles
Subramanian Anbumani | IDD - A Driving Dataset for and from India
Subhashis Banerjee | A Monocular SLAM based Joint 3D-2D Method for Free Space Detection on Roads
Vikas Sindhwani | Flying, Driving and Walking Robots that Learn from Experience
Kumar Vijay Mishra | Automotive Radar Detection and Imaging with MIMO Signal Processing

Cooperative UAV-UGV Team for Coverage and Monitoring Applications

P. B. Sujit, Indian Institute of Science Education and Research (IISER) Bhopal
P. B. Sujit

Sujit P. B. is an Associate Professor in the Department of Electrical Engineering and Computer Science at IISER Bhopal. He received his bachelor's degree from Bangalore University, India, his master's degree from Visvesvaraya Technological University, India, and his Ph.D. from the Indian Institute of Science, Bangalore, India. His research interests include unmanned aerial and underwater vehicles, multi-robot systems, and human-robot interaction.

Abstract: Monitoring large habitats is essential for security and biodiversity maintenance. Performing these operations manually is tedious, time-consuming, and challenging in remote regions. Unmanned aerial vehicles (UAVs) can be used for such operations; however, additional support is required to monitor unreachable regions and to meet the UAVs' limited fuel constraints. In this talk, I will discuss how a team of UAVs and unmanned ground vehicles (UGVs) can be used to monitor large areas. I will present an optimal route planning approach for covering large areas with a UAV-UGV team while meeting refueling constraints, and then proceed to using multiple UAVs and UGVs to monitor 1.5D and 2.5D terrains. Experimental demonstrations will be shown as proof of concept.

Continual Learning for Autonomous Vehicles

Savitha Ramasamy, Institute for Infocomm Research, Singapore
Savitha Ramasamy

Savitha Ramasamy is a Principal Investigating Scientist at the Institute for Infocomm Research (I2R), A*STAR, Singapore. She leads a team on lifelong learning for AI at I2R, developing robust AI models for non-stationary environments. She also leads the Joint Lab with Singapore Airlines (featured on the Discovery Channel, 2019), developing real-time AI solutions for airline engineering and operations. She graduated from Nanyang Technological University, Singapore, where she was a postdoctoral researcher until 2013. She has published about 50 papers in international journals and conferences, and her Ph.D. thesis was published as a research monograph by Springer (Berlin, Heidelberg) in 2013. Her current research interests include lifelong learning AI and trustable, robust AI that can be applied across a wide range of applications including AME, healthcare, and sustainability.

Abstract: Autonomous vehicles must be capable of understanding their environment in great detail in order to make driving decisions. Such an understanding stems from sensing the environment and having a robust AI model to derive inferences, which necessitates large volumes of data to train the model. Even with large volumes of initial training data, the data may not be all-encompassing, and an autonomous vehicle may often encounter scenarios it has never seen before. This requires learning algorithms that can learn on the fly and adapt the representations of the AI model as data evolves. Lifelong continual learning approaches enable such adaptation under new scenarios while remembering data from the past. In this talk, I will give a brief introduction to lifelong continual learning approaches and their strategies for continuous adaptation without catastrophic forgetting.

IDD - A Driving Dataset for and from India

Subramanian Anbumani, INTEL - India
Subramanian Anbumani

Anbumani (Anbu) Subramanian is a Principal Engineer at Intel. His recent work includes the creation of IDD, an open dataset of Indian driving conditions for research towards autonomous and assisted driving. Prior to Intel, he was with Hewlett-Packard Labs, where he worked on large-scale crowd-cloud systems for visual understanding and on depth-camera-based multimodal interaction systems. In 2005, as a postdoc at Virginia Tech, he developed an omnidirectional vision system for an autonomous surface vehicle, the first of its kind in the world. He started his career as a Software Engineer at IBM in 1996. He received his M.S. and Ph.D. from Virginia Tech, specializing in computer vision, and a B.E. from Bharathiar University. He has several peer-reviewed publications and 10 patents, and is a Senior Member of the IEEE. His expertise and interests include computer vision, machine learning, and human-computer interaction.

Abstract: The India Driving Dataset (IDD) is the world's first open dataset of driving conditions from India, created for research in autonomous and assisted driving. It was released at the first AutoNUE workshop at ECCV 2018, along with a global data challenge. IDD contains a vast collection of images from diverse road conditions in India, with ground-truth annotations of road objects and pixel-level image segmentation. The collection was expanded to include a multimodal dataset, released at the second AutoNUE workshop at ICCV 2019. A tiny, lightweight dataset, IDD-Lite, was created for experiments on personal compute infrastructure, and the most recent addition, IDD-Temporal, contains image frames for temporal analysis. IDD has grown in popularity since its release in 2018 and now has thousands of users across the world. It can be downloaded from http://idd.insaan.iiit.ac.in/. This talk will cover the creation of IDD, its journey, leaderboard results from global teams, and some possible applications.

A Monocular SLAM based Joint 3D-2D Method for Free Space Detection on Roads

Subhashis Banerjee, Indian Institute of Technology (IIT) Delhi
Subhashis Banerjee

Subhashis Banerjee is a Professor in the Department of Computer Science and Engineering at IIT Delhi, where he currently holds the Ministry of Urban Development Chair. Previously, he held the Microsoft and Naren Gupta chair professorships at IIT Delhi. He was Head of the Department of Computer Science at IIT Delhi from 2004 to 2007 and Head of the Computer Centre from 2009 to 2014. Subhashis is also associated with the School of Public Policy at IIT Delhi.

Subhashis' primary areas of research are computer vision and machine learning, with a special emphasis on geometric algorithms. He has been on the editorial boards of the International Journal of Computer Vision and Computers and Graphics. He has also worked extensively on the design of computing and networking infrastructure and IT services, and developed the supercomputing infrastructure at IIT Delhi, the second largest in the country. He has been an academic visitor to Oxford University, the Indian Institute of Science, UIUC, the Hebrew University, EPFL, and Microsoft Research, among several others.

Recently he has also developed an interest in policy issues in digital identity, data and privacy protection, and fairness and reliability of machine learning algorithms.

Subhashis graduated in Electrical Engineering from Jadavpur University in 1982 and received his Master's and Ph.D. from the Indian Institute of Science in 1984 and 1989, respectively.

Abstract: In the first part of the talk, we will discuss some recent advances in algorithms and techniques for large-scale 3D reconstruction and model creation from collections of images and videos. In particular, we will discuss image view graphs and Lie-algebraic techniques for global motion averaging, and optimisation techniques for bundle adjustment and model creation. In the second part of the talk, we will present a monocular SLAM method based on the above techniques. We will also discuss an algorithm for joint 3D-2D segmentation and free-space detection on roads using the monocular SLAM method.

Flying, Driving and Walking Robots that Learn from Experience

Vikas Sindhwani, Robotics at Google
Vikas Sindhwani

Vikas Sindhwani is a Research Scientist on the Google Brain team in New York, where he leads a research group focused on solving a range of perception, learning, reasoning, and control problems arising in robotics. His interests are broadly in the mathematical foundations and end-to-end design aspects of building large-scale, robust machine intelligence systems. He has been a keynote speaker at the Workshop on the Algorithmic Foundations of Robotics (WAFR) and the Northeast Robotics Colloquium (NERC). He received the Best Paper Award at Uncertainty in Artificial Intelligence (UAI) 2013 and the IBM Pat Goldberg Memorial Award in 2014, and was a co-winner of the Knowledge Discovery and Data Mining (KDD) Cup in 2009. He serves on the editorial board of IEEE Transactions on Pattern Analysis and Machine Intelligence, and has been an area chair and senior program committee member for the International Conference on Learning Representations (ICLR) and Knowledge Discovery and Data Mining (KDD). He previously led a team of researchers in the Machine Learning group at IBM Research, NY. He has a Ph.D. in Computer Science from the University of Chicago and a B.Tech. in Engineering Physics from the Indian Institute of Technology (IIT) Bombay. His publications are available at http://vikas.sindhwani.org/.

Abstract: When the environment is highly dynamic and uncertain, autonomous robot navigation requires complex perceptual reasoning and an ability to continuously learn from experience. In this talk, I will present some vignettes of recent robotics research at Google: how self-flying delivery drones executing high-speed package pickup and delivery missions can learn to be safe, how quadrupeds can learn to walk from scratch, and how self-driving cars can learn to interact like humans.

Automotive Radar Detection and Imaging with MIMO Signal Processing

Kumar Vijay Mishra, U. S. National Academies Harry Diamond Distinguished Fellow at the United States Army Research Laboratory (ARL), Adelphi
 Kumar Vijay Mishra

Dr. Kumar Vijay Mishra obtained his Ph.D. in electrical engineering and M.S. in mathematics from The University of Iowa in 2015, and his M.S. in electrical engineering from Colorado State University in 2012, while working on NASA's Global Precipitation Mission Ground Validation (GPM-GV) weather radars. He received his B.Tech. summa cum laude (Gold Medal, Honors) in electronics and communication engineering from the National Institute of Technology, Hamirpur (NITH), India in 2003. He is currently a U.S. National Academies Harry Diamond Distinguished Fellow at the United States Army Research Laboratory (ARL), Adelphi; a Technical Adviser to the Singapore-based automotive radar start-up Hertzwell; and an honorary Research Fellow at SnT - Interdisciplinary Centre for Security, Reliability and Trust, University of Luxembourg. Previously, he held research appointments at the Electronics and Radar Development Establishment (LRDE), Defence Research and Development Organisation (DRDO), Bengaluru; IIHR - Hydroscience & Engineering, Iowa City, IA; Mitsubishi Electric Research Labs, Cambridge, MA; Qualcomm, San Jose; and the Technion - Israel Institute of Technology. He is the recipient of the American Geophysical Union Editors' Citation for Excellence (2019), the Royal Meteorological Society Quarterly Journal Editor's Prize (2017), the Viterbi Postdoctoral Fellowship (2015, 2016), the Lady Davis Postdoctoral Fellowship (2017), the DRDO LRDE Scientist of the Year Award (2006), the NITH Director's Gold Medal (2003), and the NITH Best Student Award (2003). He has received Best Paper Awards at IEEE MLSP 2019 and the IEEE ACES Symposium 2019. His research interests include radar systems, signal processing, remote sensing, and electromagnetics.

Abstract: Novel multiple-input multiple-output (MIMO) signal processing techniques are increasingly used in automotive radar applications. These systems operate at millimeter-wave (mm-Wave) frequencies, which provide transmission bandwidths several gigahertz wide for high range resolution. At mm-Wave, large MIMO antenna arrays are employed to compensate for severe path and propagation losses. The MIMO array configuration is also useful for achieving high angular resolution with a limited number of antenna elements. However, a large array not only leads to complexity in signal processing but also presents challenges in spectrum usage and in concurrent operation with spectrally coexisting radars and communications systems. In this talk, we address these specific challenges and recent developments in joint MIMO-radar-MIMO-communications spectrum sharing, displaced MIMO sensor imaging, interference mitigation, and space-time adaptive processing for transmit sequence design.

Copyright © SPCOM 2020