SPCOM 2020

Welcome to SPCOM 2020!

INSTITUTIONAL SPONSOR

IISc

TECHNICAL CO-SPONSORS

IEEE

DIAMOND SPONSORS

Qualcomm
Cisco

GOLD SPONSORS

Texas Instruments
Amazon

SILVER SPONSOR

Tejas Networks

AWARDS SPONSOR

Springer Nature

INDUSTRY TRACK

Organizers: Sri Garimella, Senior Manager, Machine Learning @ Amazon;
Lokesh Boregowda, Director, Samsung R&D Institute, Bangalore;
Jay Gubbi, Senior Scientist, TCS Research and Innovation, Bengaluru; and
Ganesan Thiagarajan, Director and CTO, MMRFIC Technology Pvt. Ltd.

Sri Garimella

Sri Garimella is a Senior Manager, Applied Science, heading the Amazon Alexa Machine Learning/Speech Recognition group in India. He obtained his PhD from the Department of Electrical and Computer Engineering, Center for Language and Speech Processing, at Johns Hopkins University, Baltimore, USA, in 2012, and his Master of Engineering in Signal Processing from the Indian Institute of Science, Bangalore, India, in 2006. Sri holds several US patents and has published in multiple international conferences and journals. He also served as an area chair for the Interspeech 2018 conference.

Lokesh Boregowda

Lokesh Boregowda, Director, Vision Research Group, Samsung R&D Institute India, Bangalore, received the B.E. and M.E. degrees in Electronics and Communication from Bangalore University in 1991 and 1994, respectively. He then pursued doctoral research in computer vision and image processing at the Indian Institute of Science, Bangalore, obtaining the Ph.D. degree in 2001. Currently leading the design and development of AR vision solutions for Samsung flagship and mid-range smartphones, he has been instrumental in pioneering advanced research in computer vision solutions over the past 7 years. He started the Vision Research team in 2014 and built a strong team that has delivered innovative research-to-market solutions, including Virtual Shot, Panorama Shot, Motion Panorama, AR Measure and AR Doodle, all on Samsung mobile devices. He also initiated long-term cutting-edge research, with many patented ideas, in stereo processing, depth intelligence, 3D reconstruction of objects and human avatars, volumetric analysis and representation, and gestures for human-robot interaction.

Before Samsung, following his research at IISc, he worked at Honeywell Technology Solutions Laboratories (HTSL) from 2002 to 2013, where he was one of the founding members of the Computer Vision and Signal Analysis Group in the erstwhile Research & Technology group of HTSL, Bangalore. He was actively involved in driving innovation initiatives at various forums in Honeywell and, a prolific innovator himself, was recognized as a Honeywell Fellow in 2008, one of the youngest to be so honored. He has been an active researcher, winning numerous innovation awards over the years, and brings around 7 years of teaching experience at the graduate and undergraduate levels. His areas of interest include computer vision, pattern recognition, signal analysis and processing, texture analysis, robotic vision, video-based surveillance, industrial imaging, haptics, assistive technologies, human factors, gestures and affective computing. He is currently serving as a member of the EXECOM of the IEEE Signal Processing Society Chapter, Bangalore, India; sits on the advisory boards, academic councils and research review committees of various colleges and universities across India; and has served as a reviewer for ICIP, ICASSP and SPIE for the past 5+ years.

Jay Gubbi

Jayavardhana Gubbi received the Bachelor of Engineering degree from Bangalore University, India, and the Ph.D. degree from the University of Melbourne, Australia, in 2007. He was a Research Assistant at the Indian Institute of Science (IISc), Bengaluru, for three years, and subsequently a Postdoctoral Research Fellow, an Australian Postdoctoral Fellow - Industry (APDI) and a Senior Research Fellow at the University of Melbourne. He has coauthored more than 85 peer-reviewed papers and patented 15 ideas, mainly in computer-vision-related technologies. He is currently a Senior Scientist at TCS Research and Innovation, Bengaluru, where he leads the Machine Vision group. In this role, he performs the dual role of a researcher building new computer vision capabilities and a solution lead responsible for technology transfer to multiple verticals.

Ganesan Thiagarajan

Ganesan Thiagarajan received his Master's and Ph.D. degrees in Electrical Communication Engineering from the Indian Institute of Science (IISc), Bangalore, in 1994 and 2014, respectively. He has 25+ years of experience in the field of signal processing and communication system design. His research interests are in coding theory, MIMO signal processing and their applications in communication systems. He is currently the CTO of MMRFIC Technology Pvt. Ltd., Bangalore. He was elected an IEEE Senior Member in 2013 and is the chair-elect of the IEEE ComSoc Bangalore Chapter for the year 2020.

SPEAKER | TITLE OF THE TALK
Sarma Gunturi | Wideband RF Sampling Transceivers for High Performance 5G Wireless Base Stations
Raghavan Subramanian | Video Analytics: Technology, Applications and Deployment Challenges
Harish Arsikere | Amazon Alexa: Challenges in Speech Recognition and Potential Solutions
A Anil Kumar | Sub-Nyquist Sensing Architectures for White-Space Detection
Rajeev Jain | 5G+AI: The Ingredients Fueling Tomorrow’s Technology Innovations
Krishna A. G. | From Human Drivers to Autonomous Vehicles -- Making Roads Safer
Umakishore G. S. V. | 5G RAN Virtualization

Wideband RF Sampling Transceivers for High Performance 5G Wireless Base Stations

Sarma Gunturi, Principal Systems Engineer, Texas Instruments India, Bangalore
Sarma Gunturi

Sarma Gunturi is a Principal Systems Engineer and Senior Member of Technical Staff with the Analog Signal Chain department at Texas Instruments, Bangalore. He received his B.Tech. degree in Electrical Engineering from IIT Bombay in 2000 and his M.S. in Electrical Engineering from the University of California, Los Angeles in 2002. He joined Texas Instruments (TI) in 2002, where his work has been in the field of Signal Chain Architectures, Signal Processing Algorithms and Communication Systems for 4G/5G transceivers, WLAN and Heart Rate Monitoring.

Sarma has developed architectures and signal processing algorithms for various highly integrated, high performance and low power consumption system-on-chip solutions for mobile WLAN and wireless base stations. He holds over 15 patents with the US Patent Office and is the recipient of the INAE Young Engineer Award in 2014.

Abstract: 5G communication technology offers high data rates by increasing spectral efficiency and using wider bandwidths and multiple frequency bands. 5G base stations employ beamforming and massive MIMO techniques that require wideband integrated transceiver system-on-chip (SoC) solutions with multiple transmit, receive and feedback channels, which must simultaneously achieve high performance, low power consumption and small form factor. The architecture chosen for the integrated transceiver has a direct impact on various system aspects such as bandwidth support, the signal processing algorithms needed to calibrate RF/analog impairments, and power consumption. Zero-IF and RF sampling architectures are the possible choices for high performance integrated transceivers. A Zero-IF architecture is typically better suited to narrow-bandwidth, single-band systems; however, Zero-IF transceivers suffer from the RF/analog impairments of IQ mismatch and LO leakage, which require digital signal processing algorithms for compensation. In this talk, we discuss the system-level challenges associated with the signal processing algorithms for IQ mismatch (IQMM) and LO leakage compensation in Zero-IF transceivers to meet the stringent performance requirements of 5G base stations. We then present the RF sampling architecture, in which the RF signal is directly sampled with a high-speed, multi-GSPS ADC or DAC. The RF sampling architecture avoids the challenges associated with IQ mismatch and LO leakage compensation, enables high-performance, wide-bandwidth transceivers with multi-band transmission/reception capability, and is well suited to meet the system demands in the different 5G frequency ranges of FR1 (<7 GHz) and FR2 (>24 GHz).
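
A minimal numerical sketch of the IQ-mismatch impairment discussed above (an illustrative aside, not material from the talk): it applies the standard gain/phase imbalance model y = alpha*x + beta*conj(x) to a complex baseband tone and measures the resulting image level, the quantity that digital IQMM compensation must drive down. The mismatch values and tone frequency are assumed for illustration.

    import numpy as np

    fs = 100e6            # sample rate in Hz (illustrative)
    f_tone = 10e6         # desired baseband tone in Hz
    n = np.arange(4096)
    x = np.exp(2j * np.pi * f_tone / fs * n)   # ideal complex baseband tone

    # Assumed gain/phase imbalance between the I and Q paths
    g = 1.02                   # 2% gain mismatch
    phi = np.deg2rad(2.0)      # 2-degree phase mismatch

    # Standard IQ-imbalance model: y = alpha*x + beta*conj(x);
    # the conj term creates an image tone at -f_tone
    alpha = (1 + g * np.exp(1j * phi)) / 2
    beta = (1 - g * np.exp(1j * phi)) / 2
    y = alpha * x + beta * np.conj(x)

    # Windowed spectrum; compare the desired tone against its image
    spec = np.fft.fftshift(np.fft.fft(y * np.hanning(len(y))))
    freqs = np.fft.fftshift(np.fft.fftfreq(len(y), d=1 / fs))
    mag_db = 20 * np.log10(np.abs(spec) / np.abs(spec).max() + 1e-12)
    sig_bin = np.argmin(np.abs(freqs - f_tone))
    img_bin = np.argmin(np.abs(freqs + f_tone))
    print(f"image rejection ratio ~ {mag_db[sig_bin] - mag_db[img_bin]:.1f} dB")

For this assumed 2% gain and 2-degree phase mismatch the image rejection is only about 34 dB, which illustrates why the digital compensation algorithms discussed in the talk are essential for base-station-grade performance.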

Video Analytics: Technology, Applications and Deployment Challenges

Raghavan Subramanian, Chief Technology Officer, Allgovision Technologies Private Limited, Bangalore
Raghavan Subramanian

Raghavan works at Allgovision Technologies Private Limited, where he has been serving as the Chief Technology Officer for the last 4 years. Before that, he co-founded two startup companies in medical diagnostics in Bangalore. He has worked at Motorola Corporate Research Labs, Schaumburg, USA, and at Motorola Bangalore, doing research in multimedia processing, and has several patents and publications to his name. Raghavan has a B.Tech in Electronics and Communication Engineering from IIT Madras, Chennai, and a Master's in the same field from IISc, Bangalore.

Abstract: With the advent of AI, computer vision has become an effective approach for object detection, recognition and classification, and its accuracy sometimes exceeds human capability. This has led to a range of applications that effectively solve real-life problems, and to large-scale deployment of cameras by governments and enterprises to improve governance, security and productivity. This talk highlights the on-the-ground challenges that must be overcome to translate the technology into an effective solution. Computer vision technology and its use cases in video analytics are discussed first, with a focus on security and governance. The rest of the talk is aimed at the following goals: (1) to gain a deeper understanding of video analytics and its applications; (2) to learn about the tools of the trade and the development life-cycle of a feature; (3) to understand challenges in the real world, where even the best (theoretical) models can fail; (4) to appreciate techniques such as fine-tuning, custom training, data generation and other algorithms for overcoming those challenges; (5) to understand how a variety of architectures, from the edge device to the cloud, help provide the optimum solution for varied situations; (6) to learn how to minimise the carbon footprint, and about trends in improving the power efficiency of AI technology via advances in science, hardware and software; and finally (7) to learn about recent trends in video analytics.

Amazon Alexa: Challenges in Speech Recognition and Potential Solutions

Harish Arsikere, Alexa Speech Recognition, Amazon, Bangalore, India
Harish Arsikere

Harish Arsikere is a Senior Speech Scientist at Amazon and a member of the Alexa Speech Recognition team in Bangalore. His research interests span several areas of speech technology, including acoustic modeling and adaptation for ASR, multilingual modeling, prosody and human-computer interaction, and end-to-end/all-neural systems. Before joining Amazon, Harish spent two years with Xerox Research Centre India, where he contributed to their speech recognition and analytics platform. He holds a PhD degree in Electrical Engineering from the University of California, Los Angeles (UCLA) and a master's degree in Electrical Engineering from the Indian Institute of Technology, Kanpur. Harish has published actively in flagship speech conferences such as Interspeech and ICASSP, and in reputed journals such as JASA, Speech Communication and IEEE Signal Processing Letters.

Abstract: Automatic speech recognition (ASR) is the front end of Alexa's AI pipeline. Therefore, to ensure that a large number of end users have smooth and effective interactions with their Alexa-enabled devices, it is important to achieve high ASR accuracy in a variety of acoustic conditions and for a variety of use cases. Building such high-quality, state-of-the-art, consumer-grade ASR systems comes with several technological challenges and poses interesting research problems. Most of these stem from customer-facing issues or the constraints imposed by product design, and this talk covers some of the key challenges we have been trying to address since Alexa's launch in 2014: building ASR systems for new languages using small amounts of human-transcribed data, making models robust to noise, accents and speaker variability, continuously improving ASR accuracy without proportionally increasing human transcription, and so on. Solutions to some of these problems are also discussed, based on recent advancements from Alexa's speech recognition group, e.g., multilingual modeling to boost the performance of new or under-resourced languages, using wakeword segments to improve robustness to background speech, and semi-supervised learning to reduce human-transcription requirements. The talk aspires to offer a glimpse into the real-world, product-focused challenges that one may encounter in building industry-scale ASR systems.
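
As a toy illustration of the semi-supervised idea mentioned above (a generic sketch of confidence-filtered pseudo-labeling, not Amazon's actual pipeline), the snippet below has a stand-in "teacher" model transcribe untranscribed audio and keeps only high-confidence hypotheses as training targets. The teacher_decode stub, file names and the 0.9 threshold are all assumed placeholders.

    import numpy as np

    rng = np.random.default_rng(1)

    def teacher_decode(audio_path):
        """Stand-in for a seed ASR model: returns (hypothesis, confidence).
        A real system would derive confidence from decoder posteriors or
        a dedicated confidence model."""
        conf = float(rng.uniform(0.3, 1.0))      # dummy utterance confidence
        return f"hypothesis for {audio_path}", conf

    unlabeled = [f"utt_{i:04d}.wav" for i in range(1000)]  # untranscribed audio
    CONF_THRESHOLD = 0.9    # keep only hypotheses the teacher is sure about

    pseudo_labeled = []
    for utt in unlabeled:
        hyp, conf = teacher_decode(utt)
        if conf >= CONF_THRESHOLD:
            pseudo_labeled.append((utt, hyp))    # hypothesis used as transcript

    print(f"kept {len(pseudo_labeled)}/{len(unlabeled)} utterances for training")
    # The kept (audio, pseudo-transcript) pairs would be mixed with the
    # human-transcribed set to retrain the acoustic model, reducing the
    # amount of new human transcription needed.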

Sub-Nyquist Sensing Architectures for White-Space Detection

A Anil Kumar, Senior Scientist, TCS R&I, Tata Consultancy Services.
A Anil Kumar

A. Anil Kumar received the BE degree in Electronics and Communications Engineering from Bangalore University, India, in 2002 and the Ph.D. degree in Electrical and Electronic Engineering from Nanyang Technological University, Singapore, in 2011. Presently, he is a Senior Scientist with TCS Research & Innovation, Bangalore, India. Prior to TCS, he was with Accord Software and Systems Pvt. Ltd., Bangalore, India, during 2002-2005, Panasonic Singapore Laboratories Pte. Ltd., Singapore, during 2010-2011, and Temasek Laboratories@Nanyang Technological University, Singapore, during 2011-2015.

His research interests lie in the broad areas of signal processing, communications and machine learning.

Abstract: With the allocation of available spectrum to newer communication technologies, RF interference in the radar band has increased. As a result, spectrum-sharing radar is gaining a lot of popularity these days. While several ideas have been proposed for spectrum-sharing radar, one based on using white spaces is attracting particular attention. The key block in such a system is wideband continuous spectrum sensing, which detects the white spaces in both the frequency and angular spectrum domains. Owing to the bottleneck of high-rate sampling and processing, it is preferable to sense at sub-Nyquist sampling rates. The need to detect white spaces arises not only in the above-mentioned spectrum-sharing cognitive radar but also in other application areas such as IEEE 802.22 WRAN, which uses white spaces in the TV spectrum for communication.

In this talk, we shall briefly introduce the contemporary architectures for identifying these white spaces and outline our recent contributions in this area. While many of the prior architectures impose stringent assumptions on the range, sensor array geometry, etc., which make them difficult to deploy in practice, we outline our approaches for greatly relaxing many of these constraints and arriving at schemes more suitable for practical deployment, coupled with good performance metrics. Finally, an overview of the latest developments in adopting these architectures for non-contiguous sensing, highlighting the intelligence brought to the receiver through reconfigurability, is also touched upon.
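
To make the notion of a white space concrete, here is a minimal Python sketch (illustrative only; it operates on full-rate samples, whereas the architectures in this talk avoid exactly that by sensing at sub-Nyquist rates). It estimates per-band energy from an averaged periodogram and flags bands below a threshold as vacant; the synthetic signal, band count and threshold factor are all assumed.

    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, seg, n_bands = 16384, 1024, 16

    # Synthetic wideband signal: noise floor plus two occupied bands
    t = np.arange(n_samples)
    x = rng.normal(scale=0.1, size=n_samples)
    for fc in (0.11, 0.34):                 # occupied centers, cycles/sample
        x += np.cos(2 * np.pi * fc * t + rng.uniform(0, 2 * np.pi))

    # Averaged windowed periodogram over [0, 0.5) cycles/sample
    psd = np.zeros(seg // 2)
    for i in range(0, n_samples - seg + 1, seg):
        w = x[i:i + seg] * np.hanning(seg)
        psd += np.abs(np.fft.rfft(w)[:seg // 2]) ** 2

    # Split the spectrum into equal-width bands and threshold their energy;
    # the median is robust to the (few) occupied bands
    band_energy = psd.reshape(n_bands, -1).mean(axis=1)
    threshold = 4.0 * np.median(band_energy)
    vacant = np.where(band_energy < threshold)[0]
    print("white-space (vacant) band indices:", vacant)

With the assumed tones at normalized frequencies 0.11 and 0.34, bands 3 and 10 are flagged as occupied and the remaining bands are reported as white spaces.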

5G+AI: The Ingredients Fueling Tomorrow’s Technology Innovations

Rajeev Jain, Professor Emeritus - UCLA, Senior Director (Technology) - Qualcomm
Rajeev Jain

Rajeev Jain received his B.Tech from IIT Delhi in 1978 and his Ph.D. from K.U. Leuven in 1985. He was a postdoc at UC Berkeley and subsequently a Professor at UCLA. He is a co-recipient of the 1992 Best Paper Award from the IEEE Signal Processing Society. He took early retirement from UCLA and has been with Qualcomm since 2011, where he initially led research in sensor machine learning, resulting in a US patent on a context-aware system with multiple power-consumption modes. Currently he is leading research in AI for chip design. Rajeev is also a Fellow of the IEEE.

Abstract: Microprocessors, signal processing and wireless communications have revolutionized our world over the past 40 years, transforming the way we work, play and interact. These technologies have created the internet and smartphones, which give us remote access to information and people that has proven vital to survival in current times. While signal processing and communications can bring information and data to us, we still have to process that information and act on it. This is where AI is bringing another revolution: by teaching machines to draw inferences from the data and even take actions on our behalf, it frees us up for more creative tasks. For AI to bring its benefits to all spheres of our lives, we need a communication network that can efficiently move data from the source to the AI processor, whether in the cloud or in our handheld device, and move inferences and actions from the AI processor back to the source. This means communicating with the lowest possible energy while meeting latency, throughput and capacity requirements. This is what 5G provides.

On the other hand, 5G devices can also benefit from AI-based optimizations that are more powerful than classical heuristic algorithms. An example of how AI can advance the device experience is better beam management, improving robustness or throughput with high power efficiency.

This talk will be an overview of opportunities that AI and 5G are creating, especially in the mobile wireless world.

From Human Drivers to Autonomous Vehicles -- Making Roads Safer

Krishna A. G., Co-founder and CEO, LightMetrics Technologies Pvt. Ltd.
Krishna A. G.

Krishna co-founded LightMetrics with 5 colleagues from Nokia Research in 2015. At LightMetrics, he is responsible for business development and actively contributes to the technology stack. Prior to LightMetrics, he worked on camera technology, computational imaging, computer vision and machine learning at companies such as Nokia, Insilica and Emuzed. He has 18 granted patents and is an inventor on over 45 patent applications. He is an alumnus of IISc, having obtained his MSc (Engineering) from the ECE department, and earlier obtained his undergraduate degree from University Visvesvaraya College of Engineering in Bangalore.

LightMetrics' real-time video analytics, using AI on the edge, makes driving safer and helps fleets understand driver behaviour better, making them safer and more productive. The RideView platform powers the advanced video telematics offerings of some of the biggest telematics service providers in North America. Its highly optimized, resource-efficient perception stack helps make various levels of autonomy more accessible. With data from extensive commercial vehicle deployments in North America, commercial auto insurers can explore reducing losses, understanding risk better and managing claims faster with less fraud. For more information, please visit https://www.lightmetrics.co

Abstract: Approximately 1.35M people die as a result of road accidents each year, and yet more than half of the people who drive think they are better than 90% of other drivers! The gap between how we think we drive and how we actually drive comes from a lack of self-awareness and of data.

Autonomous vehicles are the invention everyone has been waiting for - and for some time now. While we do not yet see autonomous vehicles zipping around as promised, their technologies are already keeping us safe today.

In this talk, we will explore how technology can make driving safer here and now. We will discuss how perception technology from AVs can be used today to reduce accidents by assisting human drivers, rather than waiting for fully autonomous vehicles to make roads safer.

There are two aspects to keeping a driver safe: on the road while driving (real-time alerts and technologies like Auto Emergency Braking), and changing habits through coaching and increased self-awareness when not driving. We will discuss the key technologies needed to enable this and, going forward, to make autonomous driving mainstream.

Staying in the present, we will look at case studies of how certain technologies built for autonomous vehicles are already making roads safer for all of us, even before conditional or full autonomy becomes mainstream.

5G RAN Virtualization

Umakishore G. S. V., SRIB (Samsung R&D Institute India-Bangalore)
Umakishore G. S. V.

Umakishore is working as Director and head of the Advanced Modem Group at Samsung R&D Institute India-Bangalore. He has more than 20 years of experience, largely in the wireless communications domain, developing 2G, 3G, 4G and 5G solutions. Presently his group is focusing on 5G NR development for both virtualized and non-virtualized systems. He holds more than 30 patents in the wireless communications area and is a Samsung-certified SW architect. He has received more than 20 awards at SRIB and from Samsung Electronics HQ. He received his M.E. degree in Signal Processing from the Indian Institute of Science in 2001 and is currently pursuing a PhD at the Indian Institute of Technology, Madras. His main research interests are in signal processing and communications, and modem design for next-generation wireless systems.

Abstract: 5G NR (New Radio) is a new radio access technology (RAT) developed by 3GPP for the 5G (fifth generation) mobile network.

Virtualization of the radio access network (RAN) is an emerging topic, and we see rapidly growing interest in 5G RAN virtualization. vRAN is an essential next step into the 5G future: 5G is evolving fast, and virtualization is key to its success. It is important for signal processing and communications engineers to note this transition towards a software-oriented approach, and to prepare for and facilitate it through their research. This talk aims to provide an overall view of 5G RAN virtualization and its various aspects to signal processing and communication engineers.

In this talk we briefly discuss 5G NR key usage scenarios, virtualization concepts, the 5G NR architecture, and the advantages and challenges of RAN virtualization, specifically as they relate to 5G.

Various challenges related to fronthaul capacity, signal processing in the transmit and receive paths, and the need for HW acceleration are discussed. We also discuss virtual machines and containers, and the similarities and differences between them.

Finally, we touch upon other challenging research topics, specifically AI in wireless, which are more applicable to virtualized, cloud-based systems.

Copyright © SPCOM 2020