Research Areas

Current Research

Mobile Edge Computing

Fig.1: Illustration of offloading tasks to MEC servers.

Mobile edge computing (MEC) is one of the emerging paradigms for next-generation communication systems. MEC provides a solution to meet the ever-increasing demands for low latency and high computation power from advanced applications such as augmented reality (AR), virtual reality (VR), and vehicle-to-cloud (V2C) networks. Each application has its own requirements. For example, AR and VR applications require a latency of less than 20 ms, and V2C-enabled networks demand latencies as low as 10 ms. MEC caters to such strict demands by bringing the server to the edge of the network, thereby decreasing network latency.

Handsets that cannot handle low-latency, computationally intensive applications due to their limited computation capabilities and energy constraints can now offload their tasks to MEC servers, as depicted in Fig. 1. Since handsets support multiple radio access technologies (RATs), such as wireless local area network (WLAN), 4th-generation long-term evolution (LTE), and 5th-generation new radio (5G-NR), they can access multiple MEC servers through these RATs. Each RAT has a different access technology and its own capabilities. For example, LTE and 5G-NR provide dedicated channel resources by reserving orthogonal subchannels for every user admitted into their networks. On the other hand, WLAN adopts a random-access method based on the IEEE 802.11 medium access control (MAC) protocol, in which users contend with each other to access the channel. While a cellular network can accommodate only up to a certain number of users, the WLAN places no such hard restriction. However, WLAN access delays increase as the number of contending users grows.

A common assumption in the MEC literature is to ignore the WLAN contention delays or to model the WLAN like a cellular network by limiting the number of users that can access it. We aim to address this research gap by accurately modelling the 802.11 MAC protocol. Thereafter, we focus on developing policies that determine when a user should offload its task and which RAT it should choose, so as to process as many tasks as possible with minimal latency.
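
As a toy illustration of the contention behaviour such a model must capture, the following Python sketch simulates a heavily simplified 802.11-style backoff (counters are redrawn each round rather than frozen as in the real DCF state machine, and all parameter values are assumptions) and shows how a tagged user's mean access delay grows with the number of contenders.

```python
import random

def access_delay(num_users, cw_min=16, cw_max=1024, trials=2000):
    """Average number of backoff rounds until the tagged user (index 0) wins."""
    total_rounds = 0
    for _ in range(trials):
        cw = [cw_min] * num_users            # per-user contention window
        rounds = 0
        while True:
            rounds += 1
            counters = [random.randrange(cw[i]) for i in range(num_users)]
            winner = min(counters)
            winners = counters.count(winner)
            if winners == 1 and counters.index(winner) == 0:
                break                        # tagged user transmitted alone
            if winners > 1:                  # collision: colliders double their CW
                for i in range(num_users):
                    if counters[i] == winner:
                        cw[i] = min(2 * cw[i], cw_max)
            # A unique win by another user simply costs the tagged user a round.
        total_rounds += rounds
    return total_rounds / trials

random.seed(0)
for n in (2, 5, 10, 20):
    print(f"{n:2d} contenders -> {access_delay(n):5.1f} backoff rounds on average")
```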

Members: Rushabha B.

Underlay Spectrum Sharing

Fig.1: Illustration of underlay spectrum sharing, in which a PU shares its spectrum with an SU.

Energy efficiency (EE), defined as the number of bits that can be communicated reliably per unit of energy consumed, is one of the key performance indicators in the design of next-generation wireless communication systems. For example, 5G systems are envisaged to achieve a hundred times higher EE than 4G systems. This is because these systems are required to support applications that demand high data rates, which in turn consume a lot of energy. Given the limited battery capacity of the devices, enabling them to communicate more bits per unit of energy consumed is the need of the hour.

Efficient utilization of the spectrum is as important as efficient utilization of energy when designing a wireless communication system, because one of the major challenges we face today is the scarcity of the available spectrum, especially in the sub-6 GHz band. This makes underlay spectrum sharing a promising technology for today's wireless communication systems. In it, a high-priority primary user (PU) shares its spectrum with one or more low-priority secondary users (SUs), as depicted in Fig. 1. For example, the citizens broadband radio service (CBRS) and 5G NR unlicensed utilize spectrum shared by military radar services and satellite services, respectively. However, this brings new design challenges, as the constraints imposed on the SU to protect the PU from interference degrade the SU's performance.

Transmit antenna selection (TAS) can be employed at the SU to mitigate the performance degradation caused by the interference constraints. It is a low-complexity, low-cost multiple-antenna technique that has been adopted in LTE and IEEE 802.11n. In it, one of the transmit antennas is selected for transmission based on the channel conditions, as shown in Fig. 2. This reduces the required number of radio frequency (RF) chains. An RF chain consists of components such as a filter, mixer, digital-to-analog converter, and power amplifier. Hence, using fewer RF chains reduces the hardware complexity and cost to a great extent.

Transmit Antenna Selection

Fig. 2. Illustration of TAS, in which the transmitter selects antenna s from Nt antennas.

Considering these factors, we focus on developing an optimal rule that selects the transmit antenna and its power at the SU such that the EE of the SU is maximized. The rule also ensures that the interference caused by the SU at the PU is within the tolerable limit. We also develop efficient algorithms to determine the parameters involved in the rule.
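
The sketch below conveys the flavour of such a rule under assumed channel gains and a discrete power grid: it exhaustively searches the antenna-power pairs for the one that maximizes EE while respecting the PU's interference limit. It illustrates the structure of the selection rule only; it is not our optimal rule or its accompanying algorithms, and every parameter value is an assumption.

```python
import math

def select_antenna_and_power(g_su, g_pu, powers, i_max,
                             bandwidth=1e6, noise=1e-13, p_circuit=0.1):
    """g_su[k]: SU link power gain of antenna k; g_pu[k]: gain towards the PU."""
    best = None
    for k in range(len(g_su)):
        for p in powers:
            if p * g_pu[k] > i_max:        # interference constraint at the PU
                continue
            rate = bandwidth * math.log2(1.0 + p * g_su[k] / noise)
            ee = rate / (p + p_circuit)    # bits per Joule, incl. circuit power
            if best is None or ee > best[0]:
                best = (ee, k, p)
    return best  # (energy efficiency, antenna index, transmit power)

# Illustrative channel gains for Nt = 3 antennas and a discrete power grid.
g_su = [2.1e-10, 8.7e-10, 4.3e-10]
g_pu = [1.5e-10, 9.2e-10, 2.0e-10]
powers = [0.01 * (i + 1) for i in range(100)]   # 10 mW to 1 W
ee, k, p = select_antenna_and_power(g_su, g_pu, powers, i_max=1e-11)
print(f"antenna {k}, power {p:.2f} W, EE = {ee / 1e6:.2f} Mbits/J")
```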

Members: Suji N.

Network Slice Management and Orchestration in 5G

5G has revolutionized traditional mobile communications through its innovative architectural changes, mainly the separation of control-plane and user-plane functions. It uses a service-oriented architecture (SOA) that enables the virtualization of the physical network into multiple logical networks, or network slices, in contrast to the traditional 'one-size-fits-all' architecture, by leveraging the benefits of software-defined networking (SDN) and network function virtualization (NFV). Thus, network slicing is conceptualized as the creation of multiple virtualized logical networks over a shared physical network. The logical networks spanning the 5G radio access network (RAN), edge, and core can be connected to form an end-to-end network slice for application-specific service provisioning. Figs. 1 and 2 show illustrative examples of network slicing, where multiple logical networks are created based on application-specific requirements.

Fig.1: Concept of network slicing
Fig.2: Example of end-to-end network slicing (source: Nokia)
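
The toy Python sketch below (purely illustrative; it mirrors neither the OSM information model nor the ETSI descriptors, and all class names and resource figures are assumptions) captures the slicing idea: logical slices, each a chain of VNFs spanning RAN, edge, and core, are admitted onto a shared physical substrate only if resources remain.

```python
from dataclasses import dataclass, field

@dataclass
class VNF:
    name: str
    segment: str      # "ran", "edge", or "core"
    cpu_cores: int

@dataclass
class Slice:
    name: str
    vnfs: list = field(default_factory=list)

class PhysicalNetwork:
    """Shared substrate; admits a slice only if enough CPU capacity remains."""
    def __init__(self, total_cores):
        self.free_cores = total_cores
        self.slices = []

    def instantiate(self, s: Slice) -> bool:
        need = sum(v.cpu_cores for v in s.vnfs)
        if need > self.free_cores:
            return False                  # admission control fails
        self.free_cores -= need
        self.slices.append(s)
        return True

net = PhysicalNetwork(total_cores=16)
embb = Slice("eMBB", [VNF("gNB-CU", "ran", 4), VNF("UPF", "core", 4)])
urllc = Slice("URLLC", [VNF("gNB-CU", "ran", 2), VNF("MEC-app", "edge", 2),
                        VNF("UPF", "core", 2)])
for s in (embb, urllc):
    print(s.name, "instantiated:", net.instantiate(s))
print("remaining cores:", net.free_cores)
```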

ETSI's NFV architecture and its management and orchestration (MANO) stack define the baseline functionalities and application programming interfaces (APIs) that can be used for network slicing. The MANO framework consists of three primary components: the virtualized infrastructure manager (VIM), the VNF manager (VNFM), and the NFV orchestrator (NFVO). The VIM and VNFM manage the virtualized resources and the individual virtual network functions (VNFs), respectively, whereas the NFVO manages network services associated with applications by interconnecting the VNFs. Open Source MANO (OSM) is an orchestration and management system that provides a production-quality MANO stack for NFV. Figs. 3 and 4 show an example of a next-generation 5G architecture for end-to-end network slicing using OSM and its interaction with VIMs and VNFs, respectively.

Fig.3: Example of next-generation 5G architecture (modified from OSM white paper)
Fig.4: OSM interaction with VIMs and VNFs (source: https://osm.etsi.org/docs/user-guide/01-quickstart.html)

Our ongoing development of an end-to-end network slicing platform for 5G at the lab uses OSM (https://osm.etsi.org/docs/user-guide/index.html) as the network orchestrator, with OpenStack as the VIM. To deploy the 5G core functionalities, free5Gc (https://www.free5gc.org/) is being used in our implementation. However, we plan to include other VIMs as well, such as vim-emulator and Amazon Web Services (AWS). The developed platform will be used for prototyping the lab's ongoing research efforts on end-to-end network slicing in 5G. Fig. 5 shows the GUI of OSM, through which network administrators can monitor network services in addition to using the standard CLI.

Fig.5: OSM graphical user interface

Members: Samaresh Bera

D2D Communication

Fig. 3: Illustration of a D2D-enabled cellular network with a BS, CUs, and D2D pairs.

Device-to-device (D2D) communication is a promising solution for next generation cellular systems, in which D2D users share the available spectrum with the cellular users under the control of the base station. The D2D users can bypass the base station and can communicate with each other directly. D2D communication is practically appealing because it reduces the traffic load on the base station, improves frequency reuse, increases cell throughput, and reduces latency.

Given a set of user equipments, some of which are D2D pairs while the rest are cellular users (CUs), several new challenges arise, such as mode selection, user scheduling, rate adaptation, subchannel allocation, and power control. Mode selection involves determining whether a subchannel should be allocated to only a D2D pair (dedicated mode), to only a cellular user (cellular mode), or to both together (underlay mode). A critical problem that arises due to the addition of D2D users in the cell is that, in the underlay mode, the additional co-channel interference due to simultaneous data transmissions from the cellular user and the D2D pairs reduces the cell throughput.
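
The sketch below illustrates this mode-selection logic for a single subchannel under assumed channel gains and transmit powers: it compares the Shannon-rate sum of the three modes and picks the best. It is a minimal illustration, not a full resource allocation scheme.

```python
import math

def rate(snr):
    return math.log2(1.0 + snr)  # spectral efficiency in bits/s/Hz

def select_mode(g_cu, g_d2d, g_cu_to_d2d, g_d2d_to_bs, p_cu, p_d2d, noise):
    dedicated = rate(p_d2d * g_d2d / noise)                 # D2D pair alone
    cellular = rate(p_cu * g_cu / noise)                    # CU alone
    underlay = (rate(p_cu * g_cu / (noise + p_d2d * g_d2d_to_bs)) +
                rate(p_d2d * g_d2d / (noise + p_cu * g_cu_to_d2d)))
    modes = {"dedicated": dedicated, "cellular": cellular, "underlay": underlay}
    return max(modes, key=modes.get), modes

# Illustrative gains (linear scale) and powers (W) for one subchannel.
mode, rates = select_mode(g_cu=1e-9, g_d2d=5e-9, g_cu_to_d2d=2e-10,
                          g_d2d_to_bs=1e-10, p_cu=0.2, p_d2d=0.1, noise=1e-10)
print("selected mode:", mode)
for m, r in rates.items():
    print(f"  {m:9s}: {r:.2f} bits/s/Hz")
```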

A critical and common assumption is that the complete instantaneous channel state information of all the links, including that of the interference link between the cellular user and the D2D receiver, is available at the base station. This assumption increases the feedback overhead in the system. Hence, we focus on methods that reduce the interference-link feedback overhead, on optimal resource allocation by the base station when it has only partial channel state information, and on evaluating the ability of these methods to improve the cell throughput.

Members: Sai Kiran B., Bala Venkata Ramulu G.

In-Band Full Duplex

In-band full duplex enhances the spectral efficiency and reduces the latency of a wireless network by enabling simultaneous transmission and reception at the wireless devices over the same channel. It is a promising technology for enhancing the performance of wireless base stations with small coverage areas. Therefore, it is one of the candidate technologies being considered for 5G cellular networks.

However, several new system design challenges arise in implementing full duplex in cellular wireless networks, as it introduces new interference scenarios. The simultaneous transmission and reception at a wireless device causes transmit power to leak into the receiver of the device, which is known as self-interference. The self-interference degrades the signal-to-interference-plus-noise ratio (SINR) of the received signal and can even drown out the received signal if not cancelled. It is mitigated by advanced circuit design techniques such as analog- and digital-domain signal cancellation and antenna isolation. Another new source of interference arises because the uplink and downlink users communicate simultaneously over the same channel in networks that employ full-duplex base stations. The simultaneous presence of the uplink with the downlink over the same channel causes cross-link interference to the downlink user. The severity of this interference depends on the scheduling and power control algorithms used at the base station. We are working on the design and performance analysis of cross-link interference-aware radio resource allocation algorithms for scheduling users and controlling their transmit powers and transmit rates.
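
The sketch below shows, with assumed link gains and an assumed total self-interference suppression figure, how residual self-interference enters the uplink SINR and how cross-link interference enters the downlink SINR in a full-duplex cell. All values are illustrative.

```python
import math

def db_to_lin(x_db):
    return 10 ** (x_db / 10.0)

noise = db_to_lin(-100)                     # noise power (mW, i.e., -100 dBm)
p_bs, p_ue = db_to_lin(30), db_to_lin(20)   # BS and uplink-UE transmit powers

g_ul = db_to_lin(-80)     # uplink user -> BS channel gain
g_dl = db_to_lin(-85)     # BS -> downlink user channel gain
g_cross = db_to_lin(-95)  # uplink user -> downlink user (cross-link) gain
si_cancellation_db = 110  # assumed total analog + digital SI suppression

residual_si = p_bs / db_to_lin(si_cancellation_db)
sinr_ul = p_ue * g_ul / (noise + residual_si)        # SI hits the BS receiver
sinr_dl = p_bs * g_dl / (noise + p_ue * g_cross)     # cross-link hits the UE

print(f"uplink SINR:   {10 * math.log10(sinr_ul):5.1f} dB")
print(f"downlink SINR: {10 * math.log10(sinr_dl):5.1f} dB")
```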

Members: Ramakiran

Members: Sayantan Adhikary and Nomaan A. Kherani

Previous Research

Base Station-Side Estimation

Fig.1: Illustration of Base station-side estimation

Current and next-generation orthogonal frequency division multiplexing-based wireless cellular systems strive to maximize spectral efficiency and meet the increasing demand for higher data rates despite being severely constrained by the limited spectrum availability. Rate adaptation and scheduling are two key enabling techniques employed to achieve these challenging goals. In them, the channel conditions drive the choice of the user the base station (BS) transmits to, and of its modulation and coding scheme (MCS), for downlink transmission over a group of subcarriers called a subchannel (SC). To do so, the BS requires feedback of channel state information (CSI) from the users in the uplink.

In order to ensure that the feedback overhead does not overwhelm the uplink, several reduced feedback schemes have been proposed in the literature. However, reduced feedback leads to a loss in throughput, since the CSI available at the BS is not sufficient to effectively carry out rate adaptation and scheduling. We propose novel BS-side estimation techniques to mitigate this loss in throughput. These techniques incorporate statistical information, such as the mean signal-to-noise ratio and the correlation across SCs, along with the fed-back CSI to systematically determine the MCS and the user for transmission.

We consider the practically relevant best-M and threshold-based feedback schemes. In the former, each user feeds back CSI only for its M strongest SCs, while in the latter, each user feeds back, for each SC, the index of the highest-rate MCS at which it can reliably receive. We develop two BS-side estimation techniques, namely, a minimum mean square error estimator-based approach and a throughput-optimal rate adaptation and scheduling policy. The proposed techniques significantly improve the cell throughput without requiring any additional feedback. The throughput-optimal policy also leads to a new benchmark for the achievable cell throughput for the respective feedback schemes. Further, for the threshold-based feedback scheme, it decouples the resolution of feedback from the number of MCSs available at the BS. This enables a system designer to reduce the feedback resolution without lowering the cell throughput by provisioning for more MCSs at the BS.
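
A minimal sketch of the best-M scheme with greedy per-SC scheduling is given below (assumed Rayleigh fading and parameter values; our estimator-based enhancements are not shown). Subchannels for which no user fed back CSI remain unused, which is precisely the throughput loss that BS-side estimation targets.

```python
import math, random

def best_m_schedule(num_users=8, num_subchannels=16, m=4, mean_snr=10.0):
    # Rayleigh fading: exponentially distributed SNR per user and subchannel.
    snr = [[random.expovariate(1.0 / mean_snr) for _ in range(num_subchannels)]
           for _ in range(num_users)]
    reports = {}                             # subchannel -> list of (snr, user)
    for u in range(num_users):
        top_m = sorted(range(num_subchannels), key=lambda s: snr[u][s])[-m:]
        for s in top_m:
            reports.setdefault(s, []).append((snr[u][s], u))
    throughput = 0.0
    for s in range(num_subchannels):
        if s in reports:                     # schedule the best reporting user
            best_snr, _user = max(reports[s])
            throughput += math.log2(1.0 + best_snr)
        # else: no CSI was fed back for this subchannel, so it goes unused;
        # this is exactly the loss that BS-side estimation aims to recover.
    return throughput

random.seed(1)
print(f"cell throughput: {best_m_schedule():.1f} bits/s/Hz over 16 subchannels")
```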

Members: Jobin Francis, Vineeth Kumar

Distributed Selection Algorithms

Fig. 2. The metrics of nodes 2 and 3 lie between the two thresholds in the first slot, resulting in a collision.

Opportunistic selection, which aims to select the best node from a set of available nodes in order to improve the overall system performance, has gained attention in many wireless systems, such as relay-aided cooperative communication systems, wireless sensor networks, and ad hoc networks. In it, each node possesses a real-valued metric based on which the best node is selected. However, a major challenge is that the nodes are geographically separated from each other, and a node cannot decide by itself whether it is the best node. Distributed selection algorithms, such as the timer-based back-off algorithm and splitting-based selection algorithms, have therefore been proposed for this purpose.

The splitting-based selection algorithm is a time-slotted algorithm in which each node locally decides to transmit in a slot if its metric lies between two thresholds. At the end of each slot, a coordinating node called the sink feeds back to all the nodes one of the following outcomes: idle (if no node transmitted), success (if exactly one node transmitted), or collision (if two or more nodes transmitted). Based on the feedback, the nodes update their thresholds and the algorithm continues. A success feedback implies that the best node has been selected; hence, the algorithm terminates.

The splitting-based selection algorithm has attracted a great deal of research interest for two reasons: (i) it is guaranteed to select the best node, and (ii) it is fast and scalable, as it provably selects the best node in 2.467 slots, on average, even as the number of nodes tends to infinity. However, several other problems related to the splitting algorithm remain open, and we focus on developing efficient mechanisms to solve these.
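
The sketch below simulates a simplified binary-splitting variant (not the exact threshold-update rule analysed in the literature, so its average slot count need not equal 2.467) to convey how the idle/success/collision feedback drives the threshold updates. Metrics are assumed to have been mapped through their CDF, making them uniform on (0, 1].

```python
import random

def splitting_select(metrics):
    """Return (index of selected node, slots used); metrics uniform on (0, 1]."""
    n = len(metrics)
    lo, hi = 1.0 - 1.0 / n, 1.0       # initially ~1 node expected in (lo, hi]
    known_lo = 0.0                    # best metric is known to exceed known_lo
    slots = 0
    while True:
        slots += 1
        tx = [i for i, u in enumerate(metrics) if lo < u <= hi]
        if len(tx) == 1:              # success: the lone transmitter is the best
            return tx[0], slots
        if len(tx) == 0:              # idle: the best lies at or below lo
            hi = lo
            lo = (known_lo + hi) / 2.0 if known_lo > 0.0 else max(0.0, hi - 1.0 / n)
        else:                         # collision: the best lies in (lo, hi]
            known_lo = lo
            lo = (lo + hi) / 2.0

random.seed(7)
runs, total = 10000, 0
for _ in range(runs):
    metrics = [random.random() for _ in range(50)]   # CDF-transformed metrics
    _, slots = splitting_select(metrics)
    total += slots
print(f"average slots to select the best of 50 nodes: {total / runs:.3f}")
```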

Members: Reneeta Sara Isaac

Energy Harvesting Wireless Sensor Networks

Fig. 6. A WSN with EH sensor nodes and a fusion node.
Ordered Transmissions Approach

Wireless sensor networks (WSNs) are finding increasing applications in environmental monitoring, military surveillance, industrial automation, infrastructure security, smart homes, and intelligent transportation. They consist of sensing nodes that probe the environment and report the sensed data to a fusion node. In practical deployments of WSNs, running cables to power the sensors is often cumbersome or impractical. Hence, the sensor nodes are equipped with an energy storage device such as a pre-charged battery.

In many applications, message transmission is the most significant source of energy consumption. As the sensor nodes transmit data to the fusion node, the energy stored in their batteries gets depleted and the sensor network eventually dies. Therefore, ensuring energy efficiency is crucial to increasing the lifetime of the network. Ordered transmissions is an approach to improve energy efficiency. In it, the more informative messages are transmitted first, followed by the less informative ones. The transmissions are halted upon accumulating sufficient evidence at the fusion node. This reduces the number of transmissions by a factor of at least two without degrading the system's detection performance.
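
The sketch below implements, for illustration, the conventional ordered-transmissions stopping rule under an assumed Gaussian log-likelihood ratio (LLR) model: since the magnitude of every unsent LLR is at most that of the last received one, the fusion node can stop as soon as the current sum can no longer change sign.

```python
import random

def ordered_fusion(llrs):
    order = sorted(llrs, key=abs, reverse=True)   # most informative first
    total, n = 0.0, len(order)
    for k, llr in enumerate(order):
        total += llr
        remaining = n - k - 1
        # Every unsent |LLR| is at most |llr|, so the final sum stays on the
        # same side of zero once |total| exceeds remaining * |llr|.
        if abs(total) > remaining * abs(llr):
            return total > 0, k + 1               # decision, transmissions used
    return total > 0, n

random.seed(3)
llrs = [random.gauss(1.0, 2.0) for _ in range(20)]   # 20 sensors, H1 true
decision, used = ordered_fusion(llrs)
print(f"decide H1: {decision}, using {used} of {len(llrs)} transmissions")
```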

Energy harvesting (EH) is a different approach to address the issue of limited network lifetimes. In WSNs with EH, the sensor nodes harvest energy from natural or man-made sources in the environment, such as solar, thermal, electromagnetic, or mechanical sources. The harvested energy is used to replenish the batteries of the sensor nodes. Therefore, an EH sensor node never permanently runs out of energy. However, it can occasionally run out of energy, in which case it fails to transmit its messages to the fusion node. These networks thus promise perpetual operation and have attracted considerable attention recently.

We focus on the ordered transmissions approach when some of the transmissions can be missing, which is motivated by the use of EH in WSNs. The absence of readings requires a new set of decision rules at the fusion node. We develop a new timer scheme to order the transmissions and use the extra information gained from the ordering to set up new decision rules. These new rules not only reduce the average number of transmissions but also give better detection performance compared to the conventional approach in which the transmissions are not ordered.

Members: Sai Kiran P.

BLE Protocol EH WSNs

Fig. 7: An example of an energy-harvesting wireless sensor network deployed inside an airplane environment.
Bluetooth Low Energy (BLE) Protocol EH WSNs

The aim of this project is to design and deploy an energy harvesting wireless sensor network for airplane environments. Specifically, we are interested in increasing the security features in an airplane by monitoring the status of the overhead storage using energy-harvesting sensor nodes. To save energy further, the sensor nodes use the Bluetooth Low Energy (BLE) protocol for communication. Given the lack of professional BLE simulators, our task is to develop a BLE protocol module in Network Simulator-3 (NS-3). Our next objective is to study the feasibility of such an energy-efficient network for a given scenario.

Members: Debanjan Sadhukhan, Chinmay Bhat

Antenna Selection

Fig.1: Illustration of training procedure for receive antenna selection

Antenna selection (AS) provides a low hardware complexity solution for exploiting the spatial diversity benefits of multiple antenna technology. In receive AS, the receiver does not receive and process signals from all its receive antennas. Instead, it dynamically selects a subset comprising antennas with the best instantaneous channel conditions to the transmitter, and only processes signals through them. This enables the receiver to employ fewer expensive radio frequency (RF) chains, which consist of components such as a low noise amplifier (LNA), down-converter, and analog-to-digital converter (ADC). Similarly, in transmit AS, the transmitter employs fewer transmit RF chains than the available number of antennas. In addition to its lower complexity, AS is attractive because it provably achieves full diversity order, i.e., it provides as much robustness against fading in wireless channels as a full complexity receiver.

Our research has focused on training for receive antenna selection. The training procedure is the crucial first step that enables AS in practice. It is through training that the receiver estimates the channel gains of the various antennas and determines which antenna subset to select. The inevitable presence of noise during the estimation process and the time-varying nature of the wireless channel can lead to inaccurate selection and incorrect data decoding, both of which increase the overall symbol error rate (SER) observed in the system.
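
The Monte Carlo sketch below (assumed Rayleigh fading and pilot SNRs) illustrates the effect of noisy training on selection: the receiver picks the antenna with the largest noisy channel estimate, which differs from the truly best antenna increasingly often as the pilot SNR drops.

```python
import random

def selection_error_rate(num_antennas=4, pilot_snr_db=5.0, trials=20000):
    sigma_n = (10 ** (-pilot_snr_db / 10.0)) ** 0.5   # estimation-noise std
    errors = 0
    for _ in range(trials):
        # Rayleigh fading: unit-power complex Gaussian antenna gains.
        h = [complex(random.gauss(0, 0.5 ** 0.5), random.gauss(0, 0.5 ** 0.5))
             for _ in range(num_antennas)]
        # Noisy training-based estimates of the same gains.
        h_est = [g + complex(random.gauss(0, sigma_n / 2 ** 0.5),
                             random.gauss(0, sigma_n / 2 ** 0.5)) for g in h]
        best_true = max(range(num_antennas), key=lambda i: abs(h[i]))
        best_est = max(range(num_antennas), key=lambda i: abs(h_est[i]))
        errors += best_true != best_est
    return errors / trials

random.seed(0)
for snr_db in (0, 5, 10, 15):
    rate = selection_error_rate(pilot_snr_db=snr_db)
    print(f"pilot SNR {snr_db:2d} dB -> antenna selection error rate {rate:.3f}")
```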

Cooperative Communications

Fig.2: A multi-relay cooperative communication system, consisting of a source, a destination, and four relays.

Cooperative communications is a promising technology in which different wireless nodes help each other at the physical and multiple access layers in forwarding messages to their intended recipients. Doing so enables the wireless networks to exploit their inherent geographically distributed nature and their large coverage.

In a relay-based multihop cooperative wireless network, such as the one shown in Fig. 2, one or more relays forward a message from the source to the destination. Ideally, having more relays forward a message helps improve performance. However, this is practically difficult, since tight symbol-level synchronization needs to be ensured across all the geographically separated relays. Selection solves this problem. In selection, depending on the current channel realizations, only the best relay, which is the one with the most favourable channel realization, is selected to forward the message. Selection circumvents the synchronization problem since only the selected node transmits. Yet, despite its simplicity, it achieves the highest diversity order, as was also the case with antenna selection.

Splitting-based selection: The splitting-based selection algorithm, which we analysed, is a time-slotted multiple access contention algorithm in which each node autonomously decides whether or not to transmit in a given time slot. This decision is based on the node's local channel gain knowledge, which is captured in the form of a real-valued metric. In each slot, only those nodes whose metrics lie between two thresholds transmit. At the end of every slot, the controller feeds back a three-state outcome indicating idle (when no node transmitted), success (when exactly one node transmitted and was decoded successfully), or collision (when two or more nodes transmitted and none could be decoded). The nodes update their thresholds in each slot based on this feedback. The algorithm guarantees that the first node to successfully transmit to the controller is the best node, i.e., the one with the highest metric among all the nodes.

The splitting-based selection algorithm is attractive because it is extremely fast. For example, it can select the best node in just 2.467 slots on average, even in the asymptotic regime in which the number of available nodes tends to infinity. It can be sped up even further by using power control.

Timer-based selection

Fig. 3: Structure of optimal timer mapping

Timer-based selection: The timer-based selection algorithm instead uses a back-off timer mechanism to select the best node. Every node sets its timer as a function of its metric and transmits a packet when its timer expires. The key idea is that the metric-to-timer mapping is a deterministic, monotone non-increasing function. This is enough to ensure that the first node that transmits is the best node. The timer-based selection algorithm is attractive because of its simplicity and its distributed nature. However, the packets of two nodes whose timers expire within a time Δ of each other collide and cannot be decoded. Therefore, the performance of the algorithm is fundamentally constrained by the vulnerability window Δ.

Unlike the splitting-based algorithm, it requires no feedback during the selection process. We derived the optimal metric-to-timer mapping, which maximizes the probability of successful selection given a pre-specified maximum selection duration Tmax. It was shown to be a practically feasible staircase mapping, which is depicted in Fig. 3. The optimal 'stair' lengths, as shown in the figure, were also derived.
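
The sketch below simulates timer-based selection with an illustrative uniform staircase mapping (not the optimal stair lengths we derived; all parameter values are assumptions): the selection succeeds only if the two earliest timers are separated by at least the vulnerability window Δ.

```python
import random

def timer_select(metrics, t_max=1.0, num_stairs=10, delta=0.02):
    # Monotone non-increasing staircase: a higher metric gives an earlier
    # expiry. Metrics are assumed to lie in [0, 1), e.g., after a CDF transform.
    timers = [(num_stairs - 1 - int(u * num_stairs)) * (t_max / num_stairs)
              for u in metrics]
    first, second = sorted(timers)[:2]
    # The first expiring node is the best one; it is decodable only if the
    # second-earliest timer expires at least delta later.
    return (second - first) >= delta

random.seed(11)
runs = 10000
successes = sum(timer_select([random.random() for _ in range(20)])
                for _ in range(runs))
print(f"selection success probability: {successes / runs:.3f}")
```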

Selection and Data Transmission Trade-off: Every cooperative system that uses selection invariably faces a fundamental trade-off related to the selection duration. The longer the selection duration, the more accurate the selection. However, this reduces the fraction of time available for data transmission using the selected node. On the other hand, shrinking the selection duration increases the odds that the selection algorithm fails to select the best node. As a result, the system loses out on the spatial diversity benefits of selection. We addressed this trade-off considering both non-adaptive and adaptive rate and power modes of transmission, and showed that, at the optimal system operating point, the selection process is often far from ideal and perfect. This is unlike what has been assumed in the literature.

Energy Harvesting Networks

Energy harvesting (EH) networks offer the possibility of maintenance-free and perpetual network operation. In these networks, the nodes harvest energy from the environment using solar, vibration, thermoelectric, and other physical phenomena. Unlike conventional battery-powered nodes that die once their pre-charged batteries drain out, an energy harvesting node can harvest energy and remain available essentially forever. This is possible provided the communication protocols can be engineered to ensure that no more energy is drained than has been harvested. The energy harvesting functionality motivates a significant redesign of the physical and multiple access communication protocols. This is because minimizing energy consumption ceases to be the dominant goal in the design of the protocols. Instead, the problem requires handling the randomness in the harvested energy and ensuring that the energy required for communication and sensing, if any, can be met. Further, the inclusion of even a few energy harvesting nodes promises significant improvements in the lifetime of conventional wireless networks.

Energy Harvesting Sensors

Fig. 4: Single EH node transmission and retransmission model. Each packet must be decoded correctly by the destination within a fixed duration Tm; otherwise, it is dropped.

We developed a simple theoretical model that captures the interactions between the important parameters that govern the communication link performance of a single EH node. In this work, each EH node is allowed a fixed number of retransmission attempts to send information to a destination, as shown in Fig. 4. Unlike for conventional nodes, a packet may fail to reach the destination (outage) not only because of fading and noise in the communication channel but also because a transmission did not occur due to insufficient energy stored in the battery. The outage probability analysis we developed brings out the critical importance of the energy profile and the energy storage capability on the EH link performance. The insights turn out to be different for slow and fast fading channels. The analysis showed that properly tuning the transmission parameters of the EH node and having even a small energy storage capability considerably improves the EH link performance.
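
The Monte Carlo sketch below (Bernoulli energy arrivals, a quantized battery, and all other parameters assumed) mimics this kind of model and reproduces the qualitative insight that even a small energy storage capability lowers the outage probability.

```python
import random

def outage_probability(num_packets=100000, battery_cap=5, e_tx=1,
                       p_harvest=0.4, p_success=0.6, max_attempts=3):
    """Fraction of packets not delivered within max_attempts attempts."""
    battery, outages = battery_cap, 0
    for _ in range(num_packets):
        delivered = False
        for _ in range(max_attempts):          # attempts within the deadline Tm
            if random.random() < p_harvest:    # one energy quantum may arrive
                battery = min(battery + 1, battery_cap)
            if battery >= e_tx:
                battery -= e_tx                # spend energy on one attempt
                if random.random() < p_success:
                    delivered = True           # decoded despite fading and noise
                    break
            # else: the battery is empty, so this attempt is skipped entirely
        outages += not delivered
    return outages / num_packets

random.seed(5)
for cap in (1, 2, 5, 10):
    p_out = outage_probability(battery_cap=cap)
    print(f"battery capacity {cap:2d} quanta -> outage probability {p_out:.3f}")
```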

Interference Modeling

Fig. 6: Interference in CDMA uplinks

Interference plays a crucial role in code division multiple access (CDMA) based cellular communication systems, which use pseudo-random spreading codes for transmitting data. Therefore, an accurate characterization of the statistics of interference is an essential first step in cellular system design and analysis. While downlink co-channel interference is well characterised, the same is not true for uplink interference from neighbouring cells. This is because the interfering signals from the neighbouring cells undergo shadowing and fading in their respective wireless channels. The number of interfering signals is itself random, since the number of users per cell that generate them is random. Power control and cell selection, which affect the interfering users' transmit powers, further exacerbate this randomness. We developed an alternate characterization of the uplink co-channel interference. It showed that the lognormal distribution characterizes the uplink interference statistics well, and is significantly better than the conventionally assumed Gaussian distribution.
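
The sketch below illustrates the moment-matching idea on a toy uplink model (a random number of interferers per snapshot, each with lognormal shadowing and Rayleigh fading; all values are assumptions, not our analysis): the lognormal parameters are fitted from the mean and variance of the log-interference.

```python
import math, random

def total_interference():
    num_interferers = random.randint(0, 12)   # random number of uplink users
    i_total = 0.0
    for _ in range(num_interferers):
        shadow_db = random.gauss(0.0, 8.0)    # 8 dB lognormal shadowing
        fading = random.expovariate(1.0)      # Rayleigh fading power
        i_total += (10 ** (shadow_db / 10.0)) * fading
    return i_total

random.seed(2)
samples = [total_interference() for _ in range(50000)]
positive = [s for s in samples if s > 0.0]    # discard empty-cell snapshots

# Moment-match a lognormal: mean and std of log(I) over the positive samples.
logs = [math.log(s) for s in positive]
mu = sum(logs) / len(logs)
sigma = (sum((x - mu) ** 2 for x in logs) / len(logs)) ** 0.5
print(f"fitted lognormal: mu = {mu:.2f}, sigma = {sigma:.2f} (natural log)")
```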