Leeds Beckett University - City Campus,
Woodhouse Lane,
LS1 3HE
Dr Nawar Jawad
Senior Lecturer
Dr Nawar Jawad is a Lecturer in Computer Science in the School of Built Environment and Engineering. He was awarded a BEng in Electrical and Electronic Engineering from the University of Technology, Baghdad, in 2005, followed by an MSc with Distinction in Wireless Communication Systems (2009) and a PhD (2021), both from Brunel University London.
About
Nawar has worked in both industry and academia. In industry, he was a core system engineer for a local mobile operator in Babylon, supporting the management of the system. In academia, he was Brunel's research team leader on the Internet of Radio Light (IoRL) project, a European-funded project for designing and deploying mobile gNBs with 5G access technologies.
He played a key role in deploying the Home IP Gateway (HIPGW) by exploiting an open-source Virtualised Infrastructure Manager.
He has published more than 20 papers in peer-reviewed journals and well-known conferences, along with technical papers. He also contributed to the 5G PPP architecture white paper on the latest RAN architecture of the fifth-generation (5G) mobile network.
He currently offers consultancy for the 6G-BRAINS EU project and is involved with several companies, universities, and operators, such as Deutsche Telekom AG (DTAG), Fraunhofer IIS, the University of Leicester, and many others.
His research interests are in IoT, networking, cloud computing and virtualized environments.
Degrees
PhD
Brunel University London, Uxbridge, United Kingdom | 01 January 2018 - 25 January 2021
MSc Wireless Communication Systems
Brunel University London, Uxbridge, United Kingdom | 01 September 2008 - 30 October 2009
BSc Electrical and Electronic Engineering
University of Technology Baghdad, Baghdad, Iraq | 15 September 2000 - 15 July 2005
Languages
Arabic
Can read, write, speak, understand and peer review
English
Can read, write, speak, understand and peer review
Research interests
Nawar is interested in edge network architectures for IoT systems. He is currently involved in developing a novel solution to support massive connections over a device-to-device (D2D)-assisted, highly dynamic cell-free network enabled by Sub-6 GHz/mmWave/THz/OWC, combined with high-resolution 3D Simultaneous Localization and Mapping (SLAM) of up to 1 mm accuracy. This solution is aimed at supporting smart factories. To enable the deployment of the proposed services, a virtualisation platform is required to host them in a dynamic, automated, and flexible environment; current research is focused on realising this platform within the proposed system architecture. The developed technologies will be widely applicable to various vertical sectors such as Industry 4.0, intelligent transportation, and eHealth.
Publications (40)
5G Internet of Radio Light Positioning System for Indoor Broadcasting Service
Many features of 5G are important to the broadcasting service, including diverse content services such as follow-me TV and video-on-demand, but also gaming, Virtual Reality (VR), Augmented Reality (AR) and many others. Meanwhile, those services depend more and more on the accuracy of the user's position, especially in indoor environments. With the increase of broadcasting data traffic indoors, obtaining a highly accurate position is becoming a challenge because of the impact of radio interference. In order to support a high-quality indoor broadcasting service, a high-accuracy positioning, radiation-free, and high-capacity communication system is urgently needed. In this paper, a 5G indoor positioning system is proposed for museums. It utilizes unlicensed visible light of the electromagnetic spectrum to provide museum visitors with high-accuracy positioning, multiple forms of interaction services, and high-resolution multimedia delivery on a mobile device. The geographic data and the location-related data integrated into the 5G New Radio (NR) waveform are detailed. A general-purpose system architecture is provided and some basic techniques to enhance system performance are also investigated. A preliminary demonstration was built in a laboratory environment; it supports a data rate of over 45.25 Mbps and a mean positioning error of 0.18 m.
Media Casting as a Service: Industries Convergence Opportunity and Caching Service for 5G Indoor gNB
Fifth-Generation (5G) mobile networks are expected to perform according to the stringent performance targets assigned by standardization committees. Therefore, significant changes are proposed to the network infrastructure to achieve the expected performance levels. Network Function Virtualization, cloud computing and Software Defined Networks are some of the main technologies being utilised to ensure network design with optimum performance and efficient resource utilization. The aforementioned technologies are shifting the network architecture into a service-based rather than a device-based architecture. In this regard, we introduce Media Casting as a Service (MCaaS) and the IoRL-Cache service (IoRL-C). The former is proposed as an integration solution between the broadcasting industry and Mobile Network Operators, while the latter is proposed as a solution for improving the caching efficiency of IoRL small cells. IoRL is an emerging 5G small cell for indoor environments, which utilises mmWave and Visible Light Communications (VLC) as access technologies, while exploiting Software Defined Networking (SDN) and Network Function Virtualisation (NFV) technologies to offer flexible and intelligent services to its clients. In this paper, we introduce the MCaaS and IoRL-C services and perform simulation work to test IoRL-C as a proof of concept for both services. The network was simulated using the OMNeT++ simulation tool, and we validated the services' efficiency by comparing their performance with traditional deployments. We also examined the cache service performance at different link lengths, and found that IoRL-C is able to support IoRL small cells separated by more than 50 km.
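The caching gains described above can be illustrated with a self-contained sketch. The snippet below is not the IoRL-C algorithm (which this abstract does not specify); it simulates a generic LRU edge cache serving requests drawn from a Zipf-like popularity distribution, a common first-order model for content popularity, to show how a small cache at the gNB can absorb a large share of requests. All parameter values are illustrative assumptions.

```python
import random
from collections import OrderedDict

def lru_hit_ratio(n_items=1000, cache_size=100, n_requests=20000,
                  zipf_s=0.8, seed=42):
    """Simulate an LRU content cache and return its hit ratio."""
    rng = random.Random(seed)
    # Zipf-like popularity: item i is requested with weight 1/(i+1)^s.
    weights = [1.0 / (i + 1) ** zipf_s for i in range(n_items)]
    requests = rng.choices(range(n_items), weights=weights, k=n_requests)
    cache = OrderedDict()  # cached content IDs, kept in LRU order
    hits = 0
    for item in requests:
        if item in cache:
            hits += 1
            cache.move_to_end(item)        # refresh recency on a hit
        else:
            cache[item] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return hits / n_requests

if __name__ == "__main__":
    # A cache holding 10% of the catalogue serves roughly half the
    # requests under this skewed popularity model.
    print(f"LRU hit ratio: {lru_hit_ratio():.2f}")
```

Even this toy model shows why edge caching pays off: with skewed popularity, a cache holding a small fraction of the catalogue intercepts a disproportionate share of traffic before it reaches the backhaul.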
Containerization software has become increasingly popular in recent decades as it provides lightweight operating-system-level virtualization for a large variety of purposes, including cloud and edge computing. Even though it provides increased security, internal networking, and application isolation, it also tends to generate additional workload and overhead in comparison with native application deployment. This issue is especially relevant for devices with limited computational power and storage, such as mobile and IoT devices. The aim of this paper is to analyse the overhead generated by deploying applications in containerization solutions and its effect on total execution time and energy consumption. To do so, two widely used container platforms, Docker and Podman, were used. To evaluate the performance of the low-level container runtime of a platform, two OCI (Open Container Initiative) compatible container runtimes, namely runC and crun, are compared. This results in a comparison of 5 configurations: Docker with runC, Docker with crun, Podman with runC, Podman with crun, and native deployment. A test workload application that simulates CPU load was deployed on a Raspberry Pi single-board computer for all configurations. The results show that in the containerization deployment models, container runtime selection seems to have a minor effect on overall execution time and energy consumption, while the container platform significantly affects both metrics. Among container platforms, Podman was found to be both faster and more energy-efficient than Docker. It has also been discovered that native application deployment might significantly decrease the energy consumption level at the expense of losing containerization benefits.
Smart Television Services Using NFV/SDN Network Management
Integrating joint network function virtualization (NFV) and software-defined networks (SDNs) with digital televisions (TVs) in home environments has the potential to provide smart TV services to users and improve their quality of experience (QoE). In this regard, this paper focuses on one of the next-generation services, the so-called Follow-Me Service (FMS). FMS is a service offered by a 5G gNB to user equipments (UEs) in indoor environments (e.g., the home). It enables clients to use their smartphones to select media content from content servers, cast it on the nearest TV set (e.g., in the living room), and continue watching on the next TV set (e.g., in the kitchen) while moving around the indoor coverage area. FMS can be provisioned by utilizing UE geolocation information and robust mechanisms for switching between multiple 5G radio access technologies (RATs), based on the intelligence of the SDN/NFV intelligent home IP gateway of the Internet of Radio Light (IoRL) project paradigm. Given that the actual IoRL system is at its early development stage, we step forward by using the Mininet platform to integrate SDN/NFV virtualization into a 5G multi-RAT scenario and provide performance monitoring with measurements for the identified service. Simulation results show the effectiveness of our proposal under various use case scenarios by means of minimizing the packet loss rate and improving the QoE of home users.
In China, the Great Firewall of China (GFW) blocks or restricts access to various online content and services. To access the Internet without restrictions, many Chinese netizens use circumvention tools that employ encryption and obfuscation techniques to evade detection and filtering of the GFW, which increase the energy consumption of the devices and servers involved. This paper aims to compare the energy efficiency of five common circumvention tools: WireGuard VPN, Shadowsocks, V2ray, Xray, and Trojan-Go. An energy audit is conducted for each tool by measuring the energy consumption on the client side when downloading a specific file from a target server. Based on the empirical results, WireGuard VPN was recommended as the most energy-efficient tool, while Shadowsocks and WireGuard VPN both ranked the highest in the comprehensive score considering both energy and time consumptions.
The technological advances, popularity and complexity of cloud computing systems make the need for cloud simulators evident. Cloud simulators are programs that simulate a real datacenter on a personal computer. Their main objective is to evaluate the infrastructure, try different configurations to find the best one, and provide more energy-efficient solutions. By doing all these assessments, these programs consume energy and produce emissions. This paper studies the energy consumption of three Java-based simulators, namely CloudSim, Cloud Analyst and CloudReports. All of these applications are extensions of CloudSim and are event oriented. To analyze and reach a conclusion, descriptive and inferential statistics are applied to the data gathered from Joulemeter measurements. According to the results of this experimental study, when deploying the same infrastructure, CloudReports is found to consume more energy and produce more carbon emissions compared to the other two programs. This is related to the complete graphical user interface it provides. In contrast, CloudSim consumes almost 9 times less energy and has a command line interface.
The exponential growth of real-time data-centric systems relies on the continuous processing of large data streams through stream processing frameworks. However, these frameworks are resource-intensive, require numerous dependencies and induce high computing loads, resulting in increased energy consumption and carbon emissions. The selection of an appropriate stream processing framework can significantly reduce energy consumption and carbon footprint within a company's IT infrastructure. While several studies have compared the performance of stream processing frameworks, none have specifically examined them from a sustainability standpoint. This report aims to bridge this gap by conducting a comparison of energy consumption and performance among three different frameworks: Kafka Streams, Apache Flink, and Spark Structured Streaming. To achieve this, we implemented a real-world use case in Java and conducted multiple experiments at varying streaming rates, while monitoring the systems using Prometheus and Grafana. Our findings indicate that, on average, Kafka Streams and Apache Flink exhibit lower power and energy consumption compared to the Spark Structured Streaming module. Among the two, Flink proves to be the most efficient in terms of power for medium- to high-throughput applications, while Kafka Streams is most suitable when tolerating lower throughput, as both frameworks demonstrate similar power consumption levels. Additionally, we analysed the CPU and RAM usage of each framework, revealing distinct patterns for each stream processing engine.
In today's era, with the growing threat of global warming, it is essential to incorporate sustainability into deep learning models, specifically when carrying out heavy tasks such as the classification of images. In general, image data is heavier than numerical or even text data. Thus, it is essential to investigate the energy consumption and carbon footprint of these image classification models. To achieve this objective, this research focuses on building an energy-efficient deep learning model used to classify Bengali Lexical Signs. This dataset comprises 10,000 locally collected images that are trained using Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) as the primary model and other CNN pre-trained models, namely DenseNet, MobileNet, VGG-16 and Inception V3. Before passing the images through the model, they are pre-processed to remove any noise, and data augmentation is applied to induce robustness. On the primary model, 4-fold cross-validation is also performed. The energy consumption data is collected during the training phase of each model using the Intel Power Gadget tool. From the energy consumption data, the carbon footprint is calculated using the carbon intensity value found in the UK for the prior thirty days. Further, a one-way ANOVA test, descriptive statistical studies, and the average power consumption of the processor are carried out using the energy consumption data to understand the behaviour of these models. According to the results, it can be deduced that the pre-trained models, namely Inception V3, VGG-16 and DenseNet, appear to have the lowest carbon footprint, while the primary model CNN-LSTM depicts a higher carbon footprint. Another interesting fact observed in this research is that the Inception V3 and DenseNet models exhibit the highest energy consumption compared to the other models.
Therefore, from the study, it can be concluded that high energy is required for deep learning models which are used to process images and for other computer vision applications.
The increased demand for and diversity of web applications have brought a significant increase in the energy consumption and carbon footprint of web servers. However, not many results have been published that study web server energy consumption and identify which server is the most environmentally friendly. As a result, this study aims to compare the energy consumption and carbon footprint of three popular web servers: Apache, Nginx and Lighttpd. To get insights into the energy consumption aspect, multiple experiments are conducted to see how much energy web servers consume under different workloads. The research results demonstrate that Nginx and Lighttpd are the web servers that consume the least energy compared to Apache. The reason for this result can be attributed to their asynchronous event-driven approach to handling a large number of simultaneous connections, which allows Nginx and Lighttpd to use fewer resources and handle more requests with less overhead compared to other web servers, resulting in lower energy consumption and carbon footprint.
This project aims to investigate the impact of Resource Allocation and Selection Policies on the power consumption of a Cloud Data Centre, compared with a Non-Power-Aware baseline. The study will use CloudSim, a well-known simulation tool for cloud computing, to build a virtual environment that simulates a Cloud Data Centre. The main objective of the project is to evaluate the effectiveness of these policies in reducing the energy consumption of the Data Centre while maintaining performance and availability levels. The simulation results will be analysed and compared with the Non-Power-Aware scenario to demonstrate the importance of the effective implementation of policies in reducing power consumption. The project aims to contribute to the ongoing research efforts towards achieving sustainable cloud computing by providing insights into the effectiveness of Power-Aware Policies in reducing the carbon footprint of Cloud Data Centres.
The use of the Internet has boomed in the last decade; nowadays almost everyone owns a smart device that is connected to the Internet. This increase in online network usage has led to a rise in energy consumption in the ICT sector. Both the end/edge devices and the backbone Internet infrastructure (switches, routers, servers, etc.) need to be constantly powered with a considerable amount of energy. However, the production of electrical energy is still a very harmful process for the environment: greenhouse gas emissions are discharged into the atmosphere at astonishingly high rates. A reduction in energy consumption is what we are striving to achieve at present. Therefore, it is essential to keep track of how much energy ICT applications and programmes use in order to inform future modelling and device research possibilities. Video streaming has come to represent a high percentage of Internet usage in recent years. This is due to the rise in popularity of consuming digital entertainment over traditional broadcasting. This research work tries to get some insights into the power consumption of different Internet browsers while videos are streamed at different speeds. For comparison, a typical user environment was developed in order to analyse the proposed scenarios.
Visible Light Positioning With Lens Compensation for Non-Lambertian Emission
With greater demands for cost-effective, reliable, and highly accurate positioning, indoor wireless localisation using Visible Light Positioning (VLP) is a promising solution for future networks. One can expect VLP solutions to appear in all environments, from homes to industry; however, the existing literature primarily considers Visible Light Communication (VLC) sources with purely Lambertian emission patterns. To facilitate greater versatility within VLP solutions, this paper considers non-Lambertian sources. It evaluates practical Received Signal Strength Indicator (RSSI) data obtained during the Internet of Radio Light (IoRL) 5G Measurement Campaign conducted in a home environment using non-Lambertian Total Internal Reflection (TIR) lenses, which produce a halo lighting effect. The initial analysis explores the calibration of Lambertian source parameters against datasheet values leading to reductions in the average Positioning Error (PE) of 17% and 3% for averaged and individual RSSI measurement sets, respectively. While this highlights improvements from correct calibration, the Lambertian model proved to be unsuitable for non-Lambertian sources. In the absence of any existing non-Lambertian models, the authors proposed the Halo Lens Compensation (HLC) method to calibrate the considered non-Lambertian TIR sources correctly. The HLC further reduced PE in the calibrated results by 50% and 39%, with mean PE of 3.1 cm and 4.6 cm for averaged and individual RSSI measurement sets, respectively. In conclusion, for VLP using non-Lambertian sources, the existing Lambertian model is unsuitable. However, the proposed HLC is highly effective and achieves positioning accuracy comparable to existing literature using Lambertian sources.
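RSSI-based VLC positioning of the kind evaluated above builds on the standard line-of-sight Lambertian channel model that the paper found unsuitable for its TIR lenses. As background, the sketch below implements that textbook model and its inversion to a distance estimate; it is not the paper's Halo Lens Compensation method (which the abstract does not specify), and the transmit power, semi-angle, and detector area below are illustrative assumptions.

```python
import math

def lambertian_order(half_power_angle_deg):
    # Lambertian order m from the LED's semi-angle at half power.
    phi = math.radians(half_power_angle_deg)
    return -math.log(2) / math.log(math.cos(phi))

def received_power(p_tx, d, irradiance_deg, incidence_deg,
                   half_power_angle_deg=60.0, detector_area=1e-4):
    # Line-of-sight Lambertian channel gain:
    #   Pr = Pt * A * (m+1)/(2*pi*d^2) * cos(phi)^m * cos(psi)
    m = lambertian_order(half_power_angle_deg)
    phi = math.radians(irradiance_deg)   # angle of irradiance at the LED
    psi = math.radians(incidence_deg)    # angle of incidence at the receiver
    gain = detector_area * (m + 1) / (2 * math.pi * d ** 2)
    return p_tx * gain * math.cos(phi) ** m * math.cos(psi)

def estimate_distance(p_rx, p_tx, half_power_angle_deg=60.0,
                      detector_area=1e-4):
    # Invert the model for a receiver directly below the LED
    # (phi = psi = 0), as in grid-based RSSI positioning.
    m = lambertian_order(half_power_angle_deg)
    return math.sqrt(p_tx * detector_area * (m + 1) / (2 * math.pi * p_rx))

if __name__ == "__main__":
    pr = received_power(p_tx=1.0, d=2.0, irradiance_deg=0, incidence_deg=0)
    print(f"Recovered distance: {estimate_distance(pr, 1.0):.2f} m")
```

The sensitivity of the distance estimate to the Lambertian order `m` is exactly why miscalibrated source parameters, or non-Lambertian emission, degrade the positioning error figures quoted above.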
The lack of flexibility and delays in software delivery processes led to the creation of DevOps practices, such as CI/CD pipelines, to automate the process of building and deploying applications. These tools are widely used today in software development environments. However, there is little research on the sustainability involved in these processes. This study uses Intel Power Gadget to estimate the power consumption of a server running the CI/CD of a Node.js application on two different platforms: cloud-based pipelines from the Microsoft Azure cloud, and GitHub Actions. Moreover, the manual build and deployment without automation tools was also measured for comparison with the CI/CD approach. The study revealed that although not using automation tools generates slightly less energy consumption, there is no significant difference between using cloud-based pipelines and manually building and deploying an application. It is therefore concluded that the use of DevOps can be sustainable when using cloud services and optimizing the pipeline architecture.
The modern world is moving in a direction where virtual presence is equally or more important than physical presence. Web development has become so widespread that anyone looking to have a career in the ICT sector must have good knowledge of it. But the environmental aspects of web development have often been neglected. Studies show that around two percent of total carbon emissions each year come from the ICT sector. In this research, 4 different frameworks of the popular backend-building platform NodeJS, namely ExpressJS, Fastify, NestJS, and Connect, have been studied in terms of energy consumption. The experiments are set up in such a way that all the components, such as the database, API services, and API tester, are present and similar in every framework. The collected data were analyzed by descriptive as well as inferential analysis, and it was found that, in terms of energy consumption, services of a similar nature take similar energy irrespective of the framework being used. In addition, the energy data was converted to GHG emissions with the help of the standard 2022 conversion factor to observe the environmental effects of each framework.
This paper presents the technical performance results of a measurement campaign from a 5G indoor millimeter Wave (mmWave) and Visible Light Communications (VLC) multi component carrier system, which was developed in a Horizon 2020 research project called Internet of Radio-Light (IoRL). The measurement campaign was performed in the famous Integer House laboratory at the Innovation Park in Building Research Establishment in Watford, U.K., which represents a typical European home environment. It includes four field test results: 1) VLC received signal quality measured as Error Vector Magnitude (EVM) against coverage, 2) mmWave received signal quality measured as EVM against coverage, 3) VLC location accuracy against a prescribed grid using received signal strength, 4) Comparison of measured and simulated Electromagnetic Field (EMF) strength against coverage. This measurement campaign not only tests the system concept in a realistic indoor home environment but also provides analysis of the results with practical recommendations on further technical enhancements required to improve the system performance and insights into viable commercial solutions and applications. Other environments in which this technology could be deployed were envisaged as: underground train platforms and tunnels, museums and supermarkets.
Due to the great demand for throughput and reliability for multimedia applications in Fifth Generation (5G) networks, many broadcasting systems adopt Millimeter Wave (mmWave) technology to address the lack of spectrum resources. As one of the 5G-PPP projects, the Internet of Radio Light (IoRL) project adopts the 40 GHz mmWave band to support a high-speed and stable Ultra-High-Definition (UHD) television broadcasting service in the indoor environment. Because of their high-frequency properties, mmWave bands usually suffer from high path loss and penetration loss. Thus, in order to overcome these issues, directional antennas are employed to provide additional power gain while increasing transmission distance. However, mmWave with directional antennas brings additional problems, such as a limited transmission angle and more multipath effects. Therefore, in this paper, for a better understanding of the factors that impact the signal quality and transmission coverage of the directional 40 GHz mmWave band in the indoor environment, a measurement campaign is introduced in detail and the channel characteristics are measured and analysed in varying cases. The main characteristics of concern are path loss, shadow fading, average Power Delay Profile (PDP), Root-Mean-Square (RMS) delay spread, arrival rate and coherence bandwidth. All measured characteristic values are summarised in three tables at the end of this paper. Besides these, as a reference for channel analysis and a metric of signal quality and effective coverage, the Error Vector Magnitude (EVM) of the received signal in each case is measured and discussed. Moreover, a simulation is performed based on a statistical channel model to validate the measured results.
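Among the channel characteristics listed above, the RMS delay spread has a standard closed form computed from the measured power delay profile. A minimal sketch of the textbook moment-based definition (not the paper's measurement pipeline; the toy PDP values are illustrative):

```python
import math

def rms_delay_spread(delays_ns, powers_mw):
    """RMS delay spread of a power delay profile.

    `delays_ns`: excess delay of each tap; `powers_mw`: tap power.
    Uses the power-weighted first and second moments of the delays.
    """
    total = sum(powers_mw)
    mean_tau = sum(p * t for p, t in zip(powers_mw, delays_ns)) / total
    mean_tau2 = sum(p * t * t for p, t in zip(powers_mw, delays_ns)) / total
    return math.sqrt(mean_tau2 - mean_tau ** 2)

if __name__ == "__main__":
    # Two equal-power taps at 0 ns and 10 ns: spread is 5 ns.
    print(rms_delay_spread([0.0, 10.0], [1.0, 1.0]))
```

The RMS delay spread summarises multipath dispersion in one number; its reciprocal gives a rough estimate of the coherence bandwidth, another of the characteristics measured in the campaign.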
Machine learning (ML) solutions have surged across various domains, requiring significant specialized expertise to execute. To make ML more accessible, automated machine learning (AutoML) and automated deep learning (AutoDL) have emerged, simplifying model development without extensive user intervention. While AutoML and AutoDL offer more accessibility, their environmental impact, given their increased computational demand, remains unexplored. This study addresses this gap by comparing the energy consumption and performance of five popular open-source AutoML libraries: FLAML, H2O AutoML, AutoGluon, TPOT, and AutoKeras. The experimental results demonstrate a statistically significant difference, with 99% confidence using the ANOVA and t-test methods, in CPU and GPU energy consumption between these libraries. The study also discusses the importance of considering both performance and sustainability when selecting libraries, through a weighted scoring algorithm. The study's key findings show that AutoGluon and FLAML offer a balanced approach, achieving good performance while minimizing energy consumption; H2O AutoML excels in model versatility; AutoKeras emphasizes performance over energy reduction; and TPOT excels more for tree-based algorithms rather than general ML tasks. Future work may include investigating the impact of parameters like early stopping, training-test splits, and hyperparameter selection on energy consumption and exploring these libraries with various datasets to increase the generalizability of the results.
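The weighted scoring idea mentioned above can be sketched generically: min-max normalise each metric, flip those where lower is better (energy, time), and combine with weights. The library names, metric values, and weights below are hypothetical placeholders, not the study's measured results or its exact scoring formula.

```python
def weighted_scores(libraries, weights):
    """Score alternatives on min-max-normalised metrics.

    `libraries`: {name: {metric: value}}. For each metric, `weights`
    gives (weight, higher_is_better); weights should sum to 1.
    """
    metrics = list(weights)
    lo = {m: min(v[m] for v in libraries.values()) for m in metrics}
    hi = {m: max(v[m] for v in libraries.values()) for m in metrics}
    scores = {}
    for name, vals in libraries.items():
        s = 0.0
        for m, (w, higher_better) in weights.items():
            span = (hi[m] - lo[m]) or 1.0        # avoid divide-by-zero
            norm = (vals[m] - lo[m]) / span
            if not higher_better:
                norm = 1.0 - norm                # lower energy is better
            s += w * norm
        scores[name] = round(s, 3)
    return scores

# Hypothetical numbers for illustration only (not measured results).
libs = {
    "LibA": {"accuracy": 0.91, "energy_J": 820.0},
    "LibB": {"accuracy": 0.88, "energy_J": 430.0},
    "LibC": {"accuracy": 0.93, "energy_J": 1190.0},
}
print(weighted_scores(libs, {"accuracy": (0.5, True), "energy_J": (0.5, False)}))
```

With equal weights, the "balanced" library that is neither best nor worst on either axis can still come out on top, which mirrors the balanced-approach finding reported for AutoGluon and FLAML.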
Forest ecosystems have long been one of the most important environments for our planet, providing key resources, promoting biodiversity, and fighting against global climate change. In order to facilitate effective forest monitoring and management, artificial intelligence can be used to address the need for data processing for real-time forest supervision systems. In this paper, conditional generative adversarial networks (CGANs) have been explored to synthesize accurate image segmentations of forest aerial images, mapping forested against non-forested areas. With 1000 training, 200 validation, and 100 test images subsetted from Kaggle's Forest Aerial Images for Segmentation dataset, three CGANs with varying numbers of parameters and of upsampling and downsampling layers have been trained and evaluated. The results of training show that the smallest CGAN, with 37x fewer generator and 4x fewer discriminator parameters than the biggest CGAN, performed the best with an IoU of 0.701, Dice coefficient of 0.778, pixel accuracy of 0.781, recall of 0.919, and precision of 0.734 on the test set. Using a weighted scoring algorithm comparing inference time in addition to the five aforementioned metrics, the medium CGAN was determined to be the best, with a weighted score of 0.861, closely followed by the small CGAN's 0.783 score for the dataset. These outputs signify the need for compatibility between model complexity and dataset size, the importance of quality labelled annotations for GAN conditioning, and most importantly, the potential of CGANs for accurate, automated, and effective segmentation of aerial forest images.
Network and Application Layer Services for High Performance Communications in Buildings
The Internet of Radio Light (IoRL) project has developed a high-performance building communications system with Mobile Edge Computing (MEC) facilities that can potentially provide intelligent SDN/NFV services at over 1 Gbit/s for each room in the property, up to a total of 10 Gbit/s, with latency from the user terminal to the property gateway of less than 0.5 ms and with location estimation accuracy of better than 10 cm. Communications at this level of performance and intelligence will allow for innovative application and network layer services so that people can live, work and play from their home instead of being dependent on fossil fuels for physical transportation for their social interactions.
Virtual Gateway: Local Multimedia Services and mobility management for 5G Internet of Radio Light gNB
Mobile Network Operators (MNOs) are handling an increased amount of traffic via mobile networks each year, especially in recent years, with the emergence of the latest generations of smartphones and tablets, which are capable of consuming large amounts of data to operate data-rich applications and multimedia items. Data analysts and researchers estimate that 70-90% of cellular data is consumed within indoor environments. Therefore, MNOs have realized the need to improve network performance for indoor environments. Small cells are being introduced as one of the solutions to improve network capacity for local-area and indoor environments. Internet of Radio Light (IoRL) is a 5G small-cell gNB base station. It is designed to boost the UE's data rate and release MNO wireless spectrum resources specifically in indoor environments; IoRL extends network coverage and enables the deployment of intelligent services near UEs. IoRL base stations innovatively use Visible Light Communication (VLC) and millimeter Wave (mmWave) for the radio access network, while utilizing Software Defined Networking (SDN) and Virtualized Network Functions (VNFs) technologies to provide a flexible, intelligent platform. This paper presents the Virtual Gateway (VGW), a virtualized entity that enables an optimized and efficient deployment for a cluster of IoRL base stations. Mobility management is discussed, highlighting the role of the VGW, especially in the intra-gNB handover procedure. Since the IoRL gNB has not yet been deployed within an MNO architecture, we evaluated the efficiency of the VGW using a mathematical model. The obtained results prove the efficiency of the VGW in terms of reducing overhead signaling, ultimately enabling faster end-to-end communication.
Three-dimensional Access Point Assignment in Hybrid VLC, mmWave and WiFi Wireless Access Networks
To improve data speed and reliability, hybrid wireless networks combine two or more Radio Access Technologies (RATs), such as Visible Light Communications (VLC), millimetre wave (mmWave), Wireless Fidelity (WiFi) and 4G Long Term Evolution (LTE). The Internet of Radio Light (IoRL) is a cutting-edge system paradigm that combines three RATs to take advantage of the vast VLC and mmWave spectrum alongside the ubiquitous coverage of WiFi. In this respect, this work introduces a new convex-optimisation-based solution method for the three-dimensional (3D) Access Point Assignment (APA) problem of the IoRL system under individual user positioning, priority and minimum Quality-of-Service (QoS) constraints. We use both the IoRL real-world testbed and large-scale Matlab simulations to show that our solution converges in linear time and attains a better throughput-versus-fairness trade-off than existing efforts.
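The flavour of the APA problem can be shown with a deliberately simplified sketch (not the paper's convex solver): each user is assigned the RAT with the highest achievable rate that still meets a minimum QoS floor. All rates and thresholds are hypothetical:

```python
# Simplified greedy access-point assignment over three RATs.
# Rates and the QoS floor are made-up illustrative numbers.

MIN_QOS_MBPS = 50  # assumed per-user QoS floor in Mbit/s

# rates[user][rat]: achievable downlink rate in Mbit/s (hypothetical)
rates = {
    "ue1": {"VLC": 900, "mmWave": 700, "WiFi": 120},
    "ue2": {"VLC": 40,  "mmWave": 650, "WiFi": 110},  # VLC link blocked
    "ue3": {"VLC": 300, "mmWave": 30,  "WiFi": 90},   # mmWave link blocked
}

def assign_aps(rates, min_qos=MIN_QOS_MBPS):
    """Greedy APA: best feasible RAT per user, None if no RAT meets QoS."""
    assignment = {}
    for ue, per_rat in rates.items():
        feasible = {rat: r for rat, r in per_rat.items() if r >= min_qos}
        assignment[ue] = max(feasible, key=feasible.get) if feasible else None
    return assignment

print(assign_aps(rates))  # → {'ue1': 'VLC', 'ue2': 'mmWave', 'ue3': 'VLC'}
```

A greedy per-user choice ignores the fairness and priority constraints the paper optimises jointly, which is exactly why a convex formulation is needed for the real 3D APA problem.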
Internet of radio and light: 5G building network radio and edge architecture
The Internet of Radio-Light (IoRL) is a cutting-edge system paradigm enabling seamless 5G service provision in indoor environments such as homes, hospitals and museums. The system draws on an innovative architecture built on the synergy between the Radio Access Network (RAN) technologies of millimeter Wave communications (mmWave) and Visible Light Communications (VLC) to improve network throughput, latency and coverage compared with existing efforts. The aim of this paper is to introduce the IoRL system architecture and present the key technologies and techniques utilised at each layer of the system. Special emphasis is given to detailing the IoRL physical layer (Layer 1) and Medium Access Control layer (MAC, Layer 2) by describing their unique design characteristics and interfaces, as well as the robust IoRL methods for improving the accuracy of user positioning, which rely on uplink mmWave and downlink VLC measurements.
A 5G Radio-Light SDN Architecture for Wireless and Mobile Network Access in Buildings
The Internet of Radio-Light architecture provides both direct WLAN-type access to the Internet using a 5G RAN and access to the Internet via mobile networks, using a 5G mmWave and VLC Radio Access Network (RAN) within buildings. An SDN is used to manage the various packet flows between the RAN, the Internet interface and the Mobile Network user- and control-plane interfaces for smartphones, Tablet PCs, HDTVs and Virtual Reality headsets within buildings.
5G Internet of radio light services for Musée de la Carte à Jouer
In this paper we present a 5G Internet Radio-Light (IoRL) architecture and services for museums that can be readily deployed because it utilizes the unlicensed visible light and millimeter wave parts of the electromagnetic spectrum, which are used to provide museum visitors with accurate location, interaction, access to the Internet and high-resolution video on a Tablet PC. The paper describes the museum, its related use case scenarios, the user and functional requirements, and the IoRL architecture.
Indoor Unicasting/Multicasting service based on 5G Internet of Radio Light network paradigm
Next-generation mobile networks are designed to offer new features to User Equipment (UE) and to host more services that improve UEs' Quality of Experience (QoE). In this regard, this paper presents the concept, implementation and components of two such next-generation services: the Follow Me Service (FMS) and the Multicast Sharing Service (MSS). MSS is a service offered by a 5G gNB small cell to UEs in indoor environments (e.g. a museum); it enables clients to use their smartphones in a server-client mode. A server (or host) selects media content from content servers, then casts it to a group of registered clients according to predefined criteria (subscription-based or relative-proximity-based). MSS does not rely on the capabilities of the UE's smartphone but on network capabilities. MSS can be provisioned by utilising UEs' geolocation information, a robust switching mechanism between multiple 5G Radio Access Technologies (RATs), and the intelligence of the SDN/NFV Intelligent Home IP Gateway of the Internet of Radio Light (IoRL) project paradigm. Since the IoRL infrastructure is not yet ready for deployment, we used the Mininet platform to obtain performance measurements for the service. Simulation results show the effectiveness of our proposal under various use case scenarios by eliminating packet loss and improving the QoE of museum users.
The Performance Measurement of the 60GHz mmWave Module for IoRL Network
As one of the key features of 5G networks, millimeter wave (mmWave) technology can provide ultra-wide bandwidth to support higher data rates. However, in high frequency bands the mmWave signal still suffers from high path loss, multipath fading and signal blockage, especially in indoor environments. Channel conditions and quality of service (QoS) differ considerably between application scenarios, so it is essential to investigate the impact of the mmWave channel on system performance. This paper investigates and measures the performance of a 60GHz mmWave module that is exploited for downlink and uplink high-data-rate transmission in the Internet of Radio-Light (IoRL) project. The coverage area and throughput of the mmWave module are estimated by measuring the error vector magnitude (EVM) of received signals at different transmitter (TX) and receiver (RX) angles and at different locations in a laboratory. The measurement environment and system setup are introduced, and the waveform design for the measurement is discussed. The results show that this 60GHz mmWave module provides acceptable performance only in some cases, which restricts its application scenarios.
Realising a new generation of 5G VR systems through Internet of Radio Light
Virtual Reality (VR) systems are currently limited in either processing power, portability or functionality. 5G networking, with super high data rates and ultra-low latency, is expected to revolutionise much of what we do, notably transforming VR experiences. The Internet of Radio Light (IoRL) project presents a 5G architecture that could further enhance VR experiences by bridging gaps between various VR technologies and reducing current restrictions. This could enable a single IoRL VR system, capable of combining the significant processing performance of PC operated VR systems with similar physical freedoms offered by standalone VR headsets, as well as delivering equally impressive VR experiences to mobile users. Most notably, the IoRL project combines both Visible Light Communication (VLC) and mmWave technology to produce an Indoor Positioning System (IPS) which, as presented in earlier works, poses an opportunity for a novel VR tracking method. This paper explores the possibilities of an IoRL VR system and proposes a model and solution to evaluate the concept validity. The obtained results reflect that while this system is effective for 5G wireless localisation, further work is required to meet VR requirements.
A scaleable and license free 5G internet of radio light architecture for services in train stations
In this paper we present a 5G Internet Radio-Light (IoRL) architecture for underground train stations that can be readily deployed because it utilizes the unlicensed visible light and millimeter wave parts of the spectrum, which do not require Mobile Network Operator (MNO) permission to deploy, and which are used to provide travelers with accurate location, interaction, and access to Internet and Cloud-based services, such as high resolution video on a Tablet PC. The paper describes the train station use cases and the IoRL architecture.
Simulation and Performance Analysis of Software-Based Mobile Core Network Architecture (SBMCNA) Using OMNeT++
Software Defined Networking (SDN) represents a future framework for mobile networks. This paper discusses the modifications required within the EPC to overcome some of the limitations of the current EPS. These modifications introduce an SDN-based solution, the Software Based Mobile Core Network Architecture (SBMCNA); we also show how OpenFlow protocol 1.3 has been extended with two methods to support GPRS Tunnelling Protocol (GTP) operations. The use of an intelligent Forwarding Device (FD) is proposed to reduce the signalling load. SBMCNA was built on OMNeT++ by extending the SimuLTE [12] and OpenFlow 1.3 [13] modules. Load balancing and resiliency scenarios were used to demonstrate the capability of the proposed system to reduce the signalling load, and preliminary results of the system performance are presented.
A Scalable and License Free 5G Internet of Radio Light Architecture for Services in Homes & Businesses
In this paper we present a 5G Internet Radio-Light (IoRL) architecture for homes that can be readily deployed because it utilizes the unlicensed visible light and millimeter wave parts of the spectrum, which do not require Mobile Network Operator (MNO) permission to deploy, and which are used to provide the inhabitants of houses with accurate location, interaction, and access to Internet and Cloud-based services such as high resolution video on a Tablet PC. The paper describes the home use cases and the IoRL architecture.
Software Defined Selective Traffic Offloading (SDSTO)
This paper presents the Software Defined Selective Traffic Offloading (SDSTO) solution. The solution uses a layer-2-based backhaul network with a distributed cloud-based architecture placed in close proximity to the mobile access network. SDSTO leverages SDN features to redirect user traffic to and from the cloud. SDSTO is modelled and simulated in OMNeT++, and the preliminary results show improvements in system performance in terms of end-to-end delay and handover time.
5G Internet of radio light services for supermarkets
In this paper we present a 5G Internet Radio-Light (IoRL) architecture for supermarkets that can be readily deployed because it utilizes the unlicensed visible light and millimeter wave parts of the spectrum, which are used to provide shoppers with accurate location, interaction, and access to Internet and Cloud-based services such as high resolution video on a Tablet PC. The paper describes the supermarket use cases, the user and functional requirements, and the IoRL architecture.
IoRL Indoor Location Based Data Access, Indoor Location Monitoring & Guiding and Interaction Applications
We target the problem of providing 5G network connectivity in rural zones by means of Base Stations (BSs) carried by Unmanned Aerial Vehicles (UAVs). Our goal is to schedule the UAV missions to: i) limit the amount of energy consumed by each UAV; ii) ensure the coverage of selected zones over the territory; iii) decide where and when each UAV has to be recharged at a ground site; and iv) manage the amount of energy provided by the Solar Panels (SPs) and batteries installed at each ground site. We then formulate the RURALPLAN optimization problem, a variant of the unsplittable multicommodity flow problem defined on a multiperiod graph. After detailing the objective function and the constraints, we solve RURALPLAN in a realistic scenario. Results show that RURALPLAN outperforms a solution that ensures coverage but does not consider the energy management of the UAVs.
Energy Consumption for Training and Inference of Machine Learning Models and Their Processes
While energy consumption remains widely studied in the field of computer architecture, it has received less attention in machine learning and artificial intelligence. Artificial Intelligence and Machine Learning models are widely utilised in applications including data science, computer vision and natural language processing. Despite being a highly incentivised and sought-after field, most of its research concentrates on model size, the amount of data and accuracy, without concern for computational constraints such as power and energy consumption. This partially stems from the limited availability of energy-evaluation tools in machine learning and a lack of support from frameworks and cloud providers, largely due to security concerns. This research evaluates the energy consumption and carbon emissions of several machine learning and deep learning models in various use cases, using existing energy-measuring tools to provide insights into sustainable model choices for lightweight applications.
Python Unplugged: A Comparative Study of Seven Energy-Efficient Coding Techniques
This research focuses on the impact of Python code optimisation techniques on energy usage and performance, with the goal of promoting sustainable software development. Given the growing worldwide emphasis on minimising energy consumption, it examines seven alternative programming techniques to identify the most energy-efficient practices. Through extensive experimentation, it was found that built-in functions, lazy evaluation, and caching are some of the leading solutions for optimising energy usage and performance in Python programming. The research revealed substantial variations in energy efficiency and performance by conducting experiments to evaluate the effectiveness of these techniques, providing essential insights for software developers. The study not only sets the foundation for future studies in energy-efficient Python programming, but it also paves the way for new methods, such as training an artificial intelligence (AI) model to predict the energy footprint of software programs before executing them.
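Two of the techniques the study identifies as leading, caching and lazy evaluation, can be sketched in a few lines of standard-library Python. The naive-Fibonacci workload is chosen only to make the effect visible, not taken from the study:

```python
# Caching: functools.lru_cache memoises results so each subproblem is
# computed once, avoiding exponential recomputation (and wasted energy).
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Memoised Fibonacci."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Lazy evaluation: a generator yields values on demand instead of
# materialising the whole sequence in memory.
def squares(limit):
    for i in range(limit):
        yield i * i

print(fib(30))             # 832040, near-instant thanks to the cache
print(sum(squares(1000)))  # built-in sum consuming the lazy stream
```

Combining a built-in (`sum`) with a generator exercises three of the seven techniques at once, which is the sort of composition the study's experiments measure.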
SHA256: Comparing the Energy Consumption of Different Implementations
Adopting energy-efficient approaches is essential to save resources while ensuring the highest throughput. Combining them with security-enabled approaches is also a concern for protecting data, which is now at greater risk with the introduction of cloud-based approaches. It is therefore important to bridge these concerns and pursue security techniques that are also energy efficient. To that end, this chapter analyses the power consumption of different implementations of one data security technique, the SHA256 hashing algorithm. SHA256 was implemented in the Visual Studio IDE in Python using four libraries, Hashlib, CryptoHash, cryptography and PyNaCl, and the power consumption was measured using the Intel Power Gadget software. After applying descriptive analysis, a t-test and an ANOVA test to the collected data, the chapter finds that the PyNaCl library consumes the least power per second and also emits the least CO2. The chapter therefore recommends the PyNaCl library for implementing the SHA256 algorithm to ensure the lowest energy consumption, and indicates that better approaches may be found when implementing an algorithm with energy efficiency in mind.
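Of the four libraries compared, only Hashlib ships with the Python standard library; a minimal sketch of that implementation is below (the power-measurement side via Intel Power Gadget is out of scope here):

```python
# SHA-256 via the standard-library hashlib, one of the four implementations
# whose power draw the chapter compares.
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of `data` as a hex string."""
    return hashlib.sha256(data).hexdigest()

print(sha256_hex(b"abc"))
# ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```

Since SHA-256 is standardised, every compared library must produce this same digest; only the speed and energy of producing it differ, which is what the chapter measures.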
Comparative Analysis of Energy Efficiency in Virtualization Tools and Underlying Operating Systems
Current research into information technology trends and their impact on the global economy shows that the continuing rise and adoption of digital technologies is driving the use of cloud computing, with growing concern about the energy consumed by cloud infrastructures. In this paper we focus on the energy consumption of cloud computing infrastructures, examining the energy consumption levels of several chosen virtualization technologies and their underlying operating systems, with a close look at their greenhouse gas (GHG) emissions and at how cloud computing components affect energy usage in the global ICT ecosystem. The methodology was divided into two categories: a macro methodology that emphasised life-cycle analysis, and a micro methodology that used both an experimental setup and inferential statistics to confirm the results. The findings showed that Microsoft Hyper-V consumed the least energy. We expect this finding to improve cloud computing practitioners' and policymakers' understanding of the energy consumption patterns and GHG emissions of virtualization tools, helping them make sustainable environmental decisions.
Locality-Aware QoS Optimization for Microservices Scheduling in Kubernetes Cluster
Microservices and Kubernetes are increasingly adopted for building and deploying large-scale distributed software systems in cloud computing environments. A microservice architecture divides an application into smaller, loosely coupled microservices, each of which can be deployed and scaled independently in a Kubernetes cluster. While this flexibility allows microservice instances to be scaled dynamically to meet user demand, the complex dependencies among microservices make it challenging to manage application performance and its impact on Quality of Service (QoS) during on-demand instance scheduling. In this paper, we propose LOCUS, a locality-aware QoS optimizer for scheduling microservices in a Kubernetes cluster. The LOCUS optimizer is designed on top of an observability ecosystem that leverages real-time performance metrics to optimize the node selection process in the default Kubernetes scheduling framework. Further, the locality awareness implicitly favours microservice dependencies without creating hard-rule-based scheduling. We evaluate our approach on a large-scale microservices workload. The results confirm that, in contrast to the default scheduling mechanism, the LOCUS optimizer achieves better microservices QoS.
A Policy-Driven Approach for Securing Microservices Workflow in Kubernetes Cluster
Kubernetes-based microservices deployments are increasingly adopted in modern cloud computing environments. However, securing microservices in a Kubernetes cluster is critical because they operate in a distributed manner across different nodes, each of which is a potential entry point for attackers due to weak authentication mechanisms or misconfigured security policies. To address these issues, we propose a two-layered, policy-driven security mechanism for securing microservices workflows in a Kubernetes cluster. The first layer employs mutual TLS (mTLS) for secure service-to-service authentication using the Istio service mesh. The second layer introduces a workflow-based authorization policy enforced through the Open Policy Agent (OPA) Gatekeeper. Experimental results demonstrate that unauthorized access is effectively blocked at the entry point, confirming that the proposed approach establishes a robust, multi-layered security environment for Kubernetes-based microservices.
To accommodate latency-sensitive IoT and AI workloads, serverless computing is becoming more popular in edge environments. However, the default Kubernetes scheduler is resource-agnostic and ignores the energy and performance limitations of edge nodes. Prior approaches usually optimized only for latency or energy, ignoring the combined effects of cold-start dynamics, inter-node communication and inter-service dependencies. In this work, we propose a lightweight heuristic scheduling approach that combines inter-service traffic, energy and latency into a single cost function. This approach, implemented as a custom Kubernetes Scheduling Framework plugin, has low overhead and is used in conjunction with a descheduler that consolidates workloads by draining underutilized nodes. The combination enables responsive short-term placements and long-term energy efficiency. We test the system on a Raspberry Pi cluster, using Knative workloads typical of IoT analytics workflows. Average latency decreased by 29%, failure rates by 74%, and energy consumption per request by 32%, all consistent improvements over the default scheduler. These results show that multi-objective, metrics-aware placement, especially when combined with descheduling for consolidation, can significantly improve the quality of service and energy efficiency of serverless edge platforms.
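The single cost function combining traffic, energy and latency can be sketched as a weighted sum scored per candidate node. The weights, metric names and node data below are illustrative assumptions, not the plugin's actual values:

```python
# Hypothetical multi-objective node scoring: latency and energy add cost,
# while traffic affinity (co-located chatty services) reduces it.
# All metrics are assumed to be pre-normalised to [0, 1].

WEIGHTS = {"latency": 0.4, "energy": 0.3, "traffic": 0.3}

def node_cost(metrics, weights=WEIGHTS):
    """Lower is better."""
    return (weights["latency"] * metrics["latency"]
            + weights["energy"] * metrics["energy"]
            - weights["traffic"] * metrics["traffic"])

def pick_node(candidates):
    """Return the name of the cheapest candidate node."""
    return min(candidates, key=lambda name: node_cost(candidates[name]))

nodes = {
    "pi-1": {"latency": 0.2, "energy": 0.5, "traffic": 0.9},  # chatty neighbour
    "pi-2": {"latency": 0.6, "energy": 0.2, "traffic": 0.1},
}
print(pick_node(nodes))  # → pi-1
```

In the Kubernetes Scheduling Framework, a function like `node_cost` would back a Score plugin, with the descheduler periodically draining nodes whose aggregate cost stays high.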
Current teaching
- ICT and Environment, Green Computing Technologies, Responsibly Green (MSc)
- Smart Systems (MSc)
- Production Projects (support) (year 3)
- Team projects (support) (year 2)
- Computer Communications (year 1)
- Computing systems (year 1)