Leeds Beckett University - City Campus,
Woodhouse Lane,
LS1 3HE
Dr Sidhu Selvarajan
Lecturer
Dr Shitharth completed his PhD in the Department of Computer Science & Engineering at Anna University and his postdoctoral research at the University of Essex, Colchester, UK. He has worked at various institutions and has seven years of teaching experience.
About
Dr Shitharth works as a Lecturer in Cyber Security at Leeds Beckett University, Leeds, UK. He has published more than 100 papers in international journals and 20 in international and national conferences, and has published four patents. He is an active member of the IEEE Computer Society and five other professional bodies, and a member of the International Blockchain Organization. He is a certified Hyperledger expert and certified blockchain developer. His current research interests include cyber security, blockchain, critical infrastructure and systems, network security, and ethical hacking. He is an active researcher, reviewer, and editor for many international journals.
Research interests
Dr Shitharth's doctoral research addressed the security of SCADA networks, a critical infrastructure system. His work has been published by reputed publishers including IEEE, ACM, Elsevier, Springer, IET, Polytechnica, Wiley, Bentham Science, Taylor & Francis, MDPI, and Hindawi, and he has presented his research findings at highly rated IEEE conferences. He has been involved in multiple funded projects from different nations, including the Ministry of Education, Ethiopia, and the Ministry of Education, Saudi Arabia. He has four published patents. Overall, he is a passionate and driven young researcher.
Publications (216)
Computer Modeling Approaches for Blockchain-Driven Supply Chain Intelligence: A Review on Enhancing Transparency, Security, and Efficiency
Blockchain Technology (BT) has emerged as a transformative solution for improving the efficacy, security, and transparency of supply chain intelligence. Traditional Supply Chain Management (SCM) systems frequently suffer from problems such as data silos, a lack of real-time visibility, fraudulent activities, and inefficiencies in tracking and traceability. Blockchain’s decentralized and immutable ledger offers a solid foundation for dealing with these issues; it facilitates trust, security, and real-time data sharing among all parties involved. Through an examination of critical technologies, methodologies, and applications, this paper delves deeply into computer modeling-based blockchain frameworks within supply chain intelligence. The effect of BT on SCM is evaluated by reviewing current research and practical applications in the field. As part of the process, we reviewed the research on blockchain-based supply chain models, smart contracts, Decentralized Applications (DApps), and how they connect to other cutting-edge innovations such as Artificial Intelligence (AI) and the Internet of Things (IoT). To quantify blockchain’s performance, the study introduces analytical models for efficiency improvement (η), security enhancement (δ), and scalability (S
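The tamper-evidence property the review attributes to blockchain's decentralized ledger can be illustrated with a minimal hash-chained ledger. This is a generic Python sketch, not a model from the paper; the block fields and supply-chain records are hypothetical.

```python
import hashlib
import json

def block_hash(body: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, record: dict) -> None:
    """Link a new supply-chain record to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"index": len(chain), "record": record, "prev_hash": prev}
    chain.append({**body, "hash": block_hash(body)})

def verify(chain: list) -> bool:
    """Any retroactive edit breaks either a block hash or a hash link."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, {"sku": "A1", "event": "shipped"})
append_block(chain, {"sku": "A1", "event": "received"})
assert verify(chain)
chain[0]["record"]["event"] = "lost"   # retroactive tampering
assert not verify(chain)
```

Because every block commits to the hash of its predecessor, a single edit invalidates the whole suffix of the chain, which is the traceability guarantee the review discusses.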
Quantum-Driven Reinforcement Learning for Spectral Energy Optimization in Massive MIMO Hybrid Beamforming for 6G
The evolution of 6G wireless networks demands highly efficient beamforming strategies to optimize spectral and energy efficiency in massive MIMO systems. This study introduces a Quantum-Driven Reinforcement Learning (QDRL) framework for Spectral Energy Optimization in Massive MIMO Hybrid Beamforming for 6G, leveraging Quantum Deep Q-Networks (Q-DQN), Quantum Policy Gradient (QPG), and Quantum Approximate Optimization Algorithm (QAOA). The framework integrates mruby-based lightweight scripting for efficient deployment in edge-AI environments, enhancing computational flexibility and resource efficiency. Performance evaluations demonstrate that the Hybrid Quantum Model achieves 11.21 bps/Hz spectral efficiency, 97% resource utilization efficiency, and reduces energy consumption to 0.50 Joules/bit, outperforming classical models. The Bit Error Rate (BER) is minimized to 0.0025, and the convergence time is 48.7 s, significantly improving computational efficiency. Comparative analysis with conventional Deep Reinforcement Learning (DRL) techniques shows that the proposed quantum-enhanced model provides a 32% improvement in energy efficiency and a 21% reduction in computational complexity. The integration of mruby enhances the adaptability of the system in low-power and embedded environments, making it a viable solution for real-time 6G hybrid beamforming. This research highlights the transformative potential of quantum-assisted AI frameworks for scalable, high-speed, and energy-efficient wireless communication.
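The headline metrics above can be related to first principles: spectral efficiency is bounded by the Shannon limit log2(1 + SNR) in bps/Hz, and energy per bit is transmit power divided by data rate. This is a generic sketch of those two formulas, not the paper's QDRL framework.

```python
import math

def spectral_efficiency(snr_linear: float) -> float:
    """Shannon spectral efficiency in bps/Hz for a single stream."""
    return math.log2(1 + snr_linear)

def energy_per_bit(tx_power_w: float, rate_bps: float) -> float:
    """Transmit energy consumed per delivered bit, in Joules/bit."""
    return tx_power_w / rate_bps

# Reaching 11.21 bps/Hz on a single stream would need SNR = 2**11.21 - 1
se = spectral_efficiency(2 ** 11.21 - 1)
assert abs(se - 11.21) < 1e-9
```

In a massive MIMO setting the per-stream figure is multiplied across spatial streams, which is why hybrid beamforming gains translate directly into the spectral-efficiency numbers reported.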
Secured transmission approach for transportation units with twin creations and artificial intelligence algorithm
This paper examines effective transportation units that provide secure data transmission using a communication protocol. Since vehicle connectivity in its development phase suffers from low reliability, the proposed method examines the security of each transportation unit using twin representations. Moreover, to analyze the exact characteristics of transportation units, a similarity index is measured using an artificial intelligence algorithm, and individual node connections are represented with the Internet of Things (IoT). An analytical representation with dynamic transportation units is provided with different parameters, where data can be transmitted for the included vehicular systems at minimized energy rates. The outcome of the projected model is examined in terms of attacks, trust values, interference, and congestion, and a comparison is made with an existing approach. The comparison shows that the proposed method provides improved security with a reduced congestion rate of 4% for each transmitted packet. In addition, the comparison results show that effective data transmission can be achieved for all connected transportation units with a reduced interference of 0.003%.
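A similarity index between a transportation unit and its twin representation can be illustrated with cosine similarity, a common choice for comparing feature vectors; the paper does not specify its measure, and the feature vectors below are hypothetical.

```python
import math

def similarity_index(a: list, b: list) -> float:
    """Cosine similarity between two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Assumed features for a unit and its digital twin: speed, load, signal strength
unit = [0.90, 0.20, 0.40]
twin = [0.88, 0.21, 0.39]
assert similarity_index(unit, twin) > 0.99   # twin closely matches the unit
```

A high score indicates the twin faithfully mirrors the live unit, while a drop in the score could flag tampering or a faulty node.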
Enhanced IoT threat detection using Graph-Regularized neural networks optimized by Sea-Lion algorithm
The Internet of Things (IoT) has revolutionized business operations, but its interconnected nature introduces significant cyber security risks, including malware and software piracy that compromise sensitive data and organizational reputation. To address this challenge, we propose an IoT threat detection method using graph-regularized neural networks. Our framework utilizes the Google Code Jam dataset, which is pre-processed using Edge-Aware Smoothing Sharpening Filtering (EASSF) to enhance data quality, and feature extraction is performed using the General Synchroextracting Chirplet Transform (GSCT). The AGRNN classifier is then employed to distinguish between benign and malicious threats, with optimization performed using the Sea-Lion Optimization Algorithm. Our approach demonstrates exceptional performance, achieving accuracy improvements of up to 29.60% over existing methods. Specifically, the proposed work achieves 29.60%, 18%, and 14.7% higher accuracy for detecting benign threats, and 20.1%, 27.6%, and 13.2% higher accuracy for malicious threats compared to existing methods. Furthermore, our approach achieves 17.9%, 26.1%, and 13% higher F-measure for benign threats, and 16.7%, 35.6%, and 17% higher F-measure for malicious threats. The ROC analysis also confirms the effectiveness of our approach, with 29.02%, 18.0%, and 14.7% higher ROC values compared to existing methods. These results confirm the effectiveness of our approach in detecting cyber security threats in IoT systems, providing a robust solution for safeguarding sensitive data and protecting organizational reputation. By enhancing IoT security, our proposed work offers a promising approach for organizations seeking to bolster their cyber security defences.
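Graph regularization, the core idea behind the classifier above, penalizes a model for assigning different scores to samples joined by a graph edge. The following is an illustrative squared-error form of that loss, not the AGRNN itself; the edge weights and predictions are hypothetical.

```python
def graph_regularized_loss(preds, labels, edges, lam=0.1):
    """Task loss plus a graph smoothness penalty: connected samples
    (e.g. structurally similar code submissions) should score similarly.
    edges is a list of (i, j, weight) tuples."""
    task = sum((p - y) ** 2 for p, y in zip(preds, labels)) / len(preds)
    smooth = sum(w * (preds[i] - preds[j]) ** 2 for i, j, w in edges)
    return task + lam * smooth

preds, labels = [0.9, 0.8, 0.1], [1, 1, 0]
edges = [(0, 1, 1.0)]          # samples 0 and 1 are neighbours in the graph
loss = graph_regularized_loss(preds, labels, edges)
assert abs(loss - 0.021) < 1e-6
```

The `lam` hyperparameter trades off fitting the labels against smoothness over the graph; at `lam=0` the penalty vanishes and the loss reduces to plain mean squared error.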
Enriching Image Security in Healthcare Based on Cryptography and Deep Learning Techniques
Securing electronic health records in the Internet of Medical Things is a key interest in healthcare due to the sector's varied surroundings. As technology evolves, preserving the privacy, reliability, and accessibility of healthcare data becomes extremely challenging. Cryptographic techniques provide a possible solution for safeguarding confidential medical image information while it is being transferred and stored. On the other hand, deep learning has the potential to transform cryptography by providing strong encryption, quality improvements, and detection capabilities for healthcare image security. To increase the privacy of healthcare image information, this research analysed the fusion of deep learning and cryptographic methods. It surveys the current state of deep learning-based image anomaly detection approaches in working contexts, such as network topologies, supervision levels, and evaluation norms. This study provides direction for future research to overcome these problems, along with the possibilities and challenges of medical image cryptography and image anomaly detection. This work bridges the gap between deep learning and encryption, paving the way for better privacy, integrity, and availability of key image data.
The advent of smart healthcare technology has provided various benefits, including remote patient monitoring, personalized treatments, and early disease detection. However, transmitting sensitive patient data through IoMT devices raises significant security and privacy concerns. To protect patients’ data and smart medical devices, authentication is required. We uncovered security flaws in the current healthcare architecture, including impersonation, stolen verifiers, and man-in-the-middle attacks. This encouraged us to propose a security architecture that protects patients’ data from attacks and guarantees the security of sensitive healthcare information. Our proposed security scheme, ฿-ED-CRY (Bi Encryption Decryption Crypto), is based on Elliptic Curve Cryptography (ECC). It protects patients’ sensitive healthcare information by enhancing privacy-preserving mechanisms, and we validate the effectiveness of our scheme. We implemented the proposed scheme using GCC 4.9.5 and the pairing-based cryptography (PBC) library. The results demonstrate that our scheme performs better than existing security schemes in terms of privacy and computational efficiency, with a computation cost around 13.36% lower than that of related schemes. This research explores the revolutionary effects of integrated healthcare systems, with an emphasis on the convergence of digital health technologies and patient treatment in real-world IoMT scenarios.
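The ECC foundation of the scheme rests on point arithmetic over a finite field. A toy curve makes the shared-secret idea behind ECC-based key agreement concrete; the parameters below are a standard textbook example, far too small for real security, and this is not the ฿-ED-CRY construction.

```python
# Toy curve y^2 = x^3 + 2x + 2 over F_17 (group order 19; illustration only)
P_MOD, A = 17, 2
G = (5, 1)  # base point

def point_add(p, q):
    """Add two curve points; None represents the point at infinity."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                   # inverse points
    if p == q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, p):
    """Double-and-add computation of k*p."""
    result = None
    while k:
        if k & 1:
            result = point_add(result, p)
        p = point_add(p, p)
        k >>= 1
    return result

# ECDH-style exchange: both sides derive the same shared point
alice_secret, bob_secret = 3, 7
shared_a = scalar_mult(alice_secret, scalar_mult(bob_secret, G))
shared_b = scalar_mult(bob_secret, scalar_mult(alice_secret, G))
assert shared_a == shared_b == (6, 3)
```

Because scalar multiplication is commutative (a·(b·G) = b·(a·G)) while recovering a secret from a·G is hard at real key sizes, the two parties agree on a key an eavesdropper cannot compute.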
Enhancing Military Visual Communication in Harsh Environments Using Computer Vision Techniques
This research investigates the application of digital images in military contexts by utilizing analytical equations to augment human visual capabilities. A comparable filter is used to improve the visual quality of the photographs by reducing truncations in the existing images. Furthermore, the collected images undergo processing using histogram gradients and a flexible threshold value that may be adjusted in specific situations. Thus, it is possible to reduce the occurrence of overlapping circumstances in collective picture characteristics by substituting grey-scale photos with colorized factors. The proposed method offers more robust feature representations by imposing a limiting factor to reduce overall scattering values. This is achieved by visualizing a graphical function. Moreover, to derive valuable insights from a series of photos, both the separation and inversion processes are conducted, analyzing comparison results across four different scenarios. The results of the comparative analysis show that the proposed method effectively reduces the time and space complexities to 1 s and 3%, respectively. In contrast, the existing strategy exhibits higher complexities of 3 s and 9.1%, respectively.
Heart disease prediction with a feature-sensitized interpretable framework for the Internet of Medical Things sensors
Cardiovascular health is increasingly at risk due to modern lifestyle factors such as obesity, smoking, stress, hypertension, and sedentary behavior. Post-pandemic health practices and medication side effects have further contributed to rising cases of early heart failure, particularly among individuals aged 25–40 years. This highlights the need for an automated and interpretable framework to predict heart disease at an early stage. In this study, body vitals were acquired from a secondary dataset. Machine learning models including Support Vector Machine, Random Forest, Decision Tree, and Logistic Regression were employed for classification. Model performance was evaluated using accuracy, F1-score, and k-fold cross-validation. Among the tested models, the Random Forest classifier demonstrated superior performance with an accuracy and F1-score of 0.955. Interpretability is enhanced by explaining model predictions with Local Interpretable Model-Agnostic Explanations (LIME) as local surrogates and SHAP values as global surrogates. SHAP decision plots provided clear insights into classification behaviour and feature contributions. The proposed interpretable machine learning framework successfully predicts heart disease with high accuracy while maintaining transparency in decision-making. By integrating sensor data with cloud-based analysis and explainable AI techniques, this study contributes to reducing the incidence of early heart failures and supports more reliable decision-making in healthcare applications.
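The evaluation metrics reported above are standard; accuracy and F1-score follow directly from a confusion matrix. This is a generic sketch of their computation, unrelated to the study's dataset.

```python
def accuracy_f1(y_true, y_pred):
    """Accuracy and F1 for a binary task (1 = heart disease present)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    acc = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return acc, f1

# Hypothetical labels: one positive case is missed (a false negative)
acc, f1 = accuracy_f1([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
assert acc == 0.8 and abs(f1 - 0.8) < 1e-9
```

F1 is the harmonic mean of precision and recall, which is why it is preferred over raw accuracy when the disease-positive class is rare.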
This work presents an enhanced identification procedure utilising bioinformatics data, employing optimisation techniques to tackle crucial difficulties in healthcare operations. A system model is designed to address essential difficulties by analysing major contributions, including risk factors, data integration and interpretation, error rates, and data wastage and gain. Furthermore, all essential aspects are integrated with deep learning optimisation, encompassing data normalisation and hybrid learning methodologies to efficiently manage large-scale data, resulting in personalised healthcare solutions. The real-time implementation of the suggested technology addresses the significant disparity between data-driven and healthcare applications, facilitating the seamless integration of genetic insights. The contributions are illustrated in real time, and the results are presented through simulation experiments encompassing four scenarios and two case studies. The comparison reveals that the efficacy of bioinformatics for enhancing routes stands at 7%, while complexity diminishes to 1%, indicating that healthcare operations can be transformed by computational biology.
The ever-evolving domain of machine learning has witnessed significant advancements with the advent of federated learning, a paradigm revered for its capacity to facilitate model training on decentralized data sources while upholding data confidentiality. This research introduces a federated learning-based framework designed to address gaps in existing smoking prediction models, which often compromise privacy and lack data generalizability. By utilizing a distributed approach, the framework ensures secure, privacy-preserved model training on decentralized devices, enabling the capture of diverse smoking behavior patterns. The proposed framework incorporates careful data preprocessing, rational model architecture selection, and optimal parameter tuning to predict smoking with high precision. The results demonstrate the efficacy of the model, achieving an accuracy rate of 97.65%, complemented by an F1-score of 97.41%, precision of 97.31%, and recall rate of 97.36%, significantly outperforming traditional approaches. This research also discusses the benefits of federated learning, including efficient time management, parallel processing, secure model updates, and enhanced data privacy, while addressing limitations such as computational overhead. These findings underscore the transformative potential of federated learning in healthcare, paving the way for future advancements in privacy-preserved predictive modeling.
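The core of federated averaging, aggregating client models weighted by local dataset size so that raw data never leaves the device, can be sketched as below. This is a minimal FedAvg illustration, not the paper's full framework; the client weights and sizes are hypothetical.

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg: average client weight vectors, weighted by local dataset size."""
    total = sum(client_sizes)
    dims = len(client_weights[0])
    return [
        sum(w[d] * n for w, n in zip(client_weights, client_sizes)) / total
        for d in range(dims)
    ]

# Two hypothetical clients; only weight vectors reach the server, never data
w_global = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[100, 300])
assert w_global == [2.5, 3.5]   # client 2 counts 3x due to its larger dataset
```

In a full system this aggregation runs once per communication round, with each client training locally on its own smoking-behavior data before sending updated weights.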
Big Data and Blockchain Technology for Secure IoT Applications
Big Data and Blockchain Technology for Secure IoT Applications presents a comprehensive exploration of the intersection between two transformative technologies: big data and blockchain, and their integration into securing Internet of Things (IoT) applications. As the IoT landscape continues to expand rapidly, the need for robust security measures becomes paramount to safeguard sensitive data and ensure the integrity of connected devices. This book delves into the synergistic potential of leveraging big data analytics and blockchain’s decentralized ledger system to fortify IoT ecosystems against various cyber threats, ranging from data breaches to unauthorized access. Within this groundbreaking text, readers will uncover the foundational principles underpinning big data analytics and blockchain technology, along with their respective roles in enhancing IoT security. Through insightful case studies and practical examples, this book illustrates how organizations across diverse industries can harness the power of these technologies to mitigate risks and bolster trust in IoT deployments. From real-time monitoring and anomaly detection to immutable data storage and tamper-proof transactions, the integration of big data and blockchain offers a robust framework for establishing secure, transparent, and scalable IoT infrastructures. Furthermore, this book serves as a valuable resource for researchers, practitioners, and policymakers seeking to navigate the complexities of IoT security. By bridging the gap between theory and application, this book equips readers with the knowledge and tools necessary to navigate the evolving landscape of interconnected devices while safeguarding against emerging cyber threats. With contributions from leading experts in the field, it offers a forward-thinking perspective on harnessing the transformative potential of big data and blockchain to realize the full promise of the IoT securely.
In this work, a new charge scheduling algorithm is proposed using the War Strategy Optimization (WSO) algorithm, inspired by war strategies such as attack, defense, and assigning soldiers to positions. The proposed WSO algorithm is validated in a constructed geographic area consisting of six starting/destination points, sixteen nodes, and twelve charging stations. In terms of waiting time and charging cost, the experimental results show that the WSO method improves considerably over current methods. The average waiting time and average charging cost of EVs are validated in MATLAB under different considerations, with the number of EVs varied from 25 to 100 and the number of charging piles varied from 1 to 4. Compared with the First Come First Serve algorithm, the WSO algorithm lowered charging costs by up to 13.67% and waiting time by up to 83.25%. Compared to the Chaotic Harris Hawk Optimization and Harris Hawk Optimization algorithms, the WSO method reduced waiting time by 11.17% and 39.09%, respectively, and charging costs by 3.61% and 12.45%, respectively. Especially in situations with limited charging infrastructure, these findings show that the WSO algorithm may improve the efficiency and cost-effectiveness of EV charging management systems. For real-world EV charging management systems, the method's capacity to efficiently allocate EVs among charging stations, lower waiting times, and lower charging costs makes it a potential solution.
The need for secure data transmission devices is growing in current-generation networking fields. It is very important to process all transmitted data confidentially and to maintain integrity by preventing unauthorized users from entering the internal system. However, some of the secure devices already on the market cannot be trusted for long periods, as non-repudiation factors are much higher. In a data processing technique, a device therefore needs to be verified completely before it is trusted. Hence, the proposed method provides possible solutions for verifying a network device before transmitting data. To verify the data without difficulty, blockchain procedures are incorporated in which no large segments of data are transmitted, since fragmentation of data is applied in many circumstances. Moreover, zero trust devices are verified over longer time periods, and only if appropriate data processing routes are captured are the devices allowed to transmit. To further test the accuracy of zero trust devices, real-time data outcomes are analyzed under five different scenarios, and congestion is greatly reduced in the projected model. Thus, in comparison with the existing method, the proposed outcome, enacted with an analytical model, proves to be more than 78% more effective.
Rapid industrialization has fueled the need for effective optimization solutions, which has led to the widespread use of meta-heuristic algorithms. Among a repertoire of over 600 such algorithms, more than 300 new methodologies have been developed in the last ten years. This increase highlights the need for a sophisticated grasp of these novel methods. The use of biological and natural phenomena to inform meta-heuristic optimization strategies has seen a paradigm shift in recent years. The observed trend indicates an increasing acknowledgement of the effectiveness of bio-inspired methodologies in tackling intricate engineering problems, providing solutions that exhibit rapid convergence rates and unmatched fitness scores. This study thoroughly examines the latest advancements in bio-inspired optimization techniques, investigating each method's unique characteristics, optimization properties, and operational paradigms to determine how revolutionary these approaches could be for problem-solving paradigms. Additionally, extensive comparative analyses against conventional benchmarks, using metrics such as search history, trajectory plots, and fitness functions, are conducted to elucidate the superiority of these new approaches. Our findings demonstrate the revolutionary potential of bio-inspired optimizers and provide new directions for future research to refine and expand upon these intriguing methodologies. Our survey could be a lighthouse, guiding scientists towards innovative solutions rooted in various natural mechanisms.
An intelligent emotion prediction system using improved sand cat optimization technique based on EEG signals
Emotion recognition and prediction play a vital role in human-computer interaction (HCI), offering potential for efficient, intuitive, and adaptive systems. This paper presents an innovative and efficient approach for emotion prediction from electroencephalogram (EEG) signals using an Improved Sand Cat Optimization (ISCO) technique to enhance prediction accuracy and efficiency. EEG signals directly indicate brain activity and are a rich and reliable source of data for capturing emotional states. The proposed method improves the cat movement by adapting a convex lens opposition-based learning technique, which steers the movement towards the target. By extending the existing Sand Cat Optimization algorithm, the proposed method converges quickly to target identification, achieving efficient emotion prediction. The algorithm has been evaluated using an openly available EEG signals dataset, which contains 2132 labelled records across three emotional classes. The performance of the proposed method is compared with other nature-inspired optimization algorithms such as Particle Swarm Optimization (PSO), Artificial Rabbit Optimization (ARO), Artificial Bee Colony Optimization (ABCO), and Cat Optimization (CO). The experimental evaluation shows that the proposed technique outperforms the other bio-inspired optimization techniques, with significant improvements in emotion prediction and an accuracy of 97.5%. This research contributes to the advancement of emotion prediction systems in mental health care monitoring, HCI systems, gaming systems, and affective computing.
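Convex lens opposition-based learning builds on the classic opposition idea: for a candidate x in [lower, upper], also evaluate the mirrored point lower + upper − x and keep the fitter of the two. The sketch below shows plain opposition-based population initialization, not the ISCO variant itself; the fitness function and bounds are illustrative.

```python
import random

def opposite(x, lower, upper):
    """Opposition-based learning: mirrored candidate x' = lower + upper - x."""
    return [lo + hi - xi for xi, lo, hi in zip(x, lower, upper)]

def obl_init(pop_size, lower, upper, fitness):
    """Sample a population, add each candidate's opposite, keep the fitter half
    (fitness is minimized)."""
    pop = [[random.uniform(lo, hi) for lo, hi in zip(lower, upper)]
           for _ in range(pop_size)]
    pool = pop + [opposite(x, lower, upper) for x in pop]
    return sorted(pool, key=fitness)[:pop_size]

random.seed(0)
best = obl_init(10, [-5, -5], [5, 5], fitness=lambda x: x[0] ** 2 + x[1] ** 2)
assert len(best) == 10
```

Evaluating each candidate and its mirror doubles the chance of landing near the optimum early, which is the mechanism behind the faster convergence such variants report.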
An African vulture optimization algorithm based energy efficient clustering scheme in wireless sensor networks
Energy efficiency plays a major role in sustaining the lifespan and stability of the network and is one of the most critical factors in wireless sensor networks (WSNs). To overcome the problem of energy depletion in WSNs, this paper proposes a new Energy Efficient Clustering Scheme named African Vulture Optimization Algorithm based EECS (AVOACS), using AVOA. The proposed AVOACS method improves clustering by including four critical terms: communication mode decider, distance between sink and nodes, residual energy, and intra-cluster distance. By mimicking the natural scavenging behavior of African vultures, AVOACS continuously balances energy consumption across nodes, resulting in an increase in network stability and lifetime. For cluster head (CH) selection, AVOACS considers the same four parameters: communication mode decider, the distance between sink and node, residual energy, and intra-cluster distance. In comparison to the OE2-LB protocol, simulation findings demonstrate that AVOACS enhances stability, network lifetime, and throughput by 21.5%, 31.4%, and 16.9%, respectively. The results show that AVOACS is an effective clustering algorithm for energy-efficient operation in heterogeneous WSN environments, as it contributes to a large increase in network lifetime and a significant enhancement of performance.
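A cluster-head fitness function combining three of the criteria listed above (residual energy, distance to the sink, and intra-cluster distance, omitting the communication mode decider) might look like the sketch below. The weights and node data are illustrative assumptions, not taken from AVOACS.

```python
def ch_fitness(node, sink, members, w=(0.5, 0.3, 0.2)):
    """Score a candidate cluster head: favour high residual energy,
    proximity to the sink, and a compact cluster."""
    dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    intra = sum(dist(node["pos"], m) for m in members) / len(members)
    return (w[0] * node["energy"]
            - w[1] * dist(node["pos"], sink)
            - w[2] * intra)

nodes = [
    {"id": 0, "pos": (1, 1), "energy": 0.9},   # energetic, central node
    {"id": 1, "pos": (4, 4), "energy": 0.2},   # depleted, peripheral node
]
members = [(0, 0), (2, 2)]
best = max(nodes, key=lambda n: ch_fitness(n, sink=(0, 0), members=members))
assert best["id"] == 0
```

In an AVOA-style scheme, a metaheuristic would search over candidate CH sets to maximize a fitness of this shape, rotating the role so no single node drains first.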
Development of edge computing and classification using The Internet of Things with incremental learning for object detection
Edge computing and the Internet of Things (IoT), which offer significantly shorter latency intervals, are among the promising network technologies in today's generation of systems. When edge computing is used, there is no need to process the data on a cloud platform; alternative approaches employing offline IoT and incremental learning techniques can be used instead. Using IoT, the incremental learning process transfers all essential data within a specific device. Thus, edge computing, IoT, and incremental learning techniques are combined in the proposed method to detect numerous objects with varying weights. An analytical model that minimizes the parametric values and has various objectives is used to carry out the object detection process. Additionally, the proposed method was tested using evaluation metrics from five different case studies simulated with the MATLAB computing toolkit. The efficacy of the proposed method rises to 62% when the simulated results are compared with the current method. The suggested method can accurately identify several objects in real time when operating in multi-object mode.
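Incremental learning on an edge device means updating a model one sample at a time without storing past data. A minimal sketch of the principle uses running per-class means with a nearest-mean classifier; this is an illustration only, not the paper's detector, and the feature values are hypothetical.

```python
class IncrementalMeanClassifier:
    """Nearest-class-mean classifier whose per-class means are updated
    one sample at a time, so no past data needs to be retained on-device."""

    def __init__(self):
        self.means, self.counts = {}, {}

    def partial_fit(self, x, label):
        if label not in self.means:
            self.means[label], self.counts[label] = list(x), 1
            return
        self.counts[label] += 1
        n = self.counts[label]
        # Running-mean update: m <- m + (x - m) / n
        self.means[label] = [m + (xi - m) / n
                             for m, xi in zip(self.means[label], x)]

    def predict(self, x):
        sq = lambda m: sum((a - b) ** 2 for a, b in zip(m, x))
        return min(self.means, key=lambda lbl: sq(self.means[lbl]))

clf = IncrementalMeanClassifier()
for x, y in [([0, 0], "bike"), ([0.2, 0.1], "bike"), ([5, 5], "car")]:
    clf.partial_fit(x, y)
assert clf.predict([0.1, 0.1]) == "bike"
assert clf.predict([4.8, 5.2]) == "car"
```

The constant-memory update is what makes such learners suitable for edge hardware: the device keeps one mean vector and one counter per object class, regardless of how many frames it has seen.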
Author Correction: A quantum trust and consultative transaction-based blockchain cybersecurity model for healthcare systems
Correction to: Scientific Reports, published online 02 May 2023 The original version of this Article contained an error in the name of the author Haralambos Mouratidis, which was incorrectly given as Haris Mouratidis. The original Article has been corrected.
An artificial intelligence lightweight blockchain security model for security and privacy in IIoT systems
The Industrial Internet of Things (IIoT) promises to deliver innovative business models across multiple domains by providing ubiquitous connectivity, intelligent data, predictive analytics, and decision-making systems for improved market performance. However, traditional IIoT architectures are highly susceptible to many security vulnerabilities and network intrusions, which bring challenges such as lack of privacy, integrity, trust, and centralization. This research aims to implement an Artificial Intelligence-based Lightweight Blockchain Security Model (AILBSM) to ensure privacy and security of IIoT systems. This novel model is meant to address issues that can occur with security and privacy when dealing with Cloud-based IIoT systems that handle data in the Cloud or on the Edge of Networks (on-device). The novel contribution of this paper is that it combines the advantages of both lightweight blockchain and Convivial Optimized Sprinter Neural Network (COSNN) based AI mechanisms with simplified and improved security operations. Here, the significant impact of attacks is reduced by transforming features into encoded data using an Authentic Intrinsic Analysis (AIA) model. Extensive experiments are conducted to validate this system using various attack datasets. In addition, the results of privacy protection and AI mechanisms are evaluated separately and compared using various indicators. By using the proposed AILBSM framework, the execution time is minimized to 0.6 seconds, the overall classification accuracy is improved to 99.8%, and detection performance is increased to 99.7%. Due to the inclusion of auto-encoder based transformation and blockchain authentication, the anomaly detection performance of the proposed model is highly improved, when compared to other techniques.
Disaster management ontology- an ontological approach to disaster management automation
The geographical location of any region, as well as large-scale environmental changes caused by a variety of factors, invites a wide range of disasters. Floods, droughts, earthquakes, cyclones, landslides, tornadoes, and cloudbursts are all common natural disasters that destroy property and kill people. On average, 0.1% of all deaths globally in the past decade have been due to natural disasters. The National Disaster Management Authority (NDMA), a branch of the Ministry of Home Affairs, plays an important role in disaster management in India by taking responsibility for risk mitigation, response, and recovery from all natural and man-made disasters. This article presents an ontology-based disaster management framework based on the NDMA's responsibility matrix, named the Disaster Management Ontology (DMO). It aids in task distribution among the necessary authorities at various stages of a disaster and provides a knowledge-driven decision support system for financial assistance to victims. In the proposed DMO, ontology has been used both to integrate knowledge and as a working platform for reasoners, and the Decision Support System (DSS) ruleset is written in Semantic Web Rule Language (SWRL), which is based on the First Order Logic (FOL) concept. In addition, OntoGraph, a class view of the taxonomy, is used to make the taxonomy more interactive for users.
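An SWRL ruleset of the kind described reduces to forward chaining over first-order-style facts. A tiny reasoner conveys the idea; the rules and fact names below are illustrative, not the DMO's actual ruleset.

```python
def apply_rules(facts, rules):
    """Tiny forward-chaining reasoner: fire every rule whose conditions
    all hold, adding its conclusion to the fact base until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# SWRL-flavoured rules (hypothetical authorities and predicates):
rules = [
    ({"disaster(flood)", "severity(high)"}, "deploy(response_team)"),
    ({"deploy(response_team)"}, "notify(state_authority)"),
]
result = apply_rules({"disaster(flood)", "severity(high)"}, rules)
assert "notify(state_authority)" in result
```

Chaining the second rule off the first's conclusion mirrors how an ontology reasoner cascades task assignments across authorities as a disaster escalates.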
A quantum trust and consultative transaction-based blockchain cybersecurity model for healthcare systems
Healthcare cybersecurity has long interested researchers because it can improve the security of patient and health-record data. As a result, a great deal of cybersecurity research focuses on the safe exchange of health data between patients and the medical setting. Existing approaches still suffer from high computational complexity, increased time consumption, and cost complexity, all of which affect the effectiveness and performance of the complete security system. Hence, this work proposes a technique called Consultative Transaction Key Generation and Management (CTKGM) to enable secure data sharing in healthcare systems. It generates a unique key pair based on random values with multiplicative operations and time stamps. The patient data is then safely stored in discrete blocks of hash values using the blockchain methodology. The Quantum Trust Reconciliation Agreement Model (QTRAM), which calculates a trust score based on feedback data, ensures reliable and secure data transfer. By allowing safe communication between patients and the healthcare system based on feedback analysis and trust values, the proposed framework makes a novel contribution to the field. Additionally, during communication, the Tuna Swarm Optimization (TSO) method is employed to validate nonce verification messages. Nonce message verification is the part of QTRAM that helps verify users during transmission. The effectiveness of the suggested scheme is demonstrated by comparing the obtained findings with other current state-of-the-art models after analyzing a variety of evaluation metrics to test the performance of this security model.
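The abstract describes CTKGM's key derivation only at a high level (random values, multiplicative operations, time stamps, and hash-chained blocks). The following is a minimal Python sketch of that idea; the function names and the exact mixing formula are illustrative assumptions, not the paper's actual scheme.

```python
import hashlib
import secrets
import time

def generate_key_pair(seed_bits=128):
    """Illustrative key-pair derivation from two random values, a
    multiplicative operation, and a timestamp (hypothetical formula)."""
    r1 = secrets.randbits(seed_bits)
    r2 = secrets.randbits(seed_bits)
    ts = int(time.time() * 1000)
    private_seed = (r1 * r2) ^ ts  # multiplicative mixing with timestamp
    private_key = hashlib.sha256(str(private_seed).encode()).hexdigest()
    public_key = hashlib.sha256(private_key.encode()).hexdigest()
    return private_key, public_key

def store_block(patient_record: bytes, prev_hash: str) -> str:
    """Chain a record into a discrete block by hashing it together
    with the previous block's hash, blockchain-style."""
    return hashlib.sha256(prev_hash.encode() + patient_record).hexdigest()
```

A production system would replace this with an authenticated key-agreement protocol; the sketch only shows the shape of the derivation and chaining steps.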
IoT based arrhythmia classification using the enhanced hunt optimization-based deep learning
Advances in information technology, the Internet of Things (IoT), and miniaturized devices have enhanced the healthcare field by enabling real-time patient monitoring, which helps to provide medication anywhere and anytime. However, accurate detection is still a challenging task, for which an effective classification model is introduced in this research. The proposed method is the Enhanced Hunt optimization-based Deep convolutional neural network (Enhanced Hunt based-Deep CNN), in which the Enhanced Hunt Optimization Algorithm (EHOA) is developed by fusing the hunting habit of the predator with the herding characteristics of the herding dog to enhance global optimal convergence. Here, the ECG signal from individuals is collected using the IoT network and stored on the hospital server, which the doctor accesses on request. Classification is performed using the Enhanced Hunt based-Deep CNN, and the performance revealed its effectiveness with an accuracy, sensitivity, and specificity of 95.33%, 94.92%, and 97.57%, respectively.
Towards improving e-commerce customer review analysis for sentiment detection
According to a report published by Business Wire, the market value of e-commerce reached US$ 13 trillion and is expected to reach US$ 55.6 trillion by 2027. In this rapidly growing market, product and service reviews can influence our purchasing decisions. It is challenging to manually evaluate reviews to make decisions and examine business models. However, users can examine and automate this process with Natural Language Processing (NLP). NLP is a well-known technique for evaluating and extracting information from written or audible texts. NLP research investigates the social architecture of societies. This article analyses the Amazon dataset using various combinations of voice components and deep learning. The suggested module focuses on identifying sentences as ‘Positive’, ‘Neutral’, ‘Negative’, or ‘Indifferent’. It analyses the data and labels the ‘better’ and ‘worse’ assumptions as positive and negative, respectively. With the expansion of the internet and e-commerce websites over the past decade, consumers now have a vast selection of products within the same domain, and NLP plays a vital part in classifying products based on evaluations. It is also possible to predict sponsored and unpaid reviews using NLP with machine learning. This article examined various machine learning algorithms for predicting the sentiment of e-commerce website reviews. The automation achieves a maximum validation accuracy of 79.83% when using FastText as the word embedding and a Multi-channel Convolutional Neural Network.
An optimization-based machine learning technique for smart home security using 5G
Generally, cellular networks are divided into discrete geographic zones, where a secure routing protocol is important. In this study, a Sailfish-based Distributed IP Mobility Management (SbDMM) architecture for a security protocol in a smart home using 5G is suggested. Smart homes first gather data via IoT devices, and the data are then communicated with the use of a Home Gateway (HGW). Mobile Nodes (MN) and Corresponding Nodes (CN) process the data communication. In addition, the acquired data are encrypted and secured using a session key, and an authenticated key and a cipher key are used to secure the routing optimization. As a result, the fitness of the sailfish is updated along a protocol path that is optimized for securing data from attackers. The designed framework is then implemented in Python, and the obtained results are compared to those of other methodologies in terms of execution time, confidentiality rate, efficiency, delay, and task completion.
Machine Learning Technique for Precision Agriculture Applications in 5G-Based Internet of Things
Monitoring systems based on artificial intelligence (AI) and wireless sensors are in high demand and give exact data extraction and analysis. The main objective of this paper is to detect the most appropriate plant development parameters, with the aim of reducing the hazards in agriculture and promoting intelligent farming. Advancement in agriculture is not new, but AI-based wireless sensors will push intelligent agriculture to a new standard. The research goal of this work is to improve the prediction state using image-processing-based machine learning techniques, specifically to detect and control cotton leaf diseases. This paper comprises several aspects, including leaf disease detection, a server-based remote monitoring system, moisture and temperature sensing, and soil sensing. Insects and pathogens are typically responsible for plant diseases that reduce productivity if not treated in time. This paper presents a method to monitor soil quality and prevent cotton leaf diseases. The proposed system uses a regression technique of artificial intelligence to identify and classify leaf diseases. After infection identification, the information is delivered to farmers through an Android app. The Android app also displays soil parameter values such as moisture, humidity, and temperature, along with the chemical level in a container. Using the app, a relay can be switched on/off to regulate the motor and chemical sprinkler system as required. In the proposed system, the SVM algorithm delivers the best accuracy in detecting various diseases, demonstrating its efficiency in detection and control and improving cultivation for farmers.
Attention-based bidirectional-long short-term memory for abnormal human activity detection
Abnormal human behavior must be monitored and controlled in today’s technology-driven era, since it may cause damage to society in the form of assault or web-based violence, such as direct harm to a person or the propagation of hate crimes through the internet. Several authors have attempted to address this issue, but no one has yet come up with a solution that is both practical and workable. Recently, deep learning models have become popular as a means of handling massive amounts of data, but their potential to categorize aberrant human activity remains unexplored. Using a convolutional neural network (CNN), a bidirectional long short-term memory (Bi-LSTM), and an attention mechanism that attends to the unique spatiotemporal characteristics of raw video streams, a deep learning approach has been implemented in the proposed framework to detect anomalous human activity. After analyzing the video, our suggested architecture can reliably assign an abnormal human behavior to its designated category. Analytic findings comparing the suggested architecture to state-of-the-art algorithms reveal accuracies of 98.9%, 96.04%, and 61.04% on the UCF11, UCF50, and subUCF crime datasets, respectively.
Resource-Efficient Synthetic Data Generation for Performance Evaluation in Mobile Edge Computing Over 5G Networks
Mobile Edge Computing (MEC) in 5G networks has emerged as a promising technology to enable efficient and low-latency services for mobile users. In this paper, we present a novel synthetic data generation approach tailored for evaluating MEC in 5G networks. Our methodology incorporates resource-efficient techniques to generate realistic synthetic datasets that capture the spatio-temporal patterns of mobile traffic and user behavior. By leveraging advanced modeling techniques, including multi-head attention and bidirectional LSTM, we accurately model the complex dependencies in the data while optimizing computational resources. The proposed synthetic data generator enables the creation of diverse datasets that closely resemble real-world scenarios, facilitating the evaluation of MEC performance and optimizing resource utilization. Through extensive experiments and evaluations, we demonstrate the effectiveness of our approach in enabling accurate assessments of MEC in 5G networks. Our work contributes to the field by providing a robust methodology for synthetic data generation specifically tailored for MEC evaluation, addressing the need for resource-efficient evaluation frameworks in the context of emerging technologies. The results of our study provide valuable insights for the design and optimization of MEC systems in real-world deployments.
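The generator itself is described only architecturally (multi-head attention, bidirectional LSTM). As a far simpler illustration of the kind of spatio-temporal traffic trace such a tool produces, here is a hedged stdlib-only sketch that superimposes a 24-hour diurnal cycle on Gaussian noise; the function name and parameters are hypothetical, not from the paper.

```python
import math
import random

def synth_traffic(hours, base=100.0, amplitude=60.0, noise=10.0, seed=42):
    """Generate a synthetic hourly mobile-traffic trace: a daily
    (24 h) sinusoidal pattern plus Gaussian noise, clipped at zero."""
    rng = random.Random(seed)  # fixed seed keeps the trace reproducible
    trace = []
    for t in range(hours):
        diurnal = amplitude * math.sin(2 * math.pi * t / 24)  # daily cycle
        load = max(0.0, base + diurnal + rng.gauss(0, noise))  # non-negative load
        trace.append(load)
    return trace
```

A learned generator would replace the sine-plus-noise model with patterns fitted to real traces; the sketch only conveys the interface of such a tool.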
Improved Transportation Model with Internet of Things Using Artificial Intelligence Algorithm
In this paper, the application of transportation systems under real-time traffic conditions is evaluated with data-handling representations. The proposed method is designed to detect the load present in a vehicle, where functionality tasks are computed in the system. Compared to the existing approach, the design model in the proposed method divides the computing areas into several cluster regions, thereby simplifying the monitoring system and minimizing control errors. Furthermore, a route management technique is combined with an Artificial Intelligence (AI) algorithm to transmit the data to the appropriate central servers. The combined objective case studies are therefore examined under both minimization and maximization criteria, increasing the efficiency of the proposed method. Finally, four scenarios are chosen to investigate the projected design’s effectiveness. Across all simulated metrics, the proposed approach provides better operational outcomes, averaging 97%, thereby reducing the amount of traffic under real-time conditions.
Digital Transformations in Medical Applications Using Audio and Virtual Reality Procedures
Numerous members of society struggle with healthcare issues, and despite the use of sensing technology, diseases in the body still cannot be detected. The main cause of this identification failure is the absence of any recognized virtual technology on the market. The majority of healthcare applications aim to create a specific application that simply delivers sensing values and ignores the virtual representation of those values. So, in order to detect the existence of viruses inside the body, this article offers an integration platform that links sensing devices with Virtual/Audio Reality (VR/AR) approaches. Additionally, a specific form of swarm intelligence algorithm known as Fruit Fly (FF) is used in the recognition process with a modified fitness function. The FF technique offers a great deal of low-layer awareness, which improves the output for efficient operation. The proposed AR/VR technique is used with biological sensors to analyze real-time situations, and two case studies are considered. The experimental results indicate that all validated case studies offer excellent productivity and are adaptable to all environmental circumstances.
Deep Learning Approaches for Prognosis of Automated Skin Disease
Skin problems are among the most common ailments on Earth. Despite their prevalence, assessing them is not easy because of the complexities of skin tones, hair colors, and hairstyles. Skin disorders pose a significant public health risk across the globe and become dangerous when they enter the invasive phase. Dermatological illnesses are a significant concern for the medical community. Because of increased pollution and poor diet, the number of individuals with skin disorders is rising at an alarming rate. People often overlook the early signs of skin illness. The current approach for diagnosing and treating skin conditions relies on a biopsy process examined and administered by physicians. Human assessment can be avoided with a hybrid technique, providing promising findings in good time. A thorough investigation indicates that deep learning methods can be used to construct frameworks capable of identifying diverse skin conditions. Skin and non-skin tissue must be distinguished to detect skin diseases. This research developed a skin disease classification system using MobileNetV2 and LSTM. For this system, accuracy in skin disease forecasting is the primary aim while ensuring excellent efficiency in storing complete state information for exact forecasts.
Internet of Things (IoT) technology used in data-processing applications requires high security because large amounts of data must be saved in cloud monitoring systems. Even though numerous procedures are in place to increase the security and dependability of data in IoT applications, most outside users can decode any transferred data at any time. Therefore, it is essential to include data blocks that other external users cannot understand under any circumstance. The major contribution of the proposed method is to incorporate an offloading technique for data processing carried out using a blockchain technique, whereby complete security is assured for each item of data. Since the problem methodology is designed with respect to clusters, a load-balancing technique is incorporated with data weights, where parametric evaluations are made in real time to determine the consistency of each item of data monitored with IoT. The outcomes examined across five scenarios show that the projected model for offloading analysis with blockchain proves to be more secure, thereby increasing the accuracy of data processing for each IoT application to 89%.
From Hype to Reality: Unveiling the Promises, Challenges and Opportunities of Blockchain in Supply Chain Systems
Blockchain is a groundbreaking technology widely adopted in industrial applications for improving supply chain management (SCM). The SCM and logistics communities have paid close attention to the development of blockchain technology. The primary purpose of employing a blockchain for SCM is to lower production costs while enhancing the system’s security. In recent years, blockchain-related SCM research has drawn much interest, and it is fair to state that this technology is now the most promising option for delivering reliable services/goods in supply chain networks. This study uses rigorous methods to review the technical implementation aspects of SCM systems driven by Blockchain. To ensure the security of industrial applications, we primarily concentrated on developing SCM solutions with blockchain capabilities. In this study, the unique qualities of blockchain technology have been exploited to analyze the main effects of leveraging it in the SCM. Several security metrics are utilized to validate and compare the blockchain methodologies’ effectiveness in SCM. The blockchain may alter the supply chain to make it more transparent and efficient by creating a useful tool for strategic planning and enhancing connections among the customers, suppliers, and accelerators. Moreover, the performance of traditional and blockchain-enabled SCM systems is compared in this study based on the parameters of efficiency, execution time, security level, and latency.
A Radical Safety Measure for Identifying Environmental Changes Using Machine Learning Algorithms
Due to air pollution, pollutants that harm humans and other species, as well as the environment and natural resources, can be detected in the atmosphere. In real-world applications, impurities caused by smog, nicotine, bacteria, yeast, biogas, and carbon dioxide occur continuously and give rise to unavoidable pollutants. Weather, transportation, and the combustion of fossil fuels are all factors that contribute to air pollution, as are uncontrolled fires in grasslands and unmanaged construction projects. The challenge of assessing contaminated air is critical. Machine learning algorithms are used to forecast whether any pollution level in the surroundings exceeds the corresponding limit. As a result, in the proposed method, air pollution levels are predicted using a machine learning technique, where a computer-aided procedure is employed in developing the technological aspects to estimate harmful element levels with 99.99% accuracy. Some of the metrics used to evaluate the forecasts are Mean Square Error (MSE), Coefficient of Determination Error (CDE), and R-Square Error (RSE).
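The error metrics named at the end of the abstract have standard definitions that can be computed directly; a minimal sketch of MSE and the coefficient of determination (R-squared):

```python
def mse(y_true, y_pred):
    """Mean Square Error: average squared deviation of predictions."""
    n = len(y_true)
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / n

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((a - b) ** 2 for a, b in zip(y_true, y_pred))
    ss_tot = sum((a - mean_y) ** 2 for a in y_true)
    return 1 - ss_res / ss_tot
```

A perfect forecast gives an MSE of 0 and an R-squared of 1; larger MSE and smaller R-squared indicate a worse fit.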
Implementation of Internet of Things With Blockchain Using Machine Learning Algorithm
In recent times, networks everywhere are connected by the internet, and people around the world are able to control things in remote locations. Even though these technologies have so far been used only by selected people, in the future everyone will use them as we move towards smart cities, homes, and industries. However, whenever advanced technologies are created, people worry about security as they move towards a smart environment. If the internet is connected to a home, then much valuable information in that home can also be sent through smart devices. Therefore, for this IoT-based technology, a blockchain-based method can be introduced to provide more security for the data transfer process. These technologies also have to work efficiently by integrating a new artificial-intelligence-based machine learning algorithm. To this end, a deep learning model is integrated, providing effective data transfer from transmitter to receiver.
A comparative recognition research on excretory organism in medical applications using artificial neural networks
Purpose: In the contemporary era, a significant number of individuals encounter various health issues, including digestive system ailments, particularly in their advanced years. The major purpose of this study is to make certain observations of the internal digestive system in order to prevent severe conditions that commonly occur in elderly people. Approach: To achieve this, the proposed system is introduced with advanced features and a parametric monitoring system based on wireless sensor setups. The parametric monitoring system is integrated with a neural network, where certain control actions are taken to track gastrointestinal activity at reduced data loss. Results: The outcome of the combined process is examined on four different cases designed with an analytical model, where control parameters and weight establishments are also determined. As the internal digestive system is monitored, the data loss present in the wireless sensor network must be reduced, and the proposed approach reduces such data loss to an optimized value of 1.39%. Conclusion: Parametric cases were conducted to evaluate the efficacy of the neural networks. The findings indicate a significantly higher effectiveness rate of approximately 68% when compared to the control cases.
Pneumonia detection with QCSA network on chest X-ray
Worldwide, pneumonia is the leading cause of infant mortality. Experienced radiologists use chest X-rays to diagnose pneumonia and other respiratory diseases. The complexity of the diagnostic procedure causes radiologists to disagree on the decision. Early diagnosis is the only feasible strategy for mitigating the disease's impact on the patient. Computer-aided diagnostics improve the accuracy of diagnosis. Recent studies have established that quaternion neural networks classify and predict better than real-valued neural networks, especially when dealing with multi-dimensional or multi-channel input. The attention mechanism is derived from the human brain's visual and cognitive ability to focus on some portions of an image and ignore the rest. The attention mechanism maximizes the usage of the image's relevant aspects, hence boosting classification accuracy. In the current work, we propose a QCSA network (Quaternion Channel-Spatial Attention Network) by combining the spatial and channel attention mechanisms with a quaternion residual network to classify chest X-ray images for pneumonia detection. We used a Kaggle X-ray dataset. The suggested architecture achieved 94.53% accuracy and 0.89 AUC. We have also shown that performance improves by integrating the attention mechanism into the QCNN. Our results indicate that our approach to detecting pneumonia is promising.
Healthcare Data Security Using IoT Sensors Based on Random Hashing Mechanism
Providing security to the healthcare data stored in an IoT-cloud environment is one of the most challenging and demanding tasks of recent times. The IoT-cloud framework is constructed with an enormous number of sensors that generate a massive amount of data; however, it is highly susceptible to vulnerabilities and attacks that degrade the security level of the network through malicious activities. Hence, Artificial Intelligence (AI) technology is the most suitable option for healthcare applications because it provides the best solution for improving the security and reliability of data. For this reason, various AI-based security mechanisms have been implemented in conventional works for the IoT-cloud framework. However, they face significant problems of increased complexity in algorithm design, inefficient data handling, unsuitability for processing unstructured data, increased cost of IoT sensors, and greater time consumption. Therefore, this paper proposes an AI-based intelligent feature learning mechanism named Probabilistic Super Learning (PSL)-Random Hashing (RH) for improving the security of healthcare data stored in the IoT-cloud. This paper also aims to reduce the cost of IoT sensors by implementing the proposed learning model. Here, a training model is maintained for detecting attacks at the initial stage, where the properties of a reported attack are updated to learn the characteristics of attacks. In addition, a random key is generated based on the hash value of the data matrix, which is incorporated with the standard Elliptic Curve Cryptography (ECC) technique for data security. Then, the enhanced ECC-RH mechanism performs the data encryption and decryption processes with the generated random hash key. During performance evaluation, the results of both existing and proposed techniques are validated and compared using different performance indicators.
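The random-hashing step can be illustrated with a short sketch: a per-message key is derived from the hash of the data matrix plus a fresh nonce. The function name and exact construction are assumptions for illustration; the paper's ECC integration is not reproduced here.

```python
import hashlib
import secrets

def random_hash_key(data_matrix):
    """Derive a 256-bit key from the hash of a data matrix combined
    with a fresh random nonce (illustrative RH step only)."""
    # Flatten the matrix into a canonical byte string.
    flat = ",".join(str(v) for row in data_matrix for v in row).encode()
    nonce = secrets.token_bytes(16)  # fresh randomness per key
    return hashlib.sha256(flat + nonce).digest()
```

Because of the nonce, two calls on the same matrix yield different keys, which is the "random" part of the random-hashing idea; the derived key would then feed the ECC encryption stage.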
An Innovative Perceptual Pigeon Galvanized Optimization (PPGO) Based Likelihood Naïve Bayes (LNB) Classification Approach for Network Intrusion Detection System
Intrusion detection and classification have gained significant attention recently due to the increased utilization of networks. For this purpose, different types of Network Intrusion Detection System (NIDS) approaches have been developed in the conventional works, which mainly focus on identifying intrusions from datasets with the help of classification techniques. Still, they are limited by significant problems of inefficiency in handling large-dimensional datasets, high computational complexity, false detection, and long model training times. To solve these problems, this research develops an innovative clustering-based classification methodology to precisely detect intrusions in different types of IDS datasets. Here, the most recent and extensively used IDS datasets, NSL-KDD, CICIDS, and Bot-IoT, are employed for detecting intrusions. Data preprocessing is performed to normalize the dataset, eliminate irrelevant attributes, and organize the features. Then, data separation is applied by forming clusters using an intelligent Anticipated Distance-based Clustering (ADC) approach incorporated with the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm. It finds the distance and density measures for grouping the attributes into clusters, which increases the efficiency of classification. Here, the most suitable optimal parameters are selected using the Perceptual Pigeon Galvanized Optimization (PPGO) technique. The extracted features are used for training and testing the dataset samples. Consequently, the Likelihood Naïve Bayes (LNB) classification approach is implemented to accurately predict the classified label as normal or attack. During the evaluation, the performance of the proposed IDS framework is validated and compared using various evaluation metrics. The results show that the proposed ADC-DBSCAN-LNB model outperforms the other techniques with improved performance outcomes.
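The LNB classifier is not specified in enough detail in the abstract to reproduce; as a hedged stand-in, a minimal Gaussian Naïve Bayes shows the likelihood-based label prediction step (the ADC/DBSCAN clustering and PPGO parameter selection are omitted, and all names here are illustrative).

```python
import math
from collections import defaultdict

def fit_gnb(X, y):
    """Fit per-class prior, means, and variances for a minimal
    Gaussian Naive Bayes (illustrative stand-in for LNB)."""
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    n = len(X)
    stats = {}
    for label, rows in by_class.items():
        means = [sum(col) / len(rows) for col in zip(*rows)]
        vars_ = [sum((v - m) ** 2 for v in col) / len(rows) + 1e-9
                 for col, m in zip(zip(*rows), means)]  # small floor avoids /0
        stats[label] = (math.log(len(rows) / n), means, vars_)
    return stats

def predict_gnb(stats, x):
    """Return the class with the highest log-likelihood for sample x."""
    def loglik(label):
        prior, means, vars_ = stats[label]
        return prior + sum(
            -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
            for xi, m, v in zip(x, means, vars_))
    return max(stats, key=loglik)
```

On well-separated toy data the classifier assigns a sample near a class centroid to that class, which is the normal/attack decision the abstract describes.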
Circuit Manufacturing Defect Detection Using VGG16 Convolutional Neural Networks
Manufacturing, one of the most valuable industries in the world, is boundlessly automatable yet still quite stuck in traditionally manual and slow processes. Industry 4.0 is racing to define a new era of digital manufacturing through Internet of Things- (IoT-) connected machines and factory systems, fully comprehensive data gathering, and seamless implementation of data-driven decision-making and action taking. Both academia and industry understand the tremendous value in modernizing manufacturing and are pioneering bleeding-edge strides every day to optimize one of the largest industries in the world. IoT production, functional testing, and fault detection equipment are already being used in today’s maturing smart factory paradigm to superintend intelligent manufacturing equipment and perform automated defect detection in order to enhance production quality and efficiency. This paper presents a powerful and precise computer vision model for automated classification of defective products from standard products. Human operators and inspectors without digital aid must spend inordinate amounts of time poring over visual data, especially in high-volume production environments. Our model works quickly and accurately in sparing defective products from entering doomed operations that would otherwise incur waste in the form of wasted worker-hours, tardy disposition, and field failure. We use a convolutional neural network (CNN) with the Visual Geometry Group 16-layer (VGG16) architecture and train it on the Printed Circuit Board (PCB) dataset of 3175 RGB images. The resultant trained model, assisted by finely tuned optimizers and learning rates, classifies defective products with 97.01% validation accuracy.
Grow of Artificial Intelligence to Challenge Security in IoT Application
Artificial intelligence is intelligence revealed by software, as opposed to natural intelligence; it is the science and technology of intelligent machinery, enabling computers to function like people. The Internet of Things (IoT) is a web-based object network that can communicate and share data. AI and IoT are combined to achieve a more effective IoT process, namely AIoT, which fuses the Internet with artificial intelligence. Recently, an efficient healthcare system has been introduced through artificial intelligence (AI) and IoT research. In this paper, the usability of artificial intelligence is discussed, and the implementation of AI and IoT analytics is systematically examined as a way of enhancing the health system in the IoT model. Different AI-based and device algorithms are also explored. Edge computing is a modern computing technology in which data are processed at the edge; it offers lower bandwidth costs and more robust privacy and data security than cloud computing, and advanced computing is easily used by artificial intelligence technologies. Simulation results show better accuracy, precision, and specificity for the decision tree approach than for SVM and Naïve Bayes.
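The reported comparison (accuracy, precision, specificity) relies on standard confusion-matrix metrics, which can be computed directly from binary labels:

```python
def confusion_metrics(y_true, y_pred, positive=1):
    """Compute accuracy, precision, and specificity from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0      # guard empty denominator
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, precision, specificity
```

Any of the compared classifiers (decision tree, SVM, Naïve Bayes) can be scored by passing its predictions through this function.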
An Authentication Model with High Security for Cloud Database
Cloud computing standards are gaining increased research interest due to the various benefits they offer. Despite these benefits, however, security and privacy problems hinder the model's extensive adoption. Malicious behavior by the service provider is another issue, one that data proprietors cannot trace. Hence, finding appropriate solutions to these security issues at both the administrator level and the customer level is very attractive in various directions. Cryptographically enforced access control for securing electronic pathological records (CEASE) is formulated by extending ciphertext-policy attribute-based encryption (CP-ABE) with the Advanced Encryption Standard (AES) through limited-shuffle techniques. The main objective of CEASE is to provide data confidentiality; the limited-shuffle access control protects the data from inference attacks and preserves confidentiality for hot data. In the next step, this research designs a multistage encryptor model by differentiating users into public and personal domains. Two separate algorithms, the Vigenère encryption algorithm and Twofish encryption, are applied in the personal and public domains, respectively. Further, the hierarchical agglomerative clustering (HAC) algorithm is used to cluster users in the public domain, which effectively decreases overhead. As a final system, this work develops an integrated framework by combining CP-ABE with AES, the multistage encryptor, and the limited shuffle. By combining these individual methods, the framework achieves efficient performance in providing security and data confidentiality.
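The Vigenère cipher applied in the personal domain is a classical algorithm with a well-known definition; a minimal sketch (letters only, with non-letters passed through unchanged):

```python
def vigenere(text, key, decrypt=False):
    """Classic Vigenere cipher over the Latin alphabet: each letter is
    shifted by the corresponding key letter; the key index advances
    only on alphabetic characters."""
    out = []
    ki = 0
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            shift = ord(key[ki % len(key)].upper()) - ord('A')
            if decrypt:
                shift = -shift
            out.append(chr((ord(ch) - base + shift) % 26 + base))
            ki += 1
        else:
            out.append(ch)  # punctuation and spaces pass through
    return "".join(out)
```

For example, encrypting "ATTACKATDAWN" under the key "LEMON" yields the textbook ciphertext "LXFOPVEFRNHR", and decrypting with the same key recovers the plaintext. Note that Vigenère offers no modern security on its own; in the paper it is one layer of a larger multistage scheme.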
The Internet of Things (IoT) involves the gathering of all those devices that connect to the Internet with the purpose of collecting and sharing data. The application of IoT in different sectors, including health and industry, has also gained momentum over the past few years. The IoT and, by extension, the IIoT are highly susceptible to different types of threats and attacks owing to the nature of their networks, which in turn leads to poor outcomes (i.e., an increasing error rate). Hence, it is critical to design attack detection systems that can secure IIoT networks. Because existing IIoT attack detection approaches fail to identify certain attacks, resulting in minimal detection performance, a reinforcement learning-based attack detection method called sliding principal component and dynamic reward reinforcement learning (SPC-DRRL) is introduced for detecting various IIoT network attacks. In the first stage of this research methodology, the raw ToN_IoT dataset is preprocessed by employing a min-max normalization scaling function to obtain normalized values on the same scale. Next, with the processed sample data as input, a robust log-likelihood sliding principal component-based feature extraction algorithm with an arbitrary-size sliding window is applied to extract computationally efficient features from multiple sources (i.e., different service profiles from the dataset). Finally, a dynamic reward reinforcement learning-based IIoT attack detection model is presented to control the error rate involved in the design. Here, the dynamic reward function, together with an incident repository, not only generates the reward in an arbitrary fashion but also stores the action results for the next round of training, thereby reducing the attack detection error rate.
Moreover, an IIoT attack detection system based on SPC-DRRL is constructed. Finally, the algorithm is verified on the ToN_IoT dataset of the University of New South Wales, Australia. The experimental results show that the IIoT attack detection time, overhead, and error rate are reduced considerably, with higher accuracy than traditional reinforcement learning methods.
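The min-max scaling step described above can be sketched as follows (a generic illustration; the authors' exact preprocessing pipeline for TON_IoT may differ):

```python
def min_max_normalize(column):
    """Rescale a list of raw feature values linearly into [0, 1]."""
    lo, hi = min(column), max(column)
    if hi == lo:                     # constant column: map everything to 0
        return [0.0] * len(column)
    return [(v - lo) / (hi - lo) for v in column]

# Example: raw feature values on very different scales become comparable.
print(min_max_normalize([10, 20, 30]))   # [0.0, 0.5, 1.0]
```

Applying this per column puts every feature on the same [0, 1] scale before the sliding principal component extraction.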
The dynamic connectivity and functionality of sensors have revolutionized remote monitoring applications thanks to the combination of IoT and wireless sensor networks (WSNs). Wearable wireless medical sensor nodes allow continuous monitoring by amassing physiological data, which is very useful in healthcare applications. These data are then sent to doctors via IoT devices so they can make an accurate diagnosis as soon as possible. However, the transmission of medical data is extremely vulnerable to security and privacy attacks due to the open nature of the underlying communication medium. Therefore, a certificate-less aggregation-based signature scheme is proposed as a solution, using elliptic curve public key cryptography (ECC), which enables a highly efficient technique. The computation cost is reduced by 93% through the incorporation of aggregation. The communication cost is 400 bits, a significant reduction compared with its counterparts. The security analysis shows that the scheme is robust against forging, tampering, and man-in-the-middle attacks. The primary innovation is that the time required for signature verification can be reduced by using point addition and aggregation. In addition, the scheme does away with reliance on a centralized medical server for verification. By taking a distributed approach, it fully preserves user privacy, demonstrating its superiority.
The security of the Internet of Things (IoT) is crucial in various application platforms, such as smart city monitoring systems, which comprehensively monitor a variety of conditions. Therefore, this study analyses the use of blockchain technology for monitoring IoT systems, employing parametric objective functions. In an IoT context, it is imperative to establish well-defined intervals for job execution so that the completion status of each action is promptly monitored and assessed. The major significance of the proposed method is its integration of a blockchain technique with a neuro-fuzzy algorithm, thereby improving the security of data processing units in smart city applications. Because the entire process is carried out over IoT, data in both the processing and storage units would otherwise be unsecured; therefore, the confidence level of the monitoring units is maximized at each state. Owing to this integration, the proposed system model operates with minimal energy consumption, completing 93% of tasks with security improved by about 90%.
Correction: A full privacy-preserving distributed batch-based certificate-less aggregate signature authentication scheme for healthcare wearable wireless medical sensor networks (HWMSNs)
Correction to: International Journal of Information Security, https://doi.org/10.1007/s10207-023-00748-1. In the original publication, the last author's name in the author group was published with a typographical error. The correct author name is "Mueen Uddin". This has been corrected in the original article.
A Hybrid Quantum-Classical Fusion Neural Network to Improve Protein-Ligand Binding Affinity Predictions for Drug Discovery
Drug discovery hinges on the accurate prediction of binding affinity between prospective drug molecules and target proteins that influence disease progression, which is financially and computationally demanding. Although classical and hybrid quantum machine learning models have been employed in previous studies to aid in binding affinity prediction, they encounter several issues related to convergence stability and prediction accuracy. In this regard, this paper introduces a novel hybrid quantum-classical deep learning model tailored for binding affinity prediction in drug discovery. Specifically, the proposed model synergistically integrates 3D and spatial graph convolutional neural networks within an optimized quantum circuit architecture. Simulation results demonstrate a 6% improvement in prediction accuracy relative to existing classical models, as well as a significantly more stable convergence performance compared to previous classical approaches. Moreover, to scalably deploy the proposed framework over today's noisy intermediate-scale quantum (NISQ) devices, a novel quantum error mitigation algorithm is proposed. This algorithm outperforms existing techniques and is capable of mitigating errors with gate noise probabilities, p ≤ 0.05, while resulting in no additional overhead during the training and testing phases.
Reliable Secured Consumer IIoT Framework With Multi-Layer Attack Interpretation and Prevention
Sustainable development and the evolution of Industry 5.0 have paved the way for the commercial and technical enhancement of Industrial Internet of Things (IIoT) sensors. These are challenged by various factors such as privacy, authentication, security, data processing, and sustainability requirements. The proposed work develops multi-class classification models based on Artificial Intelligence (AI) to handle security attacks and vulnerabilities as an Intrusion Detection System (IDS). The proposed model is built with various Machine Learning (ML) models: Decision Tree, Random Forest, Multinomial Naive Bayes, and Gradient Boosting Classifier. The Gradient Boosting algorithm provided the best accuracy of 0.94 among these models, and its classification probabilities are used to develop the Local Interpretable Model-Agnostic Explainer (LIME). The Random Forest model, which provided the next best accuracy of 0.93, was applied for global surrogate explanation with the SHapley Additive exPlanations (SHAP) explainer. These two models interpret the feature relationships, weights, and influence on the target estimation from both local and global perspectives. Compared with existing research, the proposed framework improves accuracy by around 1%, precision by 1.2%, recall by 1.1%, and F1-score by 1.14%. Explainable Artificial Intelligence (XAI) enhances trust and reliability in AI predictions by clearly explaining how the model detects attacks, making the predictions easier to understand, trust, and apply in real-time IIoT environments.
An efficient patient’s response predicting system using multi-scale dilated ensemble network framework with optimization strategy
Forecasting a patient's response to radiotherapy and the likelihood of harmful long-term health impacts would considerably enhance individual treatment plans. Continuous exposure to radiation can cause cardiovascular disease and pulmonary fibrosis. For forecasting the response of patients to chemotherapy, Convolutional Neural Networks (CNNs) are widely used. Radiotherapy helps treat cancer, but some patients suffer side effects, so the toxicity of radiotherapy and chemotherapy should be estimated. To validate a patient's improvement under treatment, a patient response prediction system is essential. In this paper, a Deep Learning (DL) based patient response prediction system is developed to effectively predict the response of patients, predict prognosis, and inform treatment plans at an early stage. The necessary data for the response prediction are collected manually and then processed through a feature selection stage, where the Repeated Exploration and Exploitation-based Coati Optimization Algorithm (REE-COA) is employed to select the features. The selected weighted features are input to the prediction process. Here, the prediction is performed by a Multi-scale Dilated Ensemble Network (MDEN) that integrates Long Short-Term Memory (LSTM), a Recurrent Neural Network (RNN), and a One-Dimensional Convolutional Neural Network (1DCNN). The final prediction scores are averaged to develop an effective MDEN-based model for predicting the patient's response. The proposed MDEN-based patient response prediction scheme is 0.79%, 2.98%, 2.21%, and 1.40% better than RAN, RNN, LSTM, and 1DCNN, respectively. Hence, the proposed system minimizes error rates and enhances accuracy using a weight optimization technique.
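The score-averaging step of the ensemble can be sketched as below (illustrative only; the model names and score values are placeholders, not results from the paper):

```python
def ensemble_average(score_sets):
    """Average per-patient prediction scores from several base models."""
    return [sum(scores) / len(scores) for scores in zip(*score_sets)]

# Hypothetical per-patient response scores from the three base networks.
lstm_scores = [0.90, 0.20, 0.70]
rnn_scores = [0.80, 0.30, 0.60]
cnn1d_scores = [0.70, 0.40, 0.80]
fused = ensemble_average([lstm_scores, rnn_scores, cnn1d_scores])
```

Averaging reduces the variance of any single base model's prediction, which is the usual motivation for this kind of ensemble fusion.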
Correct detection of floating objects in complex water environments is a challenge because of obscuration and densely packed floating objects. In view of these issues, this paper proposes a network called EC-YOLOX, which introduces the Coordinate Attention (CA) and Efficient Channel Attention (ECA) mechanisms and improves the loss function to strengthen multi-feature extraction and the detection accuracy of floating objects. Ablation and comparison experiments were conducted on a river floating objects dataset. The ablation experiments showed that the ECA and CA mechanisms play a major role in EC-YOLOX, reducing the miss-detection rate by 5.86% and increasing mAP by 5.53% compared with YOLOX. EC-YOLOX also generalizes to different types of floating objects: the mAP for ball, plastic garbage, plastic bag, leaf, milk box, grass, and branches improved by 4%, 4%, 4%, 6%, 4%, 18%, and 5%, respectively. In the comparison experiments, mAP improved by 15.13%, 9.30%, and 8.03% over Faster R-CNN, YOLOv5, and YOLOv3, respectively. This method enables precise extraction of floating objects from images, which is of paramount importance for monitoring and safeguarding water environments.
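Detection metrics like the mAP figures above rest on the intersection-over-union between predicted and ground-truth boxes; a minimal sketch (generic, not the paper's evaluation code):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

# Two 2x2 boxes overlapping in a 1x1 square share 1 / (4 + 4 - 1) of area.
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))   # 1/7, about 0.142857
```

A detection counts as a true positive when its IoU with a ground-truth box exceeds a chosen threshold (commonly 0.5), and mAP averages the resulting precision over recall levels and classes.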
Because SCADA systems operate and manage critical infrastructure and industrial processes, the need for robust intrusion detection systems (IDSs) cannot be overemphasized. The complexity of these systems, combined with their increased exposure to sophisticated cyber-attacks, creates significant challenges for continuous, secure operation. Traditional intrusion detection approaches usually fail to cope with, scale to, or achieve the accuracy necessary for the modern, multi-faceted attack vectors against SCADA networks and IIoT environments. Past works have generally proposed different machine learning and deep learning anomaly detection strategies to find possible intrusions. While these methods are promising, they are not without problems, including high false-positive rates, poor generalization to new types of attacks, and performance inefficiencies in large-scale data environments. Against this background, this work puts forward two novel IDS models, SPARK (Scalable Predictive Anomaly Response Kernel) and SAD (Scented Alpine Descent), to further improve the security landscape of SCADA systems. SPARK is an ensemble-based deep learning framework combining strategic feature extraction with adaptive learning mechanisms for processing high-volume data with high accuracy and efficiency. Its architecture performs stringent anomaly detection through a multi-layered deep network that adapts to ever-evolving operational environments, allowing for low latency and high precision in detection. The SAD model works in concert with SPARK by adopting a synergistic approach that embeds deep learning into anomaly scoring algorithms, enabling it to detect subtle attack patterns and further reduce false-positive rates.
IoT-based prediction model for aquaponic fish pond water quality using multiscale feature fusion with convolutional autoencoder and GRU networks
Internet of Things (IoT)-based smart solutions have been developed to predict water quality, and they are becoming an increasingly important means of providing efficient solutions through communication technologies. IoT systems enable connections between various devices with the ability to gather and collect information, and they are designed to serve environmental and industrial-automation applications. The threats associated with aquaponics farming are managed through an IoT-based smart water monitoring framework, which has become increasingly relevant recently. This approach is therefore crucial for improving productivity and yield. Water quality directly affects the growth rate, feed efficiency, and overall health of the fish, plants, and bacteria. Insufficient knowledge about species selection poses a significant challenge in aquaponics farming, as selection relies heavily on the water quality parameters. To address the shortcomings of conventional models, we have developed an effective IoT-based water quality prediction model designed specifically for aquaponic fish ponds. The data for the water quality prediction model are acquired from the "a simple dataset of aquaponic fish pond IoT" database. These data are then forwarded to the feature extraction phase, which yields weighted features, Deep Belief Network (DBN) features, and the original features. The weighted features are obtained using the Revamped Fitness-based Mother Optimization Algorithm (RF-MOA). Subsequently, the extracted features are fed into the Multi-Scale feature fusion-based Convolutional Autoencoder with a Gated Recurrent Unit (MS-CAGRU) network, from which the predicted water quality data are obtained.
The proposed model integrates GRU networks with a convolutional autoencoder to improve water quality prediction by capturing trends and managing temporal dependencies. It enhances accuracy by analysing key parameters and employing techniques to reduce overfitting. The effectiveness of the proposed system is evaluated against traditional models using standard evaluation measures.
Comparative analysis of deep learning models for crack detection in buildings
The lifetime of buildings is continually challenged by natural forces. Although construction provides a minimum guarantee of quality and durability, mismatches in material composition, stress on the building, and chemical or physical imbalances in the materials lead to surface cracks. Cracks are also generated by shifting climatic conditions, which cause contraction and expansion of building surfaces, and by other damage. Guaranteeing building safety and serviceability depends on how successfully these buildings are assessed and maintained. Advances in Artificial Intelligence (AI) provide favourable solutions for handling, managing, and resolving building cracks through analysis with deep image neural network models that classify building images containing cracks. As a result, a critical challenge for many civil engineering applications, the precise, quick, and automated identification of cracks on structural surfaces, is addressed by deep image neural networks. In this research, we tackle the research gap and data scarcity by developing and curating a novel deep learning image-processing pipeline for detecting cracks in brickwork. We also train and validate several deep learning models to classify brickwork images as either cracked or normal. The dataset contains 24,000 images organized into two classes, crack and non-crack. Parameters such as batch size, pooling, activation functions, learning rate, kernel size, normalization, and optimizers are used in evaluating the models. The work performs a comparative analysis of deep image models including Inception V3, VGG-16, ResNet-50, VGG-19, Inception-ResNetV2, and CNN-RES MLP. Across all these models, Inception V3 performs best, with an accuracy of 99.98%.
InceptionV3 also achieves the highest precision (99.99%), and ResNet-50 the highest recall (99.98%). InceptionV2 achieved the best ROC value of 0.9999, the best among all the models for reliable and stable performance.
An intelligent attention based deep convoluted learning (IADCL) model for smart healthcare security
In recent times, there has been rapid growth in technologies enabling smart infrastructures: IoT-powered smart grids, cities, and healthcare systems. However, existing security mechanisms cannot protect these resource-constrained IoT devices against emerging cyber threats. The aim of this paper is to improve security for smart healthcare IoT systems by developing the IADCL architecture. The proposed system employs publicly available datasets, namely CIC-IDS 2017, CIC-IDS 2018, CIC-Bell DNS 2021, and NSL-KDD, to build a robust detection framework. IRKO selects features, reducing the feature dimensions and isolating the most relevant attributes. The AConBN classifier then accurately classifies normal and intrusion traffic. Afterwards, the classification process is optimized by the SA-HHO algorithm, which provides the optimal weight values. The results show that the IADCL framework detects cyberattacks with a high degree of accuracy, with performance evaluated on a number of key metrics. In conclusion, the proposed system has strong potential to protect smart healthcare IoT devices from cyber threats.
SRADHO: statistical reduction approach with deep hyper optimization for disease classification using artificial intelligence
Artificial Intelligence techniques are being used to analyse vast amounts of medical data and assist in the accurate and early diagnosis of diseases. Brain-related diseases, which affect the structure and function of the brain, are among the most common. Artificial neural networks have been used extensively for disease prediction and diagnosis because of their ability to learn complex patterns and relationships from large datasets. However, problems such as over-fitting, under-fitting, vanishing gradients, and increased elapsed time arise during data analysis and prediction, degrading model performance. Perceiving complex structure while avoiding over-fitting and under-fitting is therefore essential. This empirical study presents a statistical reduction approach with deep hyper optimization (SRADHO) for better feature selection and disease classification with reduced elapsed time. Deep hyper optimization combines deep learning models with hyperparameter tuning to automatically identify the most relevant features, optimizing model accuracy and reducing dimensionality. SRADHO calibrates the weights and biases and selects the optimal number of hyperparameters in the hidden layer using a Bayesian optimization approach, which uses a probabilistic model to search the hyperparameter space efficiently, identifying configurations that maximize model performance while minimizing the number of evaluations. Three benchmark datasets and the classifier models logistic regression, decision tree, random forest, k-nearest neighbour, support vector machine, and naïve Bayes are used for experimentation. The proposed SRADHO algorithm achieves 98.2% accuracy, a 97.2% precision rate, a 98.3% recall rate, and a 98.1% F1-score with a 0.3% error rate, and its execution time is 12 s.
Alzheimer's Disease (AD) causes the slow death of brain cells through brain shrinkage and is more prevalent in older people. In most cases, the symptoms of AD are mistaken for age-related stress. The most widely used method to detect AD is Magnetic Resonance Imaging (MRI), and with Artificial Intelligence (AI) techniques, identifying brain-related diseases has become easier. However, the identical phenotype makes it challenging to identify the disease from neuro-images. Hence, a deep learning method to detect AD at an early stage is proposed in this work. The newly implemented "Enhanced Residual Attention with Bi-directional Long Short-Term Memory (Bi-LSTM) (ERABi-LNet)" is used in the detection phase to identify AD from the MRI images. This model enhances Alzheimer's detection performance by 2-5%, minimizes error rates, and improves model balance so that multi-class problems are supported. First, the MRI images are given to a "Residual Attention Network (RAN)", specially developed with three convolutional layers, namely atrous, dilated, and Depth-Wise Separable (DWS), to obtain the relevant attributes. The most appropriate attributes determined by these layers are subjected to target-based fusion, and the fused attributes are fed into an "Attention-based Bi-LSTM", from which the final outcome is obtained. A median-based detection efficiency of 26.37% and an accuracy of 97.367% are obtained by tuning the parameters of the ERABi-LNet with the help of Modified Search and Rescue Operations (MCDMR-SRO). The results are compared with ROA-ERABi-LNet, EOO-ERABi-LNet, GTBO-ERABi-LNet, and SRO-ERABi-LNet. The ERABi-LNet thus provides enhanced accuracy and other performance metrics compared with these deep learning models.
The proposed method has better sensitivity, specificity, F1-score, and false positive rate than all the above-mentioned competing models, with values of 97.49%, 97.84%, 97.74%, and 2.616, respectively. This confirms that the model has better learning capability and yields fewer false positives with balanced prediction.
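For reference, the reported sensitivity, specificity, F1-score, and false positive rate follow from the standard confusion-matrix definitions (a generic sketch, not the paper's code; the counts below are made up):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # recall / true positive rate
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    fpr = fp / (fp + tn)                    # false positive rate
    return sensitivity, specificity, f1, fpr

# Example with 90 true positives, 10 false positives, 90 true negatives,
# and 10 false negatives: each rate works out to 0.9 except FPR (0.1).
print(binary_metrics(90, 10, 90, 10))
```

Note that FPR is simply 1 minus specificity, which is why the two are usually reported together.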
A novel skin cancer detection model using modified finch deep CNN classifier model
Skin cancer is one of the most life-threatening diseases, caused by the abnormal growth of skin cells exposed to ultraviolet radiation. Early detection is crucial for reducing aberrant cell proliferation because the mortality rate is rising rapidly. Although much research on skin cancer detection exists, challenges remain in improving accuracy, reducing computational time, and so on. In this research, a novel skin cancer detection is performed using a modified falcon finch deep Convolutional Neural Network classifier (Modified Falcon Finch Deep CNN) that detects the disease with higher efficiency. The modified falcon finch deep CNN classifier effectively analyses the information relevant to skin cancer while minimizing errors. The inclusion of falcon finch optimization in the deep CNN classifier enables efficient parameter tuning, which enhances robustness and boosts the convergence of the classifier so that skin cancer is detected in less time. The modified falcon finch deep CNN classifier achieved accuracy, sensitivity, and specificity of 93.59%, 92.14%, and 95.22% under k-fold validation and 96.52%, 96.69%, and 96.54% under training-percentage evaluation, proving more effective than prior work.
An analytical framework for the industrial internet of things (IIoT): Importance, recent challenges, and enabling technologies
The Internet of Things (IoT) is a novel idea that can benefit any manufacturing company that adopts it. IoT is still in its early stages in industrial operations, leading to higher prices, slower development in data management, and fewer deployments. The proliferation of IoT applications and the adoption of cutting-edge technology trends in industrial systems are driving the development of the Industrial IoT (IIoT). A novel vision of the Internet of Things applied to the manufacturing sector is realized when smart things automatically detect, gather, process, and communicate real-time events in industrial processes. By creating smart monitoring of production shop floors and machine health applications, and through predictive and preventative maintenance of industrial equipment, the IIoT seeks to enhance operational efficiency, productivity, and the management of industrial assets. Due to the proliferation of IoT applications that gather information from real and virtual sensors, massive amounts of digital data are becoming increasingly important; without the right tools, however, such information is useless. This research provides a novel and concise definition of IIoT that can aid readers' understanding of this emerging field. Current research trends in the IIoT are outlined, and we conclude by describing the current issues and enabling technologies for the IIoT.
Industrial advancement, the use of large amounts of fossil fuels, vehicle pollution, and other factors drastically increase the Air Quality Index (AQI) of major cities. AQI analysis of major cities is essential so that governments can take proper preventive, proactive measures to reduce air pollution. This research applies artificial intelligence to AQI prediction based on air pollution data: an optimized machine learning model that combines Grey Wolf Optimization (GWO) with the Decision Tree (DT) algorithm for accurate prediction of AQI in major Indian cities. Air quality data available in the Kaggle repository are used for experimentation, and major cities such as Delhi, Hyderabad, Kolkata, Bangalore, Visakhapatnam, and Chennai are considered for analysis. The proposed model's performance is verified experimentally through metrics such as R-squared, RMSE, MSE, MAE, and accuracy. Existing machine learning models, such as k-nearest neighbours, random forest regression, and support vector regression, are compared with the proposed model. The proposed model attains better prediction performance than traditional machine learning algorithms, with maximum accuracies of 88.98% for New Delhi, 91.49% for Bangalore, 94.48% for Kolkata, 97.66% for Hyderabad, 95.22% for Chennai, and 97.68% for Visakhapatnam.
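Grey Wolf Optimization guides a "pack" of candidate solutions toward the three best-performing members. A toy sketch of the idea on a simple test function (our own simplified illustration; the paper applies GWO to tune the decision tree, and its parameter settings are not reproduced here):

```python
import random

def gwo(objective, dim, iters=400, wolves=12, lo=-10.0, hi=10.0, seed=1):
    """Toy Grey Wolf Optimizer minimizing `objective` over [lo, hi]^dim."""
    rnd = random.Random(seed)
    pack = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    best = min(pack, key=objective)[:]
    for t in range(iters):
        pack.sort(key=objective)
        # Snapshot the three best wolves (alpha, beta, delta) as leaders.
        leaders = [p[:] for p in pack[:3]]
        a = 2 - 2 * t / iters               # exploration coefficient, decays to 0
        for w in pack:
            for d in range(dim):
                x = 0.0
                for lead in leaders:
                    r1, r2 = rnd.random(), rnd.random()
                    A, C = 2 * a * r1 - a, 2 * r2
                    x += lead[d] - A * abs(C * lead[d] - w[d])
                w[d] = min(hi, max(lo, x / 3))   # average of leader-guided moves
        cand = min(pack, key=objective)
        if objective(cand) < objective(best):
            best = cand[:]
    return best

# Minimize the sphere function: the pack should converge near the origin.
best = gwo(lambda v: sum(x * x for x in v), dim=2)
```

In the hybrid model, the "objective" would instead score a decision tree configuration (e.g. its validation error) rather than a mathematical test function.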
Pneumonia is a widespread and acute respiratory infection that impacts people of all ages. Early detection and treatment of pneumonia are essential for avoiding complications and enhancing clinical outcomes. By devising and deploying effective detection methods, we can reduce mortality, improve healthcare efficiency, and contribute to the global battle against a disease that has plagued humanity for centuries. Detecting pneumonia is not only a medical necessity but also a humanitarian imperative and a technological frontier. Chest X-rays are a frequently used imaging modality for diagnosing pneumonia. This paper examines in detail a cutting-edge pneumonia detection method based on the Vision Transformer (ViT) architecture, evaluated on a public dataset of chest X-rays available on Kaggle. To acquire global context and spatial relationships from chest X-ray images, the proposed framework deploys the ViT model, which integrates self-attention mechanisms and the transformer architecture. In our experiments, the proposed Vision Transformer-based framework achieves an accuracy of 97.61%, sensitivity of 95%, and specificity of 98% in detecting pneumonia from chest X-rays. The ViT model is preferable for capturing global context, comprehending spatial relationships, and processing images of different resolutions. The framework establishes its efficacy as a robust pneumonia detection solution by surpassing convolutional neural network (CNN) based architectures.
Enhancing Elderly Health Monitoring Framework With Quantum-Assisted Machine Learning Models as Micro Services
Monitoring systems for the elderly gather a variety of information, including blood pressure, insulin level, oxygen saturation, and more. Machine learning is a multidisciplinary method for identifying patterns in data by applying mathematical algorithms and iterative computing processes. The machine learning models are implemented in a microservice-based architecture, which makes code components more maintainable, testable, and responsive. A supervised model, an unsupervised model, and a reinforcement model are employed as three independent microservices. This study focuses on blood sugar level, among the other indicators used to monitor older people, because it is the primary factor determining how well each organ functions. In this work, the machine learning models are enhanced with a quantum variational algorithm to improve their efficiency and accuracy. With an accuracy rate of 81%, the quantum-assisted unsupervised model performed better than the other two models during execution.
Identifying objects in aircraft monitoring systems poses significant challenges due to the presence of extreme loading conditions. Despite the presence of several sensor units, the transmission of precise data to multiple data units is hindered by an increase in time intervals. Therefore, the suggested methodology is specifically developed for generating digital replicas for aeronautical applications, wherein an aero transfer function is correlated with the digital twins. Mapping functions are utilized in monitoring the diverse parameters associated with identifying objects inside data transmission networks, with the aim of minimizing uncertainty. The suggested system model is enhanced by incorporating analytical representations and deep learning methods, resulting in the provision of zero-point twin functionalities. The study investigates this integrated procedure through the analysis of four different situations, in which an aero-communication toolbox is employed to transform the device configuration into simulation outputs. Comparison of these scenarios reveals that the projected model significantly extends the maintenance period while minimizing data errors.
Latent Vector Optimization-Based Generative Image Steganography for Consumer Electronic Applications
In consumer electronic applications, to transmit secret images securely, it is necessary to explore advanced covert communication technology, i.e., Generative Image Steganography (GIS). However, existing GIS schemes suffer from poor stego-image quality and limited hiding capacity. Consequently, these GIS schemes cannot meet the requirements of consumer electronic applications, in which massive amounts of secret information need to be transmitted securely. To address these issues, we propose a Latent Vector Optimization (LVO)-based GIS scheme, in which information hiding is implemented by a flow-based generative model during image generation. Specifically, the LVO algorithm computes the hiding probability of each element of the latent vector according to its impact on the quality of the stego-image generated from that vector, and then hides more information in elements with higher hiding probability. Extensive experiments demonstrate that, compared with current GIS schemes, the proposed LVO-based GIS scheme generates higher-quality images while maintaining hiding capacity (up to 5.0 bpp) and accurate information extraction (almost 100% accuracy).
Synthetic healthcare data utility with biometric pattern recognition using adversarial networks
This research examines the significance of the privacy of synthetic data in healthcare and biomedicine through an analysis of actual data. The significance of authentic healthcare data necessitates that such data be transmitted securely and exclusively to authorized users. Therefore, to minimise reliance on actual data, synthetic data are developed by incorporating diverse biometric pattern representations, necessitating a distinct setup with adversarial scenarios. To improve the quality of the synthetic data, a deep convolutional adversarial network is examined under several operational modes, and a distinct conditional metric is employed to avert the loss of synthetic data, thus ensuring consistent transmissions. The system model is developed by examining numerous parameters associated with matching, classification losses, biometric privacy, information leakage, data relocations, and deformations, which are merged with a corresponding adversarial framework. To validate the integrated system model, four scenarios and two case studies are examined, demonstrating that successful data creation can be achieved artificially with minimal losses of 5%.
The Internet of Things (IoT) paves the way for modern smart industrial applications and cities. A Trusted Authority acts as the sole control point for monitoring and maintaining communications between IoT devices and the infrastructure. Communication between IoT devices passes from one trusted entity of an area to another by way of generated security certificates; however, establishing trust by generating security certificates for the IoT devices in a smart city application can be costly. To address this, a secure group authentication scheme that creates trust amongst a group of IoT devices owned by several entities is proposed. Most existing authentication techniques are designed for individual device authentication and are merely reused for group authentication; in contrast, the Dickson polynomial based secure group authentication scheme is a dedicated solution for group authentication. The secret keys used in the proposed technique are generated using the Dickson polynomial, which enables the group to authenticate without generating excessive network traffic overhead. Blockchain technology is employed to enable secure, efficient, and fast data transfer among the IoT devices of each group deployed at different places. The proposed scheme is resistant to replay, man-in-the-middle, tampering, side-channel, signature forgery, impersonation, and ephemeral key secret leakage attacks; to accomplish this, a hardware-based physically unclonable function is implemented. The implementation has been carried out in Python and deployed and tested on blockchain using the Ethereum Goerli testnet framework.
Performance analysis has been carried out against various benchmarks, and the proposed framework is found to outperform its counterparts across various metrics. Different parameters are also utilized to assess the performance of the proposed blockchain framework, showing better performance in terms of computation, communication, storage, and latency.
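The Dickson polynomial recurrence at the heart of such key generation can be sketched as follows. The useful property is the semigroup identity D_m(D_n(x, 1)) = D_mn(x, 1), which lets parties applying secret indices in either order reach the same shared value. This is a hedged toy sketch: the modulus, base point, and indices are illustrative choices, not the paper's protocol parameters.

```python
def dickson(n, x, p, a=1):
    """Evaluate the Dickson polynomial D_n(x, a) modulo p via the
    recurrence D_0 = 2, D_1 = x, D_n = x*D_{n-1} - a*D_{n-2}."""
    if n == 0:
        return 2 % p
    prev, cur = 2 % p, x % p
    for _ in range(n - 1):
        prev, cur = cur, (x * cur - a * prev) % p
    return cur

p = 2_147_483_647   # a Mersenne prime modulus (illustrative; real keys are larger)
x = 123456          # public base value (illustrative)
m, n = 17, 29       # secret indices held by two group members

# Semigroup property: composing in either order yields the same secret.
shared1 = dickson(m, dickson(n, x, p), p)
shared2 = dickson(n, dickson(m, x, p), p)
assert shared1 == shared2 == dickson(m * n, x, p)
```

Because the identity D_m(D_n(x, 1)) = D_mn(x, 1) holds as a polynomial identity over any ring, the check passes for any modulus; security in a real scheme comes from choosing parameters that make recovering the indices hard.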
Diagnostic structure of visual robotic inundated systems with fuzzy clustering membership correlation
Using robotic technology to examine underwater systems remains a difficult undertaking because most automated activities lack network connectivity. The suggested approach therefore identifies this principal gap in undersea systems and fills it using robotic automation. In the predicted model, an analytical framework is created to operate the robot within predetermined areas while maximizing communication ranges. Additionally, a clustering algorithm with a fuzzy membership function is implemented, allowing the robots to advance in accordance with predefined clusters and return to their starting place within a predetermined amount of time. A cluster node is connected in each clustered region and provides the central control center with the necessary data. The weights are evenly distributed, and the designed robotic system is installed to prevent an uncontrolled operational state. Five different scenarios are used to test and validate the created model; in each case, the proposed method proves superior to the current methodology in terms of range, energy, density, time periods, and total metrics of operation.
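A fuzzy membership computation of the kind that drives such clustering can be sketched as below. This follows the standard fuzzy c-means membership formula rather than the paper's exact function, and the robot position and cluster centers are illustrative assumptions.

```python
import numpy as np

def fuzzy_memberships(point, centers, m=2.0):
    """Fuzzy c-means style membership of a point in each cluster:
    u_j = d_j^(-2/(m-1)) / sum_k d_k^(-2/(m-1)), with d the distances."""
    d = np.array([np.linalg.norm(np.asarray(point) - np.asarray(c))
                  for c in centers])
    if np.any(d == 0):                  # point sits exactly on a center
        return (d == 0).astype(float) / np.sum(d == 0)
    inv = d ** (-2.0 / (m - 1))
    return inv / inv.sum()

# A robot position midway between two cluster-head regions (illustrative)
u = fuzzy_memberships([2.0, 0.0], centers=[[0.0, 0.0], [4.0, 0.0]])
assert np.allclose(u.sum(), 1.0)       # memberships always sum to one
assert np.allclose(u, [0.5, 0.5])      # equidistant point: equal membership
```

The soft memberships let a robot be associated with more than one clustered region at once, which is what allows graded hand-over between cluster nodes rather than hard switching.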
This paper proposes and implements a deep learning-based image processing approach for autonomous apple picking. The system includes a lightweight one-step detection network for fruit recognition, together with computer vision to analyze the grasp-point class and anticipate a correct approach position for each fruit before grabbing. Using raw inputs from a high-resolution camera, fruit recognition and instance segmentation are performed on RGB photos. The computer vision classification and grasping systems are integrated: outcomes from tree-grown fruits are provided as input, and the output is a pose for every apple and orange for robotic arm execution. The developed vision method is evaluated on RGB picture data acquired from laboratory and plantation environments. Robotic harvesting experiments are conducted both indoors and outdoors to evaluate the proposed harvesting system's performance. The research findings suggest that the proposed vision technique can control robotic harvesting effectively and precisely, with an identification success rate above 95% in the post-prediction process and a reattempt rate of less than 12%.
Robotic technology holds a significant role within the realm of smart industries, wherein all functionalities are executed within real-time systems. The verification of robot operations is a crucial aspect in the context of Industry 5.0. To address this requirement, a distinctive design methodology known as SL-RI is proposed. This article aims to establish the significance of incorporating robots in the Industry 5.0 framework through analytical representations. In the context of this industrial monitoring system, the implementation of a supplementary algorithm is essential for effective management, as it enables the robots to acquire knowledge through the analysis and adaptation of restructured commands. The analytical model of the robots is designed to accurately monitor their precise positions and accelerations, resulting in full-scale representations with minimal error conditions. The uniqueness of the proposed robotic monitoring method lies in its direct application to Industry 5.0 across various parametric cases, where the active movement of robots is monitored with rotational matrix representations. The significance of this representation lies in capturing the full movement of robots across various machines and their data handling characteristics, which provide low loss and error factors.
Correction: Rabie et al. A Proficient ZESO-DRKFC Model for Smart Grid SCADA Security. Electronics 2022, 11, 4144
In the original publication [...]
The detection and classification of epileptic seizures from EEG signals have gained significant attention in recent decades. Among other signals, EEG signals are extensively used by medical experts for diagnostic purposes, so most existing research works have developed automated mechanisms for EEG-based epileptic seizure detection. Machine learning techniques are widely used for their reduced time consumption, high accuracy, and optimal performance; still, they are limited by high complexity in algorithm design, increased error values, and reduced detection efficacy. Thus, the proposed work intends to develop an automated epileptic seizure detection system with an improved performance rate. Here, the Finite Linear Haar wavelet-based Filtering (FLHF) technique is used to filter the input signals, and the relevant set of features is extracted from the normalized output with the help of Fractal Dimension (FD) analysis. Then, the Grasshopper Bio-Inspired Swarm Optimization (GBSO) technique is employed to select the optimal features by computing the best fitness value, and the Temporal Activation Expansive Neural Network (TAENN) mechanism is used to classify the EEG signals as either normal or seizure-affected. Numerous intelligence algorithms, spanning preprocessing, optimization, and classification, have been used in the literature to identify epileptic seizures from EEG signals. The primary issues facing the majority of optimization approaches are reduced convergence rates and higher computational complexity, while machine learning approaches suffer from significant method complexity, intricate mathematical calculations, and decreased training speed. Therefore, the goal of the proposed work is to put efficient algorithms for the recognition and categorization of EEG-based epileptic seizures into practice.
The combined effect of the proposed FLHF, FD, GBSO, and TAENN models dramatically improves disease detection accuracy while decreasing system complexity and time consumption compared to prior techniques. Using the proposed methodology, the overall average epileptic seizure detection performance is increased to 99.6%, with an f-measure of 99% and a G-mean of 98.9%.
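A one-level Haar wavelet filtering step, of the kind FLHF builds on, can be sketched as pairwise averages and differences of the signal. The FLHF details in the paper may differ; the sample signal here is illustrative.

```python
import numpy as np

def haar_filter(signal):
    """One-level Haar wavelet decomposition: pairwise sums (approximation)
    and differences (detail), each scaled by 1/sqrt(2)."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:                       # pad odd-length signals
        s = np.append(s, s[-1])
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

eeg = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0]   # toy EEG samples
a, d = haar_filter(eeg)
# The approximation keeps the low-frequency trend, the detail the
# high-frequency fluctuations used for seizure-related features.
assert np.allclose(a * np.sqrt(2), [10.0, 22.0, 14.0])
assert np.allclose(d * np.sqrt(2), [-2.0, -2.0, 2.0])
```

Repeating the step on the approximation coefficients yields the usual multi-level wavelet decomposition from which features are extracted.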
Ensuring data privacy and security in cloud-based Internet of Things (IoT) technologies is the most important and difficult challenge the digital society has recently faced. As a result, many researchers regard the blockchain's Distributed Ledger Technology (DLT) as a good choice for various smart applications. Nevertheless, it encounters constraints and difficulties with elevated computing expenses, temporal demands, operational intricacy, and diminished security. Therefore, the proposed work aims to develop an intelligent and effective Decentralized Identifiable Distributed Ledger Technology-Blockchain (DIDLT-BC) framework that requires the least amount of computing complexity to ensure cloud IoT system safety. Here, the Rabin algorithm produces the digital signature needed to start the transaction, and public and private keys are then created to verify the transactions. The block is then built using the DIDLT model, which includes the block header information, hash code, timestamp, nonce message, and transaction list. The primary purpose of the Blockchain Consent Algorithm (BCA) is to find solutions for numerous unreliable nodes with varying hash values. The novel contribution of this work is to incorporate the operations of Rabin digital signature generation, DIDLT-based blockchain construction, and the BCA algorithm to ensure overall data security in IoT networks. With proper digital signature generation, key generation, blockchain construction, and validation operations, secured data storage and retrieval are enabled in cloud-IoT systems. Using this integrated DIDLT-BCA model, the security performance of the proposed system is greatly improved, with 98% security, an execution time as low as 150 ms, and a mining time reduced to 0.98 s.
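The Rabin signature step that starts a transaction can be sketched with toy parameters: signing means finding a modular square root of a padded hash, which is easy with the trapdoor primes p, q ≡ 3 (mod 4) and believed hard without them. Real deployments use large primes; the primes, padding loop, and helper names here are illustrative assumptions, not the paper's exact construction.

```python
import hashlib

p, q = 1019, 1031        # toy primes, both ≡ 3 (mod 4); real keys are 2048-bit+
n = p * q                # public modulus

def h(msg, pad):
    """Hash the message with a counter pad, reduced modulo n."""
    digest = hashlib.sha256(msg + pad.to_bytes(4, "big")).digest()
    return int.from_bytes(digest, "big") % n

def sign(msg):
    """Find a pad making H(msg||pad) a quadratic residue mod n, then take
    a square root using the p, q ≡ 3 (mod 4) shortcut and CRT."""
    pad = 0
    while True:
        H = h(msg, pad)
        sp = pow(H, (p + 1) // 4, p)
        sq = pow(H, (q + 1) // 4, q)
        if sp * sp % p == H % p and sq * sq % q == H % q:
            # CRT-combine the two roots into a single root modulo n
            s = (sp * q * pow(q, -1, p) + sq * p * pow(p, -1, q)) % n
            return s, pad
        pad += 1             # H was not a residue; try the next padding

def verify(msg, s, pad):
    """A signature is valid iff s^2 ≡ H(msg||pad) (mod n)."""
    return s * s % n == h(msg, pad)

sig, pad = sign(b"transaction-001")
assert verify(b"transaction-001", sig, pad)
```

Verification needs only one modular squaring, which is what makes Rabin attractive for resource-constrained IoT nodes.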
In this paper, advanced features of 6G networks are discussed by examining the security of consumer electronic products. With the rapid growth of consumer electronics, network features are continually updated to provide fast responses to end users, but the security of transmissions remains a major concern. Hence, a collaborative framework is formulated in the proposed method that resolves the uncertainties that arise when consumer electronic products are operated over 6G networks. The major significance of the proposed method is to identify all problems that occur in consumer electronic products operated with advanced technological networks such as 6G and advanced 6G communications. Foremost importance is therefore given to problem identification using an advanced artificial intelligence algorithm, where electronic products are identified using natural language processors that convert machine language into identifiable form, thereby achieving expert solutions. Here, a unique AI model known as the Deep Adaptive Neuro Convoluted Chameleon Classifier (DANC3) is used for data classification, which aids in the identification and categorization of consumer data acquired from 6G networks. In each case, the objective functions are optimized to maximize the security of data transmission, and the error for every consumer electronic product in 6G is reduced below 1%.
In this paper, a design model for resource allocation beyond 5G networks is formulated for effective data allocation at each network node. In all networks, data is transmitted only after all resources have been allocated, and an unrestrained approach is established because the examination of resources is not carried out in the usual manner. However, if data transmission needs to occur, essential resources can be added to the network, and these resources can be shared using a parallel optimization approach, as outlined in the projected model. Further, the designed model is tested and verified with four case studies using a resource allocator toolbox with parallax, where the resources for power and end users are limited within the ranges of 1.4% and 6%. In the other two case studies, which involve coefficient determination and blockage factors, the outcomes of the proposed approach fall within marginal error constraints of approximately 31% and 87%, respectively.
The consumption of water is fundamental to the physical health of most living species, and hence managing its purity and quality is extremely essential, as contaminated water has the potential to create adverse health and environmental consequences. This creates the dire necessity to measure, control, and monitor the quality of water. The primary contaminant present in water is Total Dissolved Solids (TDS), which is hard to filter out; beyond mere solids, there are various substances such as potassium, sodium, chlorides, lead, nitrate, cadmium, arsenic, and other pollutants. The proposed work aims to automate water quality estimation through Artificial Intelligence and uses Explainable Artificial Intelligence (XAI) to explain the most significant parameters contributing to the potability of water and the estimation of impurities. XAI offers transparency and justifiability as a white-box model, whereas a Machine Learning (ML) model is a black box unable to describe the reasoning behind its classification. The proposed work uses various ML models such as Logistic Regression, Support Vector Machine (SVM), Gaussian Naive Bayes, Decision Tree (DT), and Random Forest (RF) to classify whether the water is drinkable. The various XAI representations, such as the force plot, test patch, summary plot, dependency plot, and decision plot generated by the SHAP explainer, explain the significant features, prediction score, feature importance, and justification behind the water quality estimation. The RF classifier is selected for the explanation and yields an optimum Accuracy and F1-Score of 0.9999, with Precision and Recall of 0.9997 and 0.998, respectively. Thus, the work is an exploratory analysis of the estimation and management of water quality with indicators associated with their significance, and an emerging line of research with a vision of addressing water quality for the future as well.
In contemporary real-time applications, diminutive devices increasingly employ a greater portion of the spectrum to transmit data, despite the relatively small size of that data. The demand for big data in cloud storage networks is on the rise, as cognitive networks can enable intelligent decision-making with minimal spectrum utilization. The introduction of cognitive networks has provided a novel channel that enables the allocation of low-power resources while minimizing path loss. The proposed method integrates three algorithms to examine big data processing. Whenever big data applications are examined, distance measurement, decision mechanisms, and learning from past data are of great importance; the algorithms are therefore chosen according to the requirements of big data and cloud storage networks. Further, the effect of the integration process is examined with three case studies considering low resources, path loss, and weight functions, where optimized outcomes are achieved in all defined case studies compared to the existing approach.
Health services and telemedicine have proven to be an important area for information protection research, especially in medical services and smart healthcare applications. In these systems, medical image protection is important not only for clinical diagnosis but also to protect highly sensitive and confidential patient data. With progress in imaging technologies and biomedical processing algorithms, the amount of image data increases rapidly, yet securing this information while transferring it through insecure channels remains a constant challenge. Existing encryption techniques often face limitations such as high computational complexity, insufficient security against advanced cryptographic attacks, poor reversibility, and pixel correlation. To overcome these challenges, the proposed approach provides an innovative hybrid encryption technique that integrates DNA cryptography with Elliptic Curve Cryptography (ECC): DNA-based coding exhibits high randomness, while ECC provides strong security and confidentiality. DNA encoding and secure key generation are employed in the proposed technique to obtain the encrypted medical image. The combination of these techniques addresses the main limitations of existing methods by increasing both security and computational efficiency, making it well suited for real-time medical applications. The experimental analysis was carried out with various parameters such as histogram analysis, correlation coefficient, Chi-square, MSE, PSNR, and entropy. The results show that the proposed methodology outperforms state-of-the-art existing methods, with an entropy of 7.9981, a correlation coefficient of 0.0019, and a PSNR of 53.97. The proposed methodology is also tested through runtime, memory, and security analyses.
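The DNA-encoding stage can be sketched with a common two-bits-per-base mapping (00→A, 01→C, 10→G, 11→T). The actual scheme combines this with ECC-based key generation, omitted here; the mapping choice is an assumption for illustration.

```python
BASES = "ACGT"   # one common 2-bit mapping: 00->A, 01->C, 10->G, 11->T

def dna_encode(data: bytes) -> str:
    """Encode each byte as four DNA bases, two bits per base (MSB first)."""
    return "".join(BASES[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

def dna_decode(strand: str) -> bytes:
    """Invert the mapping: four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        b = 0
        for ch in strand[i:i + 4]:
            b = (b << 2) | BASES.index(ch)
        out.append(b)
    return bytes(out)

pixel_block = bytes([0b00011011, 255])   # two toy image bytes
strand = dna_encode(pixel_block)
assert strand == "ACGTTTTT"
assert dna_decode(strand) == pixel_block
```

In a full pipeline, the DNA strand would then be permuted and combined with an ECC-derived keystream before transmission, so the encoding itself carries no security on its own.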
Intelligent traffic congestion forecasting using BiLSTM and adaptive secretary bird optimizer for sustainable urban transportation
Traffic congestion forecasting is one of the major elements of Intelligent Transportation Systems (ITS). Traffic congestion in urban road networks significantly influences sustainability by increasing air pollution levels; efficient congestion management enables drivers to bypass heavily trafficked areas, thereby reducing pollutant emissions. However, accurately forecasting congestion spread remains challenging due to the complex, dynamic, and non-linear nature of traffic patterns. The advent of Internet of Things (IoT) devices has introduced valuable datasets that can support the development of intelligent and sustainable transportation for modern cities. This work presents a Deep Learning (DL) approach: a Reinforcement Learning (RL) based Bidirectional Long Short-Term Memory (BiLSTM) with an Adaptive Secretary Bird Optimizer (ASBO) for traffic congestion prediction. Experiments on the Traffic Prediction Dataset achieved a Mean Square Error (MSE) of 0.015 and a Mean Absolute Error (MAE) of 0.133. Compared to existing algorithms such as RL, Deep Q Learning (DQL), LSTM, and BiLSTM, the RL-BiLSTM with ASBO improved MSE, RMSE, R2, MAE, and MAPE by 37%, 27.44%, 26%, 33.52%, and 35.8%, respectively. This performance demonstrates that RL-BiLSTM with ASBO is well suited to predicting congestion patterns in road networks.
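The MSE and MAE metrics reported above are computed as below; the toy observations and predictions are illustrative, not values from the paper's dataset.

```python
def mse(y_true, y_pred):
    """Mean Square Error: average of squared prediction errors."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean Absolute Error: average of absolute prediction errors."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy congestion levels (0 = free flow, 1 = jammed) vs. forecasts
observed  = [0.2, 0.5, 0.9, 0.4]
predicted = [0.1, 0.6, 0.8, 0.6]
assert abs(mse(observed, predicted) - 0.0175) < 1e-9
assert abs(mae(observed, predicted) - 0.125) < 1e-9
```

MSE punishes large misses quadratically while MAE weights all errors linearly, which is why both are usually reported together for traffic forecasts.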
With the advent of Web 2.0 and the popularization of online shopping applications, there has been a huge upsurge of user-generated content in recent times. Leading companies and top brands are trying to exploit this data, using opinion mining to analyze market demand and the reach of their products among consumers. Sentiment analysis is a hot topic of research in the e-commerce industry. This paper proposes a novel sentence-level sentiment analysis approach for mining online product reviews using natural language processing and deep learning techniques. The proposed model consists of various stages: web crawling and collecting product reviews, preprocessing, feature extraction, sentiment analysis, and polarity classification. The input reviews are preprocessed using natural language processing techniques like tokenization, lemmatization, stop-word removal, named entity recognition, and part-of-speech tagging. Feature extraction is done using a bidirectional gated recurrent unit (BiGRU) feature extractor, and the sentiments are classified into three polarities (positive, negative, and neutral) using a hybrid recurrent neural network-based long short-term memory classifier. The specific combination of techniques employed here, applied to a new kind of online product review, makes the proposed model novel. Performance evaluation metrics such as accuracy, precision, recall, F-measure, and AUC are calculated for the proposed model and compared with many existing techniques, including deep convolutional neural networks, multilayer perceptrons, CapsuleNet, and generative adversarial networks. The proposed model can be used in a variety of applications such as market research, social network mining, recommendation systems, brand analysis, and product quality management, and is found to generate promising results when compared to prevailing models.
Heterogeneous wireless networks (HWNs) present a challenge in selecting the optimal network for user devices due to the overlapping availability of multiple networks. To help users choose the best HWN connection, this research builds a decision-making framework that takes user preferences and network performance characteristics into account. Using a multi-attribute decision-making (MADM) method that incorporates fuzzy logic and the Fuzzy Analytic Hierarchy Process (FAHP), our goal is to improve the decision-making process for network selection. The suggested system takes into account a number of network metrics, including latency, jitter, bandwidth, and cost, and uses user preferences to determine the relative importance of each, guaranteeing a tailored and adaptable recommendation. Our results demonstrate that the algorithm greatly enhances the efficiency of network selection and user satisfaction, with UMTS being the best option for conversational services, WiMAX for streaming, and LTE for interactive services. Through the incorporation of user-centric decision-making into the network selection process, this research enhances adaptive wireless communication systems, leading to better user experience and network efficiency.
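The weighted multi-attribute scoring can be sketched with simple additive weighting, a common MADM baseline, in place of the full fuzzy AHP machinery. The candidate networks, metric values, and weights below are illustrative assumptions, not the paper's measurements.

```python
# Candidate networks scored on latency (ms), jitter (ms),
# bandwidth (Mbps), and cost (arbitrary units) -- illustrative values
networks = {
    "UMTS":  {"latency": 40, "jitter": 5, "bandwidth": 2,   "cost": 2},
    "WiMAX": {"latency": 60, "jitter": 8, "bandwidth": 40,  "cost": 4},
    "LTE":   {"latency": 20, "jitter": 3, "bandwidth": 100, "cost": 6},
}
COST_CRITERIA = {"latency", "jitter", "cost"}   # lower is better

def rank(weights):
    """Simple additive weighting: min-max normalise each criterion
    (inverted for cost criteria), then take the weighted sum."""
    scores = {}
    for name, attrs in networks.items():
        total = 0.0
        for crit, w in weights.items():
            vals = [networks[k][crit] for k in networks]
            lo, hi = min(vals), max(vals)
            norm = (attrs[crit] - lo) / (hi - lo)
            if crit in COST_CRITERIA:
                norm = 1.0 - norm                # flip: smaller is better
            total += w * norm
        scores[name] = total
    return max(scores, key=scores.get), scores

# A user profile for interactive services, weighting delay-related metrics
best, scores = rank({"latency": 0.4, "jitter": 0.3, "bandwidth": 0.2, "cost": 0.1})
assert best == "LTE"
```

FAHP refines this by deriving the weights from pairwise fuzzy comparisons rather than fixing them directly, but the final aggregation follows the same weighted-sum pattern.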
Personalized learning in hybrid education
The process of teaching and learning during the pandemic has been evolving globally, with many institutions transforming their approaches to enhance the teaching and learning experience. Despite the presence of improved frameworks, the varied learning capabilities of students make it quite challenging to analyse individual characteristic features. Consequently, this research provides clear insights into the integration of the Personalised Learning Approach (PLA) to foster effective interaction with students. Many existing methods suggest different techniques for evaluating learners in a hybrid mode, where obtaining clear data sets can be difficult; if the data set defined by experts is clear, decisions regarding the learning characteristics of students can be made in a shorter period. In the proposed method, the PLA framework categorizes learners into four engagement-based clusters using a three-dimensional sensor model and machine learning classifiers. A dual-controller (master-slave) mechanism dynamically adjusts communication intervals and optimizes video transmission, reducing latency and packet loss. The methodology is validated using MATLAB-based simulations with a dataset of 1,700–5,000 learners, analyzing throughput, delay, packet loss, and cost efficiency. The test results clearly demonstrate that the PLA outperforms the conventional method, not only in the parameters mentioned above but also in terms of cost-effectiveness using the master and slave controllers.
CyberSentry: Enhancing SCADA security through advanced deep learning and optimization strategies
SCADA systems form the core of infrastructural facilities, including power grids, water treatment facilities, and industrial processes. Evolving cyber threats mount increasingly sophisticated attacks against which traditional security models inadequately protect SCADA systems. These traditional models usually suffer from inadequate feature selection, inefficiency in detecting many attacks, and suboptimal parameter tuning, which introduce vulnerabilities and reduce system resilience. This paper presents CyberSentry, a new security framework designed to overcome these limitations and provide robust protection for SCADA systems. Three modules make up CyberSentry: the RMIG feature selection model, the Tri-Fusion Net for attack detection, and Parrot-Levy Blend Optimization (PLBO) for parameter tuning. The Recursive Multi-Correlation-based Information Gain (RMIG) feature selection model enhances detection accuracy by optimizing the set of vital features through recursive multi-correlation analysis with Information Gain prioritization. The Tri-Fusion Net combines anomaly detection, signature-based detection, and machine learning classifiers to enhance detection versatility and robustness. The PLBO module ensures efficient and dynamic parameter tuning by blending Parrot and Levy optimization techniques. The proposed CyberSentry framework integrates, within a unified architecture, anomaly detection, signature-based detection, and machine learning classifiers to secure SCADA systems against diverse cyber threats. Extracted features are analyzed using machine learning classifiers that exploit their predictive capabilities for robust threat classification, and the approaches are fused within the Tri-Fusion Net so that each complements the others where the separate methods lack certain strengths.
This ensures broad threat detection, as validated by extensive testing. Evaluated against a wide variety of datasets, CyberSentry demonstrates an overall accuracy of 99.5% and a loss of 0.32, proving that the method is both effective and reliable.
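The Information Gain prioritization that RMIG builds on rests on the standard entropy-based formula IG(S, F) = H(S) − Σ_v |S_v|/|S| · H(S_v), sketched here on a toy dataset. The recursive multi-correlation step is omitted, and the feature values are illustrative.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(S) of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(S, F) = H(S) - sum over values v of |S_v|/|S| * H(S_v)."""
    n = len(labels)
    remainder = 0.0
    for v in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

# Toy SCADA-style records: one feature perfectly separates the classes,
# the other is pure noise.
labels      = ["attack", "attack", "normal", "normal"]
informative = ["high", "high", "low", "low"]
noisy       = ["a", "b", "a", "b"]
assert information_gain(informative, labels) == 1.0
assert information_gain(noisy, labels) == 0.0
```

Ranking features by this score and recursively pruning correlated ones gives a compact, discriminative feature set before classification.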
Transformer-less high gain DC–DC converter design and analysis for fuel cell vehicles
High-power converters with significant gains represent established configurations that hold appeal for applications in the industrial and commercial sectors, such as fuel cell electric vehicles (FCEV), energy backup systems, and automotive headlamps. Existing literature predominantly features topologies employing a single-duty ratio. However, this singular approach may not be dependable for operations with high-duty cycles, necessitating the incorporation of additional components to enhance voltage gain. To address this, the current study introduces the concept of time-sharing within the context of a high-gain non-isolated DC–DC converter. This innovative approach achieves substantially higher output voltage gains, approximately 13.33 times that of the input voltage. The analysis of the proposed converter is approached from various perspectives. Finally, it is examined within the MATLAB/Simulink environment, where the theoretical analysis is validated, and an efficiency of 97.4% is achieved.
The advancement of technology with the Internet of Things (IoT) continues to play a crucial role in accomplishing remote medical care observation, where effective and secure healthcare information retrieval is complex. However, IoT systems have restricted resources, making it difficult to attain effective and secure healthcare information acquisition. The idea of smart healthcare has developed in diverse regions, where small-scale implementations of medical facilities are evaluated. In IoT-aided medical devices, the security of the IoT systems and related information is highly essential; edge computing, meanwhile, is a significant framework that rectifies their processing and computational issues. Edge computing is inexpensive and powerful, offering low-latency information assistance by enhancing the computation and transmission speed of IoT systems in the medical sector. The main intention of this work is to design a secure framework for edge computing in IoT-enabled healthcare systems using heuristic-based authentication and Named Data Networking (NDN). There are three layers in the proposed model. In the first layer, many IoT devices are connected together and, using cluster head formation, patients transmit their data to the edge cloud layer. The edge cloud layer is responsible for storage and computing resources, rapidly caching and providing medical data. In the patient layer, a new heuristic-based sanitization algorithm called Revised Position of Cat Swarm Optimization (RPCSO) with NDN hides the sensitive data that should not be leaked to unauthorized users. This authentication procedure is formulated as a multi-objective key generation problem considering constraints like hiding failure rate, information preservation rate, and degree of modification.
Further, the data from the edge cloud layer is transferred to the user layer, where optimal key generation with NDN-based restoration is adopted, thus achieving efficient and secure medical data retrieval. The framework is evaluated quantitatively on diverse healthcare datasets from the University of California Irvine (UCI) and Kaggle repositories, and experimental analysis shows the superior performance of the proposed model in terms of latency and cost when compared to existing solutions. The proposed model is compared against existing algorithms such as Cat Swarm Optimization (CSO), the Osprey Optimization Algorithm (OOA), Mexican Axolotl Optimization (MAO), and the Single Candidate Optimizer (SCO). Similarly, cryptography methods like Rivest-Shamir-Adleman (RSA), Advanced Encryption Standard (AES), Elliptic Curve Cryptography (ECC), and Data Sanitization and Restoration (DSR) are applied and compared with RPCSO. The results of the proposed model are compared on the basis of the best, worst, mean, median, and standard deviation. The proposed RPCSO outperforms all other models, with values of 0.018069361, 0.50564046, 0.112643119, 0.018069361, 0.156968355 for dataset 1 and 0.283597992, 0.467442652, 0.32920734, 0.328581887, 0.063687386 for dataset 2.
This research presents an analysis of smart grid units to enhance connected units’ security during data transmissions. The major advantage of the proposed method is that the system model encompasses multiple aspects such as network flow monitoring, data expansion, control association, throughput, and losses. In addition, all the above-mentioned aspects are carried out with neural networks and adaptive optimizations to enhance the operation of smart grid networks. Moreover, the quantitative analysis of the optimization algorithm is discussed concerning two case studies, thereby achieving early convergence at reduced complexities. The suggested method ensures that each communication unit has its own distinct channels, maximizing the possibility of accurate measurements. This results in the provision of only the original data values, hence enhancing security. Both power and line values are individually observed to establish control in smart grid-connected channels, even in the presence of adaptive settings. A comparison analysis is conducted to showcase the results, using simulation studies involving four scenarios and two case studies. The proposed method exhibits reduced complexity, resulting in a throughput gain of over 90%.
Quantum secure patient login credential system using blockchain for electronic health record sharing framework
Nowadays, most medical records are maintained in a digital format known as the Electronic Health Records Sharing (EHRS) framework, and patients have individual login credentials for accessing these records. In blockchain technology (BCT), each block maintains information about its owner and its dependency on other blocks; moreover, each block is linked with its neighbouring blocks, creating a network controlled by the patients responsible for storing and sharing the information. In healthcare, BCT can support mobile health apps, monitoring equipment, the sharing and keeping of clinical trial data, electronic medical records, and insurance information storage. This study proposes a secure Patient Login Credential System (PLCS) for EHRS. The proposed scheme performs block encryption with symmetric and asymmetric cryptography algorithms on the hospital-server and patient sides. Additionally, the Quantum Secure Trust Protocol (QSTP) is integrated to enhance trust and security between the patient side and hospital side, maintaining data integrity and confidentiality, and the Tune Swarm Optimization (TSO) algorithm is utilized to optimize performance metrics. The security analysis of the proposed scheme has been evaluated against basic security assumptions for information systems, such as availability, access control, forward secrecy, and data integrity. The proposed scheme demonstrated enhanced security and performance, with IDEA achieving encryption in 58 ms and decryption in 278 ms for a 512-bit block, offering the best performance in terms of encryption speed.
Linear regressive weighted Gaussian kernel liquid neural network for brain tumor disease prediction using time series data
A brain tumor is an abnormal growth of cells within the brain or surrounding tissues, which can be either benign or malignant. Brain tumors develop in various regions of the brain, each affecting different functions such as movement, speech, and vision, depending on their location. Early prediction of brain tumors is crucial for improving survival rates and treatment outcomes. Advanced techniques, including medical imaging and machine learning, are widely used for early diagnosis. However, conventional machine learning and deep learning detection models face challenges in achieving high accuracy in brain tumor disease prediction while minimizing time complexity. To address this, a novel Linear Regressive Weighted Gaussian Kernel Liquid Neural Network (LRWGKLNN) model is developed. The proposed LRWGKLNN model comprises four major steps, namely data acquisition, preprocessing, feature selection, and classification. In the initial step, a large volume of time-series data samples is collected from a comprehensive dataset. Following data collection, preprocessing is performed, involving two key processes: handling missing data and outlier detection. First, the proposed LRWGKLNN model handles missing values using a linear regression method. After the imputation process, outlier data is identified and removed using the Generalized Extreme Studentized Deviation test. Once preprocessing is complete, the Cosine Congruence Weighted Majority Algorithm is employed to select significant features from the dataset while removing irrelevant features. This step helps minimize the brain tumor disease prediction time. Finally, the classification process is performed using the selected significant features with the Gaussian Kernelized Liquid Neural Network. This approach enhances the accuracy of brain tumor disease prediction using time-series data samples. 
The experimental evaluation is carried out using various performance metrics, namely accuracy, precision, recall, F1 score, and disease prediction time, with respect to the number of time-series data samples. The obtained results demonstrate that the proposed LRWGKLNN model achieves 4% higher accuracy, 4% higher precision, 5% higher recall, 4% higher specificity, and 4% higher F1 score in brain tumor disease prediction. Furthermore, the LRWGKLNN model achieves a substantial 16% reduction in time consumption through feature selection compared to existing deep learning methods.
Diagnostic behavior analysis of profuse data intrusions in cyber physical systems using adversarial learning techniques
In this paper, we propose a Cyber-Physical Systems (CPS) framework to mitigate intrusions in an existing dataset by constructing a distinctive system model with an analytical framework. With the exponential growth of data network topologies, CPS face various sorts of intrusions across all data management strategies. It is therefore imperative to eradicate any data associated with intrusions, as it may inflict significant harm on other users. The analytical framework for CPS is designed to distinguish between true and false data samples and to assess the failure rate of each data sample set. The primary contribution of the system model, which incorporates a learning technique, is to reduce data loss, eliminating all intrusions under conditions of minimal loss through the use of generators and discriminators. Furthermore, the integrated framework is evaluated in real time, and simulations demonstrate that it is significantly more effective in reducing failure rates, data losses, and state count durations. The simulated outcomes are also contrasted with existing methodologies that do not incorporate learning methods. The comparative results for the suggested method indicate only 1% data loss, allowing real-time deployment without data integrity issues, and an average efficacy of 97%.
Enhancing Energy Efficiency via Artificial Intelligence
This paper proposes the development of a series of phototransistors capable of converting light energy into electrical energy, which can be stored and utilized for various purposes. In regions where solar devices are ineffective due to the absence of sunlight, or during power cuts, this alternative energy source can be employed effectively. By harnessing ambient light, these phototransistors offer a sustainable solution for indoor environments, such as pubs and indoor stadiums, where traditional power sources may be unreliable. The project involves a thorough background study, including research on phototransistor technology, energy storage methods, and potential applications in different settings. Implementing this technology could significantly reduce dependency on conventional power sources and promote energy efficiency in various indoor spaces.
This article examines the impact of generative artificial intelligence optimizations in automating the content generation process. Alongside content production, this involves the identification of fraudulent content, which is often characterized by dynamic patterns. The generated contents are constrained, which limits their dimensionality, and duplicated contents are eliminated from the automatic creations. The generated ratios are then utilized to discover current patterns with minimized losses and errors, enhancing the accuracy of generated content. While analysing the created patterns, we also detect a significant discrepancy in lead durations, resulting in high scores for relevant information. To test the results with generative tools, adversarial network codes are employed in four scenarios, which involve generating large patterns and reducing the dynamic patterns with an enhanced accuracy of 97% for the projected model, in contrast to the existing approach, which provides a content accuracy of only 77% after detecting fraud.
In many emerging nations, rapid industrialization and urbanization have led to heightened levels of air pollution. This rise in air pollution, which affects global sustainability and human health, has become a significant concern for citizens and governments. While most current methods for predicting air quality rely on shallow models and often yield unsatisfactory results, our study explores a deep architectural model for forecasting air quality. We employ a sophisticated deep learning structure to develop an advanced system for ambient air quality prediction, utilizing three publicly available databases and real-world data to obtain accurate air quality measurements. These four datasets undergo data cleaning to yield a consolidated, cleaned dataset. Subsequently, the Fused Eurasian Oystercatcher-Pathfinder Algorithm (FEO-PFA), a dual optimization method combining the Eurasian Oystercatcher Optimizer (EOO) and Pathfinder Algorithm (PFA), is applied. This method aids in selecting weighted features, optimizing weights, and choosing the most relevant attributes for optimal results. These optimal features are then incorporated into the Multiscale Depth-wise Separable Adaptive Temporal Convolutional Network (MDS-ATCN) for the ambient Air Quality Prediction (AQP) process. The variables within MDS-ATCN are further refined using the proposed FEO-PFA to enhance predictive accuracy. An empirical analysis comparing the efficacy of our proposed model with traditional methods underscores the superior effectiveness of our approach. According to the performance analysis conducted across all datasets, the suggested method reduces the average cost function to 5.5%, the MAE to 28%, and the RMSE to 14%.
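Since FEO-PFA itself is not publicly specified here, the weighted-feature-optimization idea can be illustrated with a plain hill-climbing stand-in (a minimal sketch on assumed toy data; `optimize_weights` and its fitness function are illustrative, not the paper's algorithm):

```python
import random

def feature_score(weights, rows, targets):
    """Toy fitness: negative squared error of a weighted-sum predictor."""
    err = 0.0
    for row, t in zip(rows, targets):
        pred = sum(w * x for w, x in zip(weights, row))
        err += (pred - t) ** 2
    return -err

def optimize_weights(rows, targets, iters=2000, seed=1):
    """Stochastic hill climbing over feature weights: perturb the current
    best weights with Gaussian noise and keep any improvement."""
    rng = random.Random(seed)
    n = len(rows[0])
    best = [rng.uniform(-1, 1) for _ in range(n)]
    best_s = feature_score(best, rows, targets)
    for _ in range(iters):
        cand = [w + rng.gauss(0, 0.1) for w in best]
        s = feature_score(cand, rows, targets)
        if s > best_s:
            best, best_s = cand, s
    return best, best_s
```

A metaheuristic such as FEO-PFA replaces this single-candidate search with a population of cooperating candidates, but the accept-if-better structure is the same.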
Experimental analysis of enhanced finite set model predictive control and direct torque control in SRM drives for torque ripple reduction
The speed-torque characteristics of the magnet-less switched reluctance motor (SRM) are ideally suited to traction motor drives, with the advantage of minimizing the overall cost of on-road EVs. The main drawbacks are torque and flux ripple, which are high in low-speed operation. The emerging direct torque control (DTC), which estimates flux magnitude and torque with voltage vectors (VVs), gives high torque ripples due to the selection of effective switching states and sector-partition accuracy. On the other hand, the existing model predictive control (MPC), with multiple objectives and optimization weighting factors, produces high torque ripples due to system dynamics and constraints. Therefore, both existing DTC and MPC can result in high torque ripples. This paper proposes a finite-set (FS) MPC with a single cost-function objective and no weighting factor: the predicted torque is used to evaluate VVs to minimize the ripples further. The selected optimal VV minimizes the SRM drive's torque and flux ripples in steady-state and dynamic behaviour. The classical DTC and the proposed model were developed, and simulation results were verified using MATLAB/Simulink. Experimental results on SRM drives prove the effective minimization of torque and flux ripples by the proposed model compared to the existing DTC.
In air-to-ground transmissions, the lifespan of the network depends on the unmanned aerial vehicle's (UAV's) lifespan because of its limited battery capacity. Thus, enhancing energy efficiency and minimizing ground users' outage are significant factors for network functionality. UAV-aided transmission can greatly enhance spectrum efficiency and coverage. Because of their flexible deployment and high maneuverability, UAVs can be the best alternative in situations where Internet of Things (IoT) systems utilize more energy to attain the essential information rate when they are far away from the terrestrial base station. It is therefore important to overcome the limitations of conventional UAV-aided efficiency approaches. This work aims to design an innovative energy efficiency framework for UAV-assisted networks using a reinforcement learning mechanism. Energy efficiency optimization in the UAV offers better wireless coverage to static and mobile ground users. Current reinforcement learning techniques optimize the energy efficiency of the system by employing a 2D trajectory mechanism, which effectively removes the interference arising in nearby UAV cells. The main objective of the recommended framework is to maximize the energy efficiency of the UAV network by jointly optimizing the UAV's 3D trajectory, the energy utilized (accounting for interference), and the number of connected users. Hence, an efficient Adaptive Deep Reinforcement Learning with Novel Loss Function (ADRL-NLF) framework is designed to provide a better energy efficiency rate for the UAV network. Moreover, the parameters of the ADRL are tuned using the Hybrid Energy Valley and Hermit Crab (HEVHC) algorithm. Various experimental observations are performed to assess the effectiveness of the recommended energy efficiency model for UAV-based networks against classical energy efficiency frameworks for UAV networks.
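The reinforcement-learning core of such a framework can be sketched with tabular Q-learning (a deliberately reduced stand-in: a 1-D corridor replaces the 3-D trajectory space, and the per-step cost loosely models energy use; this is not the ADRL-NLF or HEVHC algorithm):

```python
import random

def train_q(episodes=500, n=5, goal=4, seed=0):
    """Tabular Q-learning on a 1-D corridor: the agent (standing in for
    the UAV) learns the shortest path to the ground user at cell `goal`,
    trading a -1 step cost (energy) against a +10 terminal reward."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n)]          # actions: 0 = left, 1 = right
    alpha, gamma, eps = 0.5, 0.9, 0.2
    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else (0 if q[s][0] > q[s][1] else 1)
            s2 = max(0, min(n - 1, s + (1 if a == 1 else -1)))
            r = 10.0 if s2 == goal else -1.0
            # standard Q-learning update toward the bootstrapped target
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, the greedy policy moves right in every state, i.e. toward the user with the fewest energy-costing steps; deep RL replaces the table `q` with a neural network over continuous 3-D states.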
Cone-structured seven-level boost inverter topology for improving power quality using an online monitoring controller scheme for DSTATCOM application
In this article, a seven-level triple-time voltage-boosting topology (8S7L-TTB) is proposed, which uses a minimum of eight switches. This topology is used as a DSTATCOM to eliminate power-quality issues. In a three-phase four-wire distribution system, unpredictable issues have emerged with the increase of unbalanced linear and non-linear loads, causing drawbacks such as voltage imbalance, poor voltage regulation, increased reactive components, and harmonic generation, which shorten the system's lifetime, produce undesirable heating, and reduce the RMS voltage. Hence, it is highly essential to eliminate these issues. In this research, a PV-based DSTATCOM with an online-monitoring adaptive Chebyshev neural network controller and triple-boost inverter topology is introduced. The proposed controller compares the three-phase system's D-Q components, continuously extracted under instant-loading conditions, with the reference magnitude. The error signal from the adaptive Chebyshev neural network controller triggers the proposed inverter's switching devices using a multi-carrier pulse-width-modulation technique, yielding a THD of 2.79%. Furthermore, the proposed seven-level inverter and controller solve the aforementioned problems and maintain the floating capacitor's voltage nearly equal to the source voltage under different loading conditions, ensuring a system efficiency of 96.82%. To ensure its suitability in real time, the proposed controller is simulated with MATLAB software and validated through a downscaled experimental setup, and the results are observed.
With technology development, the growing self-communicating devices in IoT networks require specific naming and identification, mainly provided by IPv6 addresses. The IPv6 address in the IoT network is generated by using the stateless auto address configuration (SLAAC) mechanism, and its uniqueness is ensured by the DAD protocol. Recent research suggests that IPv6 deployment can be a risky decision due to the existing SLAAC-based addressing scheme and the DAD protocol being prone to reconnaissance and denial of service (DoS) attacks. This research paper proposes a new IPv6 generation scheme with an improved secure DAD mechanism to address these problems. The proposed addressing scheme generates IPv6 addresses by taking a hybrid approach based on vendor id of medium access control (MAC) address, physical location, and arbitrary random numbers, which mitigates reconnaissance attacks by malicious nodes. To prevent the DAD process from DoS attacks, hybrid values of interface identifier (IID) are multicast instead of actual values. The proposed scheme is evaluated under reconnaissance and DoS attacks in the presence of malicious nodes. The evaluation results demonstrate that the proposed method effectively mitigates reconnaissance and DoS attacks, outperforming the EUI-64 and SEUI-64 schemes in terms of address success rate (ASR), energy consumption, and communication overhead. Specifically, the proposed method significantly reduces the average probing rate for scanning the existence of an IPv6 address, with only a 1% probing rate compared to SEUI-64’s 5% and EUI-64’s 100%. Furthermore, the additional communication overhead introduced by the proposed method is less than 13% and 11% compared to EUI-64 and SEUI-64, respectively. Additionally, the energy consumption required to assign an IPv6 address using the proposed method is lower by 12% and 5% when compared to EUI-64 and SEUI-64, respectively. 
These findings highlight the effectiveness of the proposed method in enhancing security and optimizing resource utilization in IPv6 addressing.
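The hybrid address-generation idea, combining the MAC vendor id, physical location, and an arbitrary random number, can be sketched as follows (an illustrative rendering of the derivation, not the paper's exact scheme; the hash choice, field layout, and example prefix are assumptions):

```python
import hashlib
import ipaddress
import secrets

def hybrid_iid(mac: str, location: str,
               prefix: str = "2001:db8::/64") -> ipaddress.IPv6Address:
    """Derive a 64-bit interface identifier (IID) from the MAC vendor id
    (first three octets only), a location string, and a fresh random
    number, so the address is not enumerable from the full MAC alone."""
    vendor = mac.replace(":", "")[:6]        # OUI / vendor part only
    nonce = secrets.token_hex(8)             # arbitrary random component
    digest = hashlib.sha256((vendor + location + nonce).encode()).digest()
    iid = int.from_bytes(digest[:8], "big")  # take 64 bits for the IID
    net = ipaddress.IPv6Network(prefix)
    return net[iid % net.num_addresses]
```

Because the IID is a hash over a fresh nonce, a scanner cannot derive the address from the MAC (mitigating reconnaissance), and two derivations yield different addresses with overwhelming probability.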
Artificial Neural Networks for Data Processing: A Case Study of Image Classification
An Artificial Neural Network (ANN) is a data processing paradigm inspired by the way organic nervous systems, such as the brain, process data. The innovative structure of the information processing system is a crucial component of this paradigm: it is made up of a large number of highly interconnected processing elements (neurons) that work in parallel to solve a given problem. Unlike conventional programs, neural networks cannot be explicitly programmed to execute a specific task; like humans, an ANN learns by example. Through a learning process, an ANN is trained for a specific application, such as pattern recognition or data categorization. In biological systems, learning involves changes to the synaptic connections between neurons, and the same holds for ANNs. Artificial Neural Networks are used for classification, regression, and clustering. The stages of image processing are preprocessing, feature extraction, and classification; the extracted features are provided to the ANN, whose output is the classification. This paper provides an overview of Artificial Neural Networks, their operation, and their training, and explains their applications and benefits. An Artificial Neural Network is used to classify the MNIST dataset.
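The learn-by-example behaviour described above can be shown with the smallest possible network, a single neuron trained with the classic perceptron rule (a toy sketch on the AND function, not the MNIST classifier used in the paper):

```python
import random

def train_perceptron(samples, epochs=50, lr=0.1, seed=0):
    """One artificial neuron: weighted sum + threshold, trained by
    example with the perceptron update rule w += lr * error * x."""
    rng = random.Random(seed)
    n = len(samples[0][0])
    w = [rng.uniform(-0.5, 0.5) for _ in range(n)]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out            # 0 when the example is correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

A multilayer network for MNIST stacks many such neurons and replaces the threshold update with gradient-based training, but the learn-from-examples loop is the same.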
In this paper, the need for a quantum computing approach is analyzed for IoT applications using the 5G resource spectrum. Most IoT devices are connected for data transmission to end users with remote monitoring units, but there are insufficient data storage units, and large volumes of data cannot be processed in minimal time. Hence, in the proposed method, quantum information processing protocols and quantum algorithms are integrated to maximize data transmissions. Further, the system model is designed to check the external factors that prevent an IoT device from transmitting data to end users. With the corresponding signal and noise power, it is essential to process the transmissions, thereby increasing data proportions at end connectivity. Once quantum computations are performed, it is crucial to normalize IoT data units, establishing control over all connected nodes and creating a gateway for achieving maximum throughput. The combined system model is tested under four cases, where the comparative outcomes prove that, with queue reductions of 12%, it is possible to achieve a maximum throughput of 99%.
IoT driven healthcare monitoring with evolutionary optimization and game theory
In this paper, game theory procedures are applied to healthcare monitoring systems and analysed using two types of evolutionary algorithms that incorporate Artificial Intelligence (AI) based events. As most existing approaches face challenges in establishing real-time connectivity, optimizing decision-making processes, and minimizing latency in Internet of Things (IoT)-based healthcare applications, these limitations need to be addressed. Hence, with the analytical equivalences that are crucial in game theory, a unique system model is developed using a deterministic framework in which four key players are strategically connected to improve decision-making and security against potential data breaches. By incorporating two evolutionary algorithms, the proposed approach optimizes the state of action for each participant while reducing energy consumption and processing delay. The model is validated through four case studies, demonstrating an average improvement of 60% over existing methodologies. These findings highlight the effectiveness of integrating game theory with evolutionary optimization to enhance real-time healthcare monitoring.
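The deterministic four-player model is specific to the paper, but the underlying game-theoretic notions of a best response and a pure-strategy equilibrium can be sketched for two players (the payoff matrices here are hypothetical):

```python
def best_response(payoff, opponent_action):
    """Return the action index maximizing this player's payoff, given the
    opponent's action; payoff[row][col] = this player's utility."""
    column = [payoff[a][opponent_action] for a in range(len(payoff))]
    return max(range(len(column)), key=column.__getitem__)

def is_nash(p1, p2, a1, a2):
    """A pure-strategy profile is a Nash equilibrium when each player's
    action is a best response to the other's."""
    return best_response(p1, a2) == a1 and best_response(p2, a1) == a2
```

For instance, in a coordination game where both participants prefer to act together (payoffs `[[3, 0], [0, 1]]` for each), the profile (0, 0) is an equilibrium while (0, 1) is not; evolutionary algorithms search such strategy spaces when the game is too large to enumerate.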
QoS Transformation in the Cloud: Advancing Service Quality Through Innovative Resource Scheduling
Cloud computing (CC) has emerged as a transformative technology, offering customers unprecedented access to extensive computing resources and diverse services for hosting various applications. However, this environment comes with several challenges. While cloud users seek optimal resources for their specific requirements, the prevalent scenario often involves trading more monetary resources for less computational time. Existing algorithms, mostly focused on optimizing individual variables, lack a holistic approach, and addressing these issues necessitates a new approach that combines these conflicting objectives. This research focuses on developing and improving a dynamic task-processing framework that can find and use the optimal resources in real time. The focus extends to running applications of different types and levels of complexity on virtual machines (VMs) using the multi-objective adaptive particle swarm optimization (MAPSO) algorithm. MAPSO handles the multi-objective problem using the weighted-sum approach, and the system operates within predefined constraints to meet users' specific time limitations. Through comprehensive simulations on a wide range of datasets, the proposed methodology yields a set of non-dominated optimal solutions, improving critical quality of service (QoS) metrics, including processing time, execution cost, throughput, and task rejection ratio. The effectiveness of the MAPSO-based approach is evident in its capacity to improve these QoS aspects, clearly showing its superiority over existing algorithms such as ant colony optimization (ACO), a hybrid of the bat optimization algorithm and particle swarm optimization (BOA+PSO), and a hybrid of grey wolf optimization and artificial bee colony (GWO+ABC).
The MAPSO algorithm reduces task completion time by 5%, executes each schedule's tasks 5% to 13% faster, and lowers the calculated execution costs when compared to ACO, BOA+PSO, and GWO+ABC. Moreover, the suggested methodology convincingly outperforms existing state-of-the-art methods in terms of computational performance. This study pioneers a unique solution in cloud service provisioning by integrating multi-objective optimization within a real-time resource allocation framework. The resulting combination of intelligent resource allocation and enhanced QoS metrics promises to change how cloud-based applications are deployed. Ultimately, this work establishes a paradigm shift in balancing resource allocation and user-centric QoS optimization in cloud computing environments.
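The weighted-sum handling of conflicting objectives, for example processing time versus execution cost, can be sketched with a basic PSO (a minimal illustration, not the MAPSO algorithm; the two quadratic objectives and all parameters are toy assumptions):

```python
import random

def pso_weighted_sum(objectives, weights, dim, bounds,
                     particles=20, iters=100, seed=3):
    """Minimize a weighted sum of objectives with basic PSO:
    velocity = inertia + cognitive (personal best) + social (global best)."""
    rng = random.Random(seed)
    lo, hi = bounds
    f = lambda x: sum(w * obj(x) for w, obj in zip(weights, objectives))
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w_in, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                vel[i][d] = (w_in * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

With objectives `(x-1)^2` (time) and `(x-3)^2` (cost) at equal weights, the swarm settles near the compromise `x = 2`; MAPSO adapts the PSO parameters and operates over VM scheduling variables instead of this toy scalar.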
Enhancing lung cancer detection through integrated deep learning and transformer models
Lung cancer remains one of the most prevalent causes of cancer death, which clearly underlines the rationale for early diagnosis to enhance the life expectancy of patients afflicted with the condition. The reasons for using transformer and deep learning classifiers for lung cancer detection include their accuracy and robustness, along with their capability to handle and evaluate large data sets. Such models can utilize multiple modalities of data to give extensive information that is critical for making the right diagnosis at the right time. However, existing works encounter several limitations, including reliance on large annotated data, overfitting, high computational complexity, and limited interpretability. Moreover, the stability of these models' performance when applied to actual clinical datasets is still an open question, which greatly reduces their utilization in clinical practice. To tackle these issues, we develop a novel Cancer Nexus Synergy (CanNS) framework, which applies a Swin-Transformer UNet (SwiNet) model for segmentation, the Xception-LSTM GAN (XLG) CancerNet for classification, and Devilish Levy Optimization (DevLO) for fine-tuning parameters. This paper breaks new ground in that the presented elements are incorporated in a manner that cooperatively elevates diagnostic capability while remaining computationally light and resilient: SwiNet performs segmented analysis, XLG CancerNet provides precise classification of cases, and DevLO optimizes the parameters of the lung cancer detection system, making it more sensitive and efficient. The performance outcomes indicate that the CanNS framework enhances detection accuracy, sensitivity, and specificity compared to previous approaches.
Ascertaining sustainability for affordable energy generation with non-renewable sources using computational intelligence algorithm
In India, an enhanced target for affordable energy, aimed at increasing sustainability across various classes of people, must be achieved in the coming years. As the growing population needs sustainable electricity in an affordable way, it is essential to curb the increasing demand and to increase possible generation on a regular basis. Hence, in the proposed method, the importance of non-renewable sources is analyzed using a computational intelligence algorithm, where the likelihood of energy availability is observed. To discover sustainability, clusters are considered according to different regions, providing connectivity at various points at reduced radiation. Therefore, with alternate use of non-renewable sources, it is possible to reduce the effect of fossil fuels; comparisons for such reductions are carried out across four scenarios. The comparison shows that, using the computational intelligence technique, current demand can be observed and reduced, with possible generation covering more than 80%, thus achieving affordable energy at reduced cost.
This study examines the importance of enterprise information systems that link several corporate organisations to share information about diverse products under high-security settings. The primary goal of the proposed strategy is to create a direct link between product demand and production to minimise the impact of rising costs. Such a connection cannot be established without suitable data reflecting both quantity and quality in each organisational unit. The suggested method is designed to deliver accurate data to authorised end users while preventing any data exposure to unauthorised users. Cryptographic keys are utilised to create a data control method, and the Blowfish algorithm is integrated with the projected system model to segregate data blocks for enterprise systems. Four scenarios are considered, and the results show that the integrated model can increase the number of authorisation units to 88%, compared to the 75% attained with the current approach.
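Blowfish is not available in Python's standard library, so the key-gated segregation of data blocks can be illustrated with HMAC tagging instead (an illustrative substitute for the paper's Blowfish-based scheme; block contents and keys are hypothetical):

```python
import hashlib
import hmac

def seal_blocks(data_blocks, org_key: bytes):
    """Tag each data block with an HMAC under the organisation's key, so
    only holders of the key can verify, and thus trust, a block."""
    return [(blk, hmac.new(org_key, blk, hashlib.sha256).hexdigest())
            for blk in data_blocks]

def authorised_read(sealed, org_key: bytes):
    """Return only the blocks whose tags verify under the supplied key;
    an unauthorised key yields nothing."""
    return [blk for blk, tag in sealed
            if hmac.compare_digest(
                tag, hmac.new(org_key, blk, hashlib.sha256).hexdigest())]
```

A cipher such as Blowfish would additionally encrypt the block contents; the sketch shows only the access-segregation side, where possession of the organisational key determines which blocks an end user can accept.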
Application of Improved Support Vector Machine for Pulmonary Syndrome Exposure with Computer Vision Measures
Background: Many medical applications lack a process for early diagnosis of pulmonary disease. Many people suffer because of the lack of early diagnosis, even after becoming aware of breathing difficulties in daily life. Identifying such hazardous diseases is therefore crucial, and the suggested solution combines computer vision and communication processing techniques. As computing technology advances, a more sophisticated mechanism is required for decision-making. Objective: The major objective of the proposed method is to use image processing to demonstrate computer vision-based experimentation for identifying lung illness. To characterize all the uncertainties present in nodule segments, an improved support vector machine is also integrated into the decision-making process. Methods: The suggested method incorporates an Improved Support Vector Machine (ISVM) with a clear correlation between various margins. Additionally, an image processing technique is introduced in which all affected sites are marked at high intensity to detect the presence of pulmonary syndrome. In contrast to other methods, the suggested method divides the image processing methodology into groups, making the loop generation process much simpler. Results: Five situations are considered to demonstrate the effectiveness of the suggested technique, and test results are compared with those from existing models. Conclusion: The proposed technique with ISVM produces 83 percent successful results.
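The ISVM itself is not specified in the abstract, but the underlying support-vector idea, a maximum-margin linear classifier trained on the hinge loss, can be sketched as follows (a plain linear SVM on toy 2-D points, not the proposed ISVM):

```python
import random

def train_linear_svm(samples, epochs=200, lr=0.01, lam=0.01, seed=0):
    """Linear SVM by stochastic sub-gradient descent on the regularized
    hinge loss max(0, 1 - y*(w.x + b)) + lam*||w||^2, labels y in {-1, +1}."""
    rng = random.Random(seed)
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        rng.shuffle(samples)
        for x, y in samples:
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:                  # inside the margin: push outward
                w = [wi + lr * (y * xi - 2 * lam * wi) for wi, xi in zip(w, x)]
                b += lr * y
            else:                           # correct side: only shrink weights
                w = [wi - lr * 2 * lam * wi for wi in w]
    return w, b

def classify(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
```

In the paper's setting, the feature vectors would come from the high-intensity nodule segments extracted by the image processing stage rather than raw coordinates.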
In modern healthcare, integrating Artificial Intelligence (AI) and the Internet of Medical Things (IoMT) is highly beneficial and has made it possible to effectively monitor disease using networks of interconnected sensors worn by individuals. The purpose of this work is to develop an AI-IoMT framework for identifying several chronic diseases from patients' medical records. To that end, the Deep Auto-Optimized Collaborative Learning (DACL) model, a brand-new AI-IoMT framework, has been developed for rapid diagnosis of chronic diseases like heart disease, diabetes, and stroke. A Deep Auto-Encoder Model (DAEM) is used in the proposed framework to formulate the imputed and preprocessed data by determining the fields of characteristics or information that are missing. To speed up classification training and testing, the Golden Flower Search (GFS) approach is then utilized to choose the best features from the imputed data. In addition, the Collaborative Bias Integrated GAN (ColBGaN) model has been created for precisely recognizing and classifying the types of chronic diseases from patients' medical records. The loss function is optimally estimated during classification using the Water Drop Optimization (WDO) technique, reducing the classifier's error rate. Using well-known benchmarking datasets and performance measures, the proposed DACL's effectiveness and efficiency in identifying diseases is evaluated and compared.
Containerization
Applications are developed and deployed on a specific stack of software, and virtualization creates an environment to run that stack. Virtualization requires installing an entire operating system and software stack to run the application; the environment setup time can exceed the execution time, and memory and CPU time are not effectively utilized. Containers address this drawback by avoiding the installation of unnecessary software services that are not required to run the application: a container holds only the required services and software. Container security is also an important aspect of utilizing applications with portability features, and different tools are designed for different classes of security issues.
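A minimal Dockerfile illustrates the point that a container carries only what the application needs, rather than a full virtualized OS stack (file names and base image are hypothetical):

```dockerfile
# Slim base image: only the runtime, no full OS software stack.
FROM python:3.12-slim
WORKDIR /app

# Install only the application's declared dependencies.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .

# Run as a non-root user to reduce the container's attack surface.
RUN useradd --create-home appuser
USER appuser

CMD ["python", "app.py"]
```

The image contains just the interpreter, the listed dependencies, and the application, so startup cost is the container launch rather than a full environment installation.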
Collective Diagnostic Prototypical in Internet of Medical Things for Depression Identification Using Deep Learning Algorithm
Background: The majority of wearable technology present in various patents for Internet of Medical Things (IoMT) health monitoring systems is introduced to recognize various bodily indicators. The enumerated patents indicate that monitored values are sent to a central server, where they are all treated by experts at the appropriate moment. A new patented technique, expanding the use of wireless devices, has shown that such communication technologies can recognize specific depression traits and mood swings. Objectives: The major objective of the proposed method is to analyze the disputes that arise in an individual's characteristics by observing the leveling periods identified from the processed image. In addition, the rate of data transfer in case of any dispute is maximized, so the recognition problem is solved at a minimized distance. Further, steady-state probability values are achieved at low delay, minimizing dropped packets in the monitored system using IoMT and LSTM. Methods: A balanced record with four distinct parameters, namely livelihood, self-reliance, correlation, and precision, is employed with the projected model on IoMT for depression identification. As a result, high data transfer rates and low distance separation are used to process the identification framework. Additionally, by combining an original matrix representation with the input feature set using LSTM, a novel framework with high efficiency is created. Results: To assess the results of IoMT using LSTM, four situations are split apart and their probability ratios are calculated. The results of each situation are then contrasted with the current methodology, and it is found that, when there is a low dropout ratio, depression in a person is quickly diagnosed. Conclusion: The comparison analysis demonstrates that the proposed method, compared to the current method, offers the best-compromised outcomes at roughly 64%.
Internet of things and cybersecurity mechanism for industrial automation systems
Cyberattacks on our country's infrastructure could be far easier to prevent if we had full visibility into our personal information or the information of our own business network communities. IT professionals and networked businesses face an endless number of cyberattacks. Machine-to-machine (M2M) communication, used to link computers and transport data between them, is a common use of the Internet of Things (IoT). However, cybersecurity threats travel over the same channels, so an attacker can access and follow the data. Manufacturing processes and efficiency can be improved by implementing the Industrial Internet of Things (IIoT) concept; to accomplish this, existing hierarchical models must be converted to a fully connected vertical model. Because IIoT is a novel approach, the ecosystem is vulnerable to cyberthreat vectors and to challenges with standardization and interoperability. New communication models and technologies, including 5G, TSN Ethernet, and self-driving networks, are required to accomplish the needed levels of data security in IIoT M2M communication. Malicious actors may take advantage of system flaws caused by the faulty implementation of security standards if no measures are in place to assess the risks and vulnerabilities. A cybersecurity project for Industry 4.0 is presently underway, and the findings in this report are based on that work. This research explains converged/hybrid cybersecurity standards and reviews best practices. A roadmap for identifying, aligning, and implementing the correct cybersecurity standards and tactics for protecting M2M communications in the IIoT is also provided.
An Appraisal over Intrusion Detection Systems in Cloud Computing Security Attacks
Cloud computing provides many groundbreaking advantages over native computing servers, such as improved capacity and decreased costs, but it also carries many security issues. In this paper, we examine feasible security attacks on cloud computing, including wrapping, browser malware-injection, and flooding attacks, as well as problems caused by accountability checking. We have also analyzed the honeypot attack and its procedural way of intruding into the system. Overall, this paper deals with the most common security breaches in cloud computing, and with the honeypot in particular, to analyze its intrusion path. Our main scope is to analyze cloud security overall and then take up a particular attack at a granular level. The honeypot is the attack taken into account, and its intrusion policies are analyzed. A specific honeypot algorithm is planned as a future extension of this project.
A novel IDS technique to detect DDoS and sniffers in smart grid
The smart grid has no single standard definition. Commonly, a smart grid is an incorporation of advanced technologies into the conventional electrical grid. It provides novel features, mainly including two-way communication and automatic self-healing capability. Like the Internet, the smart grid consists of many new technologies and pieces of equipment bound together. These technologies work with the electrical grid to respond digitally to our rapidly changing electricity demand. Despite its many advantages, it suffers greatly from fragile data security. A smart grid usually has a centralized control system, called SCADA, to monitor and maintain all data sources, and attackers tend to sneak into this centralized system through numerous types of attacks. Since the SCADA system has no fixed protocol, it can be fitted to any protocol required by the utility. In this paper, the proposed method provides two techniques: one to detect and remove sniffers from the network, and another to safeguard the SCADA system from DDoS attacks. Promiscuous-mode detection and the MD5 algorithm are used to find sniffers, and by analyzing TTL values, a DDoS attack is identified and isolated. The proposed technique is also compared with an existing real-time IDS tool to show its better bandwidth consumption.
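The TTL-analysis step for DDoS detection can be illustrated with a toy sketch. The premise of this example (an assumption for illustration, not the paper's exact algorithm) is that spoofed flood traffic often shows inconsistent TTL values per source, so a source whose TTLs drift beyond a small tolerance is flagged:

```python
def flag_ttl_anomalies(packets, tolerance=2):
    """Flag sources whose TTL varies more than `tolerance` hops from
    their first observed value; spoofed floods often show inconsistent
    TTLs. `packets` is an iterable of (src_ip, ttl) pairs (hypothetical
    capture format)."""
    baseline = {}
    suspects = set()
    for src, ttl in packets:
        if src not in baseline:
            baseline[src] = ttl          # first sighting sets the baseline
        elif abs(ttl - baseline[src]) > tolerance:
            suspects.add(src)            # TTL jumped: likely spoofed source
    return suspects

traffic = [("10.0.0.5", 64), ("10.0.0.5", 63),   # normal routing jitter
           ("10.0.0.9", 64), ("10.0.0.9", 38)]   # inconsistent -> suspect
suspects = flag_ttl_anomalies(traffic)
```

A production IDS would combine this with rate statistics and the promiscuous-mode sniffer check described above.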
Prediction of COVID-19 Wide Spread in India using Time Series Forecasting Techniques
The resources of some of the world's largest economies are strained by the massive infectivity of COVID-19. India is part of the global spread of COVID-19 caused by severe acute respiratory syndrome coronavirus 2. As of 15 July 2020, the Ministry of Health and Family Welfare had reported a total of 968,857 confirmed cases, 612,768 recoveries, and 24,914 deaths in the country. Given the rising number of cases, professionals working in health departments require forecasting methods to predict case numbers in the coming days. Owing to high uncertainty and a lack of crucial information, standard models have shown low accuracy for long-term forecasting. Among the several machine learning models investigated, time series forecasting tools such as Facebook's Prophet showed promising results. In this paper, we predict the number of confirmed, recovered, and death cases of COVID-19 in India 60 days ahead, and forecast the same quantities 30 days onwards. Based on the results reported here, and given the largely complex nature of the COVID-19 outbreak and the variation in its behavior, this study suggests machine learning as an effective modeling tool for the outbreak.
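The paper itself uses Facebook's Prophet; as a much simpler, hedged stand-in for trend extrapolation, a least-squares linear trend can be fitted and projected forward in plain Python:

```python
def linear_forecast(series, horizon):
    """Fit y = a + b*t by ordinary least squares and extrapolate
    `horizon` steps ahead. A toy stand-in for trend forecasting; the
    paper's actual tool (Prophet) also models seasonality and holidays."""
    n = len(series)
    t_mean = (n - 1) / 2.0
    y_mean = sum(series) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b = num / den                 # slope: cases gained per day
    a = y_mean - b * t_mean       # intercept
    return [a + b * (n + k) for k in range(horizon)]

# Perfectly linear toy "case count" series: the forecast continues the line.
history = [100, 110, 120, 130, 140]
forecast = linear_forecast(history, 3)   # -> [150.0, 160.0, 170.0]
```

Real epidemic curves are far from linear, which is exactly why the paper reaches for richer time series models.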
Resistance–capacitance optimizer: a physics-inspired population-based algorithm for numerical and industrial engineering computation problems
The primary objective of this study is to delve into the application and validation of the Resistance Capacitance Optimization Algorithm (RCOA)—a new, physics-inspired metaheuristic optimization algorithm. The RCOA, intriguingly inspired by the time response of a resistance–capacitance circuit to a sudden voltage fluctuation, has been earmarked for solving complex numerical and engineering design optimization problems. Uniquely, the RCOA operates without any control/tunable parameters. In the first phase of this study, we evaluated the RCOA's credibility and functionality by deploying it on a set of 23 benchmark test functions. This was followed by thoroughly examining its application in eight distinct constrained engineering design optimization scenarios. This methodical approach was undertaken to dissect and understand the algorithm's exploration and exploitation phases, leveraging standard benchmark functions as the yardstick. The principal findings underline the significant effectiveness of the RCOA, especially when contrasted against various state-of-the-art algorithms in the field. Beyond its apparent superiority, the RCOA was put through rigorous statistical non-parametric testing, further endorsing its reliability as an innovative tool for handling complex engineering design problems. The conclusion of this research underscores the RCOA's strong performance in terms of reliability and precision, particularly in tackling constrained engineering design optimization challenges. This statement, derived from the systematic study, strengthens RCOA's position as a potentially transformative tool in the mathematical optimization landscape. It also paves the way for further exploration and adaptation of physics-inspired algorithms in the broader realm of optimization problems.
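The abstract does not give the RCOA update equations, so the following is only a generic population-based minimizer evaluated on the sphere benchmark. It illustrates the explore-then-exploit pattern such parameter-light metaheuristics share, not the actual resistance-capacitance update rule:

```python
import random

def sphere(x):
    """Classic benchmark function: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def population_search(obj, dim=3, pop=20, iters=200, seed=1):
    """Generic population-based minimizer: each agent drifts toward the
    current best solution with a decaying random perturbation.
    Hedged illustration only; not the RCOA update rule."""
    rng = random.Random(seed)
    agents = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    best = min(agents, key=obj)
    for t in range(iters):
        step = 1.0 - t / iters                   # decaying exploration
        for a in agents:
            for d in range(dim):
                a[d] += 0.5 * (best[d] - a[d]) + step * rng.uniform(-0.5, 0.5)
        cand = min(agents, key=obj)
        if obj(cand) < obj(best):
            best = list(cand)
    return best, obj(best)

best, value = population_search(sphere)
```

The decaying perturbation plays the role the charging transient plays in the RCOA narrative: wide early exploration, tight late exploitation.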
Optimization of IoT circuit for flexible optical network system with high speed utilization
The term flexible optical network (FON) for high-speed utility with Internet of Things (IoT) assistance refers to a kind of network infrastructure that combines the advantages of FON with IoT technology to make it possible to provide high-speed and effective utility services. IoT applications are a feature of the modern era. In this study, we propose that an optical network be employed to create high-speed IoT assistance. Many other access network types are available, but FON is used here because it offers greater efficiency and lower cost than an Active Optical Network owing to the easy setup of its components, making it highly popular today. FON technologies are explained, and several ways of relating them to the IoT are provided, along with IoT use cases and requirements and viable solutions for high-speed utility. The performance analysis of FON covers various aspects: average broadband speed (2022: 88 Mbps), IoT data access rate (LTE: 20 kb/s, WDM: 39 kb/s, VDSL2: 32 kb/s), FON factors (2022: optical network equipment: 51, optical line terminal: 64, security: 76, network management: 89), FON types with transmission speed (GFON: 32 Mbps, XGFON: 52 Mbps, TWDM-FON: 67 Mbps, SMA: 73 Mbps, HSMA: 92 Mbps), and energy consumption (reach: 20, data rate: 18, power rate: 10, cost: 30), all of which are used to increase the efficiency of FON. If an IoT-over-FON architecture with different nodes is employed, power consumption may be reduced while all available capabilities are still used. We also review the latest developments in optical devices, optical switching, and Optical Network (ON) technologies related to high-speed networks. Finally, we wrap up the study by discussing how these technologies have improved network intelligence and enabled deterministic content delivery across FON's high-speed capabilities.
Integration of cloud with AI to predict crop diseases
Technology now reaches every domain. In the software industry, automobiles, education, sports, and cinema, technology serves as a backbone for solving problems quickly and effectively. It is even used in the medical field: in pandemic situations, online medication plays a crucial role. Technology can also be used in agriculture to identify crop diseases, which are a major problem for farmers, damage the environment to a great extent, and cause farmers huge losses. One contributing factor is the heavy use of pesticides, which are very toxic and dangerous. If diseases are predicted early, they can be removed or killed at the starting stage without much crop damage. Experts can determine a disease by looking at the crop, that is, by observing external symptoms, but farmers often have no connection with experts. Our project deals with overcoming this problem using concepts of artificial intelligence and cloud computing. The project goal is to predict crop disease: farmers can use it to predict disease at an early stage and get steps to remove it. We will develop an Android app and a website that take crop photos as input. Farmers upload images of the affected crop in the app, and experts observe the symptoms and predict the diseases; in this way the project interacts with experts to obtain the required solutions. In the absence of experts, an artificial intelligence model trained on the uploaded images and the experts' instructions predicts the output with greater accuracy. The cloud is used to store images uploaded by users. The AI model is trained on large datasets containing disease data and predicts the output, which is then validated by experts to evaluate its correctness.
Vehicular Ad hoc Networks (VANETs) facilitate inter-vehicle communication using their dedicated connection infrastructure. Numerous advantages and applications are associated with this technology, with road safety particularly noteworthy. As in most networks, ensuring the secure transport of information is crucial. The security of VANETs poses a significant challenge due to the various types of attacks that threaten the communication infrastructure of mobile vehicles. This research paper introduces a new security scheme known as the Soft Computing-based Secure Protocol for VANET Environment (SC-SPVE) method, which aims to tackle these security challenges. The SC-SPVE technique integrates an adaptive neuro-fuzzy inference system and particle swarm optimisation to identify different attacks in VANETs efficiently. In the Network Simulator NS2, the proposed SC-SPVE method yielded the following average outcomes: a throughput of 148.71 kilobits per second, a delay of 23.60 ms, a packet delivery ratio of 95.62%, a precision of 92.80%, an accuracy of 99.55%, a sensitivity of 98.25%, a specificity of 99.65%, and a detection time of 6.76 ms.
TasLA: An innovative Tasmanian and Lichtenberg optimized attention deep convolution based data fusion model for IoMT smart healthcare
The Internet of Medical Things (IoMT) has bolstered the smart healthcare industry by enabling quicker patient monitoring and disease diagnosis. However, problems remain that need to be resolved using Artificial Intelligence (AI) methods. The main goal of this work is to develop an IoMT-based data fusion system for a multi-sensor smart healthcare network. To do this, new optimization and deep learning approaches are used. In this research, a unique smart healthcare framework, Tasmanian and Lichtenberg Optimized Attention Deep Convolution (TasLA), is developed for IoMT systems. The system uses intelligent data fusion algorithms for collecting medical data and diagnosing disorders. Data pretreatment and normalization are carried out to provide a dataset with balanced attribute information. The attributes that aid classification are then selected using the recent Tasmanian Devil Optimization (TDO) approach. The Attention Deep Convolution Classification (ADCC) algorithm is used to classify the medical condition, improving classification precision and reducing false predictions. To optimally compute the loss function during prediction, the Lichtenberg Optimization (LO) technique is employed to enhance classification performance. The effectiveness and results of the proposed TasLA model are validated and contrasted using benchmark datasets such as Hungarian, Cleveland, Echocardiogram, and Z-Alizadeh.
Connotation of fuzzy logic system in Underwater communication systems for navy applications with data indulgence route
Previously, most applications designed to examine the effect of underwater conditions on naval operations followed a single fixed path for all types of mission operations, and the development and success of such processes were usually evaluated using score-type specifications. However, the effect of a naval operation can only be examined in real time by using a well-defined bi-conditional statement system that integrates the communication device for choosing an intelligent route. Therefore, a fuzzy logic system is implemented for observing the effect of underwater applications using a defined system model. Unlike existing models, a sonar-type model with a data handling technique is not used; instead, a secured data transfer approach is created using constraint statements. To prove the effectiveness of the developed fuzzy logic system, five scenarios are described with performance analysis, where the outcomes show that the projected method provides about 67% higher effectiveness than the existing method.
Physical Stint Virtual Representation of Biomedical Signals With Wireless Sensors Using Swarm Intelligence Optimization Algorithm
Many people in society face healthcare problems, and diseases in the body often go unidentified even with sensing technologies present. A major reason for such failures in identification is the absence of virtual technologies in the market: most healthcare applications are designed to provide information only about sensed values and fail to offer a virtual representation of those values. Therefore, this article provides an integration platform that connects sensing devices with virtual reality/augmented reality (VR/AR) techniques, applied in real time for detecting the presence of infections inside the body. In addition, a swarm intelligence algorithm with a modified fitness function, termed fruit fly optimization (FFO), is implemented in the recognition procedure. The FFO process provides low-level perception, thus enhancing the output for smooth operation. To examine real-time conditions, the projected AR/VR procedure is applied with biomedical sensors across three different case studies. From the comparative numerical results, it is evident that the proposed method provides better results, with 65% full-scale representation and less than 0.5 dB of distortion at 0.3% tuning force.
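Fruit fly optimization proceeds by scattering candidate "flies" around a swarm centre and relocating the swarm to the best-smelling (lowest-objective) candidate each generation. The simplified sketch below (the paper's modified fitness function is not reproduced here) minimizes a toy 2-D surface:

```python
import random

def ffo_minimize(obj, iters=100, swarm=15, seed=7):
    """Simplified fruit fly optimization sketch: flies scatter uniformly
    around the swarm centre; the swarm relocates to the best fly whenever
    it improves on the best value seen so far. Hedged illustration only."""
    rng = random.Random(seed)
    cx, cy = rng.uniform(-10, 10), rng.uniform(-10, 10)  # swarm centre
    best_val = obj(cx, cy)
    for _ in range(iters):
        flies = [(cx + rng.uniform(-1, 1), cy + rng.uniform(-1, 1))
                 for _ in range(swarm)]
        bx, by = min(flies, key=lambda p: obj(*p))       # best "smell"
        if obj(bx, by) < best_val:
            cx, cy, best_val = bx, by, obj(bx, by)       # relocate swarm
    return best_val

# Minimize a toy quadratic "distortion" surface with minimum 0 at (2, -1).
val = ffo_minimize(lambda x, y: (x - 2) ** 2 + (y + 1) ** 2)
```

In the paper's setting the objective would score how well a sensed signal matches an infection signature rather than a synthetic quadratic.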
Web User Profile Generation and Discovery Analysis using LSTM Architecture
In today's technology-driven world, a user profile is a virtual representation of each user, containing personal, interest, and preference data. These profiles are the result of a user profiling process and are essential to personalizing services. As the amount of information available on the Internet and the number of distinct users increase, customization becomes a priority. Because of this abundance of information, recommender systems that aim to provide relevant information to users are becoming increasingly important and popular. Various methods, methodologies, and algorithms have been proposed in the literature for the user analysis process, and creating automated user profiles remains a major challenge in building adaptive, customized applications. User profiling matters for both information and service customization. In the proposed method, the user's topic preferences and the emotional features of their text are encoded as attention information derived from the original data, and combined in various formats with Long Short-Term Memory (LSTM) models to describe and predict attributes of social network users. Finally, experimental results across different groups show that the proposed attention-based LSTM model achieves better results than currently common methods in recognizing user personality traits, and that the model generalizes well.
Cervical Cancer Diagnosis Using Intelligent Living Behavior of Artificial Jellyfish Optimized With Artificial Neural Network
Cervical cancer affects nearly 4% of women across the globe and leads to mortality if not treated at an early stage. A few decades ago, the mortality rate was much higher than present statistics show. This improvement has come about because most women are now aware of the disease and undergo regular health examinations, mainly for cervical cancer screening. However, only an accurate diagnosis can guide further treatment. Many works have been carried out on accurate diagnosis, but all have some limitations in prediction accuracy. In this work, an efficient algorithm is proposed for the accurate diagnosis of cervical cancer. A meta-heuristic called the artificial jellyfish search optimizer (JS) algorithm is combined with an artificial neural network (ANN) to tackle this problem. The proposed algorithm, called JellyfishSearch_ANN, is employed to classify the cervical cancer dataset with four types of targets based on the examination. JellyfishSearch_ANN provides outstanding results among the classifiers taken for comparison, with classification accuracy above 98.87% for all targets.
Environmental Fault Diagnosis of Solar Panels Using Solar Thermal Images in Multiple Convolutional Neural Networks
Every year, each solar panel suffers an efficiency loss of 0.5% to 1%. This degradation arises from environmental and electrical faults, and a timely, accurate diagnosis of environmental faults reduces the damage they cause to the panel. In recent years, deep learning, and convolutional neural networks in particular, has achieved remarkable results in many applications. This work focuses on fine-tuning pretrained convolutional neural network models, especially AlexNet, GoogleNet, and SqueezeNet. Based on performance metrics, SqueezeNet is selected for training on thermal images of solar panels and for classifying environmental faults. The results show that SqueezeNet achieves a testing accuracy of 99.74% and an F1 score of 0.9818, making the model successful in identifying environmental faults in solar panels and helping users protect the panels.
Integrating Industrial Appliances for Security Enhancement in Data Point Using SCADA Networks with Learning Algorithm
The process of ensuring automatic operation of industrial appliances using both supervision and control techniques is a challenging task. Therefore, this article focuses on implementing Supervisory Control and Data Acquisition (SCADA) for controlling all industrial appliances. The implementation case is designed using an analytical framework that examines the primary energy sources in the initial state, thereby supporting a smart network. The designed mathematical model is integrated with a learning technique that allocates resources in proper quantities. Further, the complex manual tuning of individual appliances is avoided in the projected method, as the input variables are driven directly at a reduced loss state. In addition, the data processing state of individual appliances is handled by a central data controller where all parametric values are stored. If any errors are observed, the SCADA network fixes them automatically, reducing end-to-end delays across all appliances. To validate the effectiveness of the proposed method, five scenarios are examined and simulated, and the outcomes prove that the SCADA network with learning models provides optimal results, at an average of 84 percent, compared to existing models without a learning algorithm.
Classification of Normal and Anomalous Activities in a Network by Cascading C4.5 Decision Tree and K‐Means Clustering Algorithms
Information cascades are a phenomenon in which individuals adopt a new action or idea because of the influence of others; as the behavior is transmitted across a social network, broad adoption can occur. We consider such cascades in the context of recommendations and information dissemination on the blogosphere. Intrusion into a network environment poses a severe security risk. A network intrusion detection system is designed to detect attacks or malicious activity with a high detection rate while keeping a low false alarm rate; an anomaly detection system (ADS) monitors system behavior and flags significant anomalies. In this research, we present an anomaly identification method, "K-means + C4.5," which cascades k-means clustering with the C4.5 decision tree method to classify anomalous and typical computer network operations. K-means first partitions the training data into K clusters by Euclidean-distance similarity. In each cluster, we build decision trees with the C4.5 algorithm, indicating dense regions of typical or abnormal cases; the decision tree learns the subgroups within each cluster and expresses the decision boundaries for that cluster. The per-cluster tree results are then combined to reach a final conclusion. The K-means + C4.5 model proves slightly superior at predicting anomalous network activity, achieving a 99.2% true positive rate.
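The cascade can be sketched minimally: k-means first partitions the records, then a per-cluster classifier labels each region. In this illustration a majority-vote rule stands in for the C4.5 stage, and the one-dimensional "connection rate" feature is a made-up example:

```python
def kmeans_1d(values, k=2, iters=20):
    """Tiny 1-D k-means: returns final centroids and cluster assignments.
    Centroids start at the min and max of the data."""
    cents = [min(values), max(values)][:k]
    assign = [0] * len(values)
    for _ in range(iters):
        assign = [min(range(k), key=lambda c: abs(v - cents[c]))
                  for v in values]
        for c in range(k):
            members = [v for v, a in zip(values, assign) if a == c]
            if members:
                cents[c] = sum(members) / len(members)
    return cents, assign

def majority_label(labels):
    """Stand-in for the per-cluster C4.5 stage: predict the majority class."""
    return max(set(labels), key=labels.count)

# Toy connection rates: low values are normal traffic, high values anomalous.
rates  = [1, 2, 2, 3, 40, 42, 45]
labels = ["normal"] * 4 + ["anomaly"] * 3
cents, assign = kmeans_1d(rates)
per_cluster = {c: majority_label([l for l, a in zip(labels, assign) if a == c])
               for c in set(assign)}
```

A real deployment would cluster multi-dimensional flow features and grow an actual C4.5 tree inside each cluster instead of taking a majority vote.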
Improvement of the Resilience of a Microgrid Using Fragility Modeling and Simulation
The modern microgrid is designed to withstand disruptive events that have a high probability of occurrence but a low impact on the system. This improves reliability but does not take into consideration disruptive events that have a low probability of occurrence but a large impact on the system, such as extreme weather or natural disasters. Redesigning a microgrid to withstand low-probability, high-impact events is very costly and is not a feasible solution for existing microgrids. This paper proposes a method to improve the resilience of an existing microgrid so that it can quickly recover from low-probability, high-impact events. The method combines Monte Carlo simulation with prioritization of the microgrid's load. Its efficacy is examined by modeling microgrids using a fragility model. Using the proposed novel resilience index, the resilience of the IEEE 5-Bus and IEEE 14-Bus systems and the effect of load shedding on the resilience of the microgrid are analyzed and presented, and the effect on smaller and larger grids is examined. The novel resilience index quantifies the resilience improvement of the proposed method compared to other methods available in the literature.
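The Monte Carlo plus fragility-model step can be sketched with a hypothetical feeder model: each component fails with its fragility-curve probability at the event intensity, and the mean served-load fraction serves as a simple resilience proxy. The feeders and probabilities below are invented for illustration, not the paper's IEEE bus cases:

```python
import random

def served_load_fraction(fragility, loads, trials=10000, seed=3):
    """Monte Carlo resilience sketch: line i fails with probability
    fragility[i] (its fragility-curve value at the event intensity);
    load behind a failed line is shed. Returns the mean fraction of
    total load served across trials."""
    rng = random.Random(seed)
    total = sum(loads)
    acc = 0.0
    for _ in range(trials):
        # A line survives when the random draw exceeds its failure probability.
        served = sum(l for f, l in zip(fragility, loads) if rng.random() >= f)
        acc += served / total
    return acc / trials

# Three feeders with failure probabilities 10%, 50%, 90% at storm intensity.
frac = served_load_fraction([0.1, 0.5, 0.9], [30.0, 50.0, 20.0])
```

Load prioritization, as in the paper, would weight critical loads more heavily instead of treating every megawatt equally.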
Preserving Resource Handiness and Exigency-Based Migration Algorithm (PRH-EM) for Energy Efficient Federated Cloud Management Systems
On-demand computing capacity and efficient service delivery are the major benefits of cloud systems. Limited resource availability in a single data center forces the extraction of additional resources from a group of cloud providers. The federation scheme dynamically increases resource availability in response to service requests, but the resulting growth in resource count leads to excessive energy consumption, higher cost, and carbon emissions. Hence, reducing resources is a major requirement for constructing optimized cloud source models that maximize profit without neglecting the energy mix and CO2. This paper proposes a novel migration method to reduce carbon emissions and energy consumption. The first stage of the proposed work categorizes data centers by MIPS and cost prior to job allocation, offering scalable and efficient services and resources to the cloud user. The job with the maximum size is then allotted to a VM only if its capacity requirement is within the cumulative capacity of the data centers. A novel migration scheme based on overutilization and underutilization levels keeps services available to the user even if a particular VM fails. The proposed work efficiently maintains resource availability and maximizes the profit of the cloud providers in the federated cloud environment. Comparative analysis of the proposed algorithm with existing methods regarding response time, accuracy, profit, carbon emission, and energy consumption confirms its effectiveness in a confederated cloud environment.
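The admission rule described above (a job is placed only where capacity allows; otherwise the federation must supply extra resources) can be sketched with a hypothetical greedy allocator. The VM names and MIPS figures are invented for illustration:

```python
def allocate(jobs, vms):
    """Greedy admission sketch: place each job on the first VM whose
    remaining capacity covers it; otherwise reject it (in a federation,
    a rejected job would trigger borrowing from partner providers).
    `vms` maps VM name -> free capacity in MIPS (hypothetical units)."""
    placement, rejected = {}, []
    for name, size in jobs:
        for vm, free in vms.items():
            if size <= free:
                placement[name] = vm
                vms[vm] = free - size      # consume the capacity
                break
        else:
            rejected.append(name)          # no VM could admit the job
    return placement, rejected

jobs = [("j1", 40), ("j2", 70), ("j3", 30)]
vms = {"vm1": 60, "vm2": 80}
placement, rejected = allocate(jobs, vms)
```

The paper's migration stage would then move load off overutilized VMs and consolidate underutilized ones to cut energy use; this sketch covers only the initial admission test.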
Probabilistic Framework Allocation on Underwater Vehicular Systems Using Hydrophone Sensor Networks
This article emphasizes the importance of constructing an underwater vehicle monitoring system to solve various issues related to deep-sea exploration. Rather than implementing conventional methods, a new underwater vehicle is introduced that acts as a sensing device and monitors ambient noise in the system. The fundamentals of creating underwater vehicles are taken from conventional systems, and new formulations are generated. This innovative sensing device operates on energy produced by solar cells, functioning for short periods under water where low-parametric units are installed. In addition, the energy consumed to operate a particular unit is much lower, which results in high reliability achieved using a probabilistic path-finding algorithm. Further, two different application segments are solved using the proposed formulations, including monitoring ocean depth. To validate the efficiency of the proposed method, comparisons are made with existing methods in terms of navigation output units, solar cell decomposition rate, reliability rate, and directivity, where the proposed method proves more efficient by an average of 64%.
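The probabilistic path-finding idea can be illustrated by choosing the route whose link reliabilities multiply to the highest end-to-end success probability. The exhaustive search below over a hypothetical three-node acoustic relay graph is a toy stand-in for the paper's algorithm:

```python
import itertools

def best_path(links, src, dst, nodes):
    """Exhaustive max-reliability path search over a small graph: the
    reliability of a path is the product of its link reliabilities.
    Suitable only for tiny graphs; a toy stand-in for the paper's
    probabilistic path-finding algorithm."""
    best, best_rel = None, 0.0
    others = [n for n in nodes if n not in (src, dst)]
    for r in range(len(others) + 1):
        for mid in itertools.permutations(others, r):
            path = (src,) + mid + (dst,)
            rel = 1.0
            for a, b in zip(path, path[1:]):
                # Links are undirected; missing links contribute 0 reliability.
                rel *= links.get((a, b), links.get((b, a), 0.0))
            if rel > best_rel:
                best, best_rel = path, rel
    return best, best_rel

# Hypothetical acoustic links between hydrophone relay nodes.
links = {("A", "B"): 0.9, ("B", "C"): 0.9, ("A", "C"): 0.7}
path, rel = best_path(links, "A", "C", ["A", "B", "C"])
```

Here the two-hop relay (0.9 × 0.9 = 0.81) beats the direct but noisier link (0.7), which is exactly the trade-off a reliability-aware router must weigh.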
Unmanned aerial vehicles (UAVs) have become a promising enabler for the next generation of wireless networks, given the tremendous growth in electronics and communications. Applications of UAV communication include coverage extension for transmission networks after disasters, Internet of Things (IoT) devices, and dispatching distress messages from devices positioned within a coverage hole to the emergency centre. However, there are problems in enhancing UAV clustering and scene classification using deep learning approaches. This article presents a new White Shark Optimizer with Optimal Deep Learning based Effective Unmanned Aerial Vehicles Communication and Scene Classification (WSOODL-UAVCSC) technique. UAV clustering and scene categorization present many deep learning challenges in disaster management: scene-understanding complexity, data variability and abundance, visual feature extraction, nonlinear and high-dimensional data, adaptability and generalization, real-time decision making, UAV clustering optimization, and sparse and incomplete data. The need to handle complex, high-dimensional data, adapt to changing environments, and make quick, correct decisions in critical situations drives the use of deep learning in UAV clustering and scene categorization. The purpose of the WSOODL-UAVCSC technique is to cluster the UAVs for effective communication and scene classification. The WSO algorithm is utilized to optimize the UAV clustering process, enabling effective communication and interaction in the network. With dynamic adjustment of the clustering, the WSO algorithm improves the performance and robustness of the UAV system. For the scene classification process, the WSOODL-UAVCSC technique involves capsule network (CapsNet) feature extraction, marine predators algorithm (MPA) based hyperparameter tuning, and echo state network (ESN) classification.
A wide-ranging simulation analysis was conducted to validate the enriched performance of the WSOODL-UAVCSC approach. Extensive result analysis pointed out the enhanced performance of the WSOODL-UAVCSC method over other existing techniques. The WSOODL-UAVCSC method achieved an accuracy of 99.12%, precision of 97.45%, recall of 98.90%, and F1-score of 98.10% when compared to other existing techniques.
The Internet of Things (IoT) is extensively used in modern life, such as in smart homes and intelligent transportation. However, present security measures cannot fully protect the IoT due to its vulnerability to malicious assaults. As a security tool, intrusion detection can protect IoT devices from the most harmful attacks. Nevertheless, the detection time and accuracy of conventional intrusion detection methods need improvement. The main contribution of this paper is to develop a simple and intelligent security framework for protecting the IoT from cyber-attacks. For this purpose, a combination of Decisive Red Fox (DRF) Optimization and Descriptive Back Propagated Radial Basis Function (DBRF) classification is developed in the proposed work. The novelty of this work is that a recently developed DRF optimization methodology, incorporated with a machine learning algorithm, is utilized to maximize the security level of IoT systems. First, data preprocessing and normalization operations are performed to generate a balanced IoT dataset that improves the detection accuracy of classification. Then, the DRF optimization algorithm is applied to optimally tune the features required for accurate intrusion detection and classification; it also increases the training speed and reduces the error rate of the classifier. Moreover, the DBRF classification model is deployed to categorize normal and attacking data flows using the optimized features. The proposed DRF-DBRF security model's performance is validated and tested using five different, popular IoT benchmarking datasets. Finally, the results are compared with previous anomaly detection approaches using various evaluation parameters.
Artificial intelligence (AI) can be used in a variety of fields and has the potential to alter how we currently view farming. Because of its emphasis on effectiveness and usability, AI has had a larger impact on agriculture than on any other industry. We highlight automation-supporting technologies such as AI, machine learning, and Long-Range (LoRa) communication, which provide data integrity and protection. After a comprehensive investigation of numerous designs, we also offer a structure for smart farming that depends on where data processing takes place. As part of our future study, we divide the unresolved difficulties in smart agriculture into two categories: networking issues and technology issues. AI and machine learning are examples of technologies, whereas the Moderate Resolution Imaging Spectroradiometer satellite and LoRa handle all network-related tasks. The goal of the research is to deploy a network of sensors throughout agricultural fields to gather real-time information on a variety of environmental factors, including temperature, humidity, soil moisture, and nutrient levels. The integration of these sensors with Internet of Things technologies makes seamless data transmission and communication possible. The gathered data is examined using AI techniques and algorithms; because the AI models are trained to spot patterns, correlations, and anomalies in the data, the technology can offer practical insights and suggestions for improving agricultural practices. We also focus on indoor farming by supplying ultraviolet radiation and artificial lighting in accordance with plant growth. When a pest attack is detected using AI and LoRa, even in an area with poor or no network coverage, the farmer's mobile is notified anywhere in the world. The irrigation system is tested with various plants at various humidity and temperature levels, in both dry and typical situations. Soil moisture sensors are used to maintain the water content in those specific regions.
Integrated Probabilistic Relevancy Classification (PRC) Scheme for Intrusion Detection in SCADA Network
Detecting and identifying intrusions in a network is a challenging research area in the network security domain, and intrusion detection has long played an essential role in computer network security. An Intrusion Detection System (IDS) is mainly used to detect unauthorized access to a computer system or network, and it is capable of detecting all types of malicious and harmful attacks. The drawbacks of existing IDSs are that they can detect only known attacks and that they produce a large number of false alarms due to the unpredictable behavior of users and networks; they also require extensive training sets to characterize the normal behavior of the nodes. To overcome these issues, an integration of the Hidden Markov Model (HMM) and the Relevance Vector Machine (RVM), namely Probabilistic Relevance Classification (PRC), is proposed to detect intrusions in a Supervisory Control and Data Acquisition (SCADA) network. Here, the power system attack dataset is used to detect attacks in a SCADA network. In the preprocessing stage, the given data is segregated into relays R1, R2, R3, and R4; each relay contains the date, timestamp, control panel log report, relay log report, snort log report, marker, fault location, and load condition information. Then, the Boyer-Moore (BM) technique is employed to perform the string matching operation. After that, the PRC technique is implemented to classify each attack as known or unknown. The novelty of this paper is that it manually trains the data and features for unknown attacks. The main intention of this work is to reduce the feature set and the size of the database, and to increase the detection rate. The experimental results evaluate the performance in terms of False Acceptance Rate (FAR), False Rejection Rate (FRR), Genuine Acceptance Rate (GAR), sensitivity, specificity, accuracy, error rate, and recall.
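The string-matching stage can be sketched with the Horspool simplification of Boyer-Moore, which keeps the bad-character shift rule that gives the family its speed; this is an illustrative stand-in, not the paper's implementation.

```python
def horspool_find(text, pattern):
    """Return the index of the first occurrence of pattern in text,
    or -1 if absent (Boyer-Moore-Horspool variant)."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    if m > n:
        return -1
    # Bad-character table: shift distance keyed by character,
    # measured from each character's position to the pattern end.
    shift = {pattern[i]: m - 1 - i for i in range(m - 1)}
    i = 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            return i
        # Skip ahead based on the character aligned with the pattern end.
        i += shift.get(text[i + m - 1], m)
    return -1

print(horspool_find("relay log: snort alert", "snort"))  # → 11
```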
Mining of intrusion attack in SCADA network using clustering and genetically seeded flora‐based optimal classification algorithm
Applications such as remote communication and control systems form a critically integrated arrangement, and the control of these networks is carried out by supervisory control and data acquisition (SCADA) systems. This study discusses the attack prediction and classification process using an enhanced machine learning model. The attack types are classified by the optimal selection of features extracted from the sensor data: the features are labelled, and the clusters between the matrices are extracted. These clusters form the initial stage of attack identification, which prevents mismatched results. The clustering of the data is performed by the mean-shift clustering algorithm. From the clustered data, the features that are irrelevant to the classification process are identified and suppressed using the genetically seeded flora optimisation algorithm, in which the flora seeds are selected genetically to pick the best features. Then, from the optimally selected clustered data, the relevancy vector is predicted and the attack types are classified. The classification is performed by the Boltzmann machine learning algorithm. The classified results of the proposed method on a test SCADA dataset are analysed, and the performance metrics are evaluated and compared with state-of-the-art methods.
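The mean-shift step can be sketched for scalar sensor readings with a flat kernel: each point is repeatedly moved to the mean of its neighbours until the modes stabilise. A toy, pure-Python illustration of the idea, not the authors' implementation.

```python
def mean_shift_1d(points, bandwidth, iters=50):
    """Flat-kernel mean shift on scalars: shift each point to the mean
    of its neighbours within `bandwidth`, then merge converged modes."""
    modes = list(points)
    for _ in range(iters):
        modes = [
            sum(p for p in points if abs(p - m) <= bandwidth)
            / max(1, sum(1 for p in points if abs(p - m) <= bandwidth))
            for m in modes
        ]
    # Merge modes that converged to (almost) the same location.
    centers = []
    for m in sorted(modes):
        if not centers or m - centers[-1] > bandwidth / 2:
            centers.append(m)
    return centers

readings = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]
print(mean_shift_1d(readings, bandwidth=1.0))  # two cluster centres
```

Unlike k-means, the number of clusters is not fixed in advance; it emerges from the bandwidth, which is why mean shift suits traffic data with an unknown number of behaviour modes.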
Data Security in Cloud Computing Using ABE-Based Access Control
Business organizations and individual users use cloud storage for storing their data and files. Cloud storage is managed by a cloud service provider (CSP), a third party to the data owners, and it holds users' confidential data. After storing data in the cloud, the owner no longer has control over it, and the owner cannot fully trust the CSP because of the possibility of a malicious administrator. Different schemes have been proposed on this basis. Security is a major concern for cloud-stored data, and the CSP has to give the data owner confidence in the security of the stored data. In general, security for data and applications is provided through authentication and authorization. Security through authentication is provided by distributing user names and passwords to data users; however, an organizational user is not allowed to access all of the organization's data. Authorization for accessing the data is provided by access control models. Conventional models are not sufficient for the CSP, so dynamic models using attribute-based encryption (ABE) have been proposed; earlier access control models cannot be used because of their multiple disadvantages. This chapter discusses a dynamic access control model named RA-HASBE. The model is shown to be scalable and flexible, owing to its sub-domain hierarchy, and dynamic, since it permits a user to access data after risk evaluation by a risk engine.
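Authorization by attributes can be illustrated with a toy policy-tree check. Note this sketches only the access decision, not the cryptographic enforcement that real attribute-based encryption provides; the policy and attribute names are hypothetical.

```python
def satisfies(policy, attrs):
    """Evaluate an AND/OR attribute policy tree against a user's attribute set.
    A policy node is ("ATTR", name) or (op, [child policies]) for op in AND/OR."""
    op, clauses = policy
    if op == "ATTR":
        return clauses in attrs
    results = [satisfies(c, attrs) for c in clauses]
    return all(results) if op == "AND" else any(results)

# "finance-dept AND (manager OR auditor)" -- an invented example policy.
policy = ("AND", [("ATTR", "finance-dept"),
                  ("OR", [("ATTR", "manager"), ("ATTR", "auditor")])])
print(satisfies(policy, {"finance-dept", "auditor"}))  # True
print(satisfies(policy, {"finance-dept"}))             # False
```

In ABE, the same policy tree is embedded in the ciphertext (or key), so decryption itself fails unless the attributes satisfy it; there is no trusted reference monitor making the check.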
Impact of Big Data Analysis on Nanosensors for Applied Sciences Using Neural Networks
In current-generation wireless systems, there is a strong requirement to integrate big data, which can predict the market trends of all application systems. Therefore, the proposed method emphasizes the integration of nanosensors with big data analysis for use in healthcare applications. Safety precautions are also considered when the nanosensor is integrated, and the depth and reflection of signals are observed over different time samples. In addition, to analyze the effect of nanosensors, six fundamental scenarios that have a strong impact on real-time applications are considered. Moreover, to prove the adeptness of the proposed method, results are provided for both online and offline analyses, investigating error measurement, sensitivity, and permeability parameters. With nanosensors introduced, the efficiency of the projected technique is further increased by implementing a media access control (MAC) protocol with a recurrent neural network (RNN). The simulation results show that the proposed method is more effective than existing methods by an average of 67%.
An Enriched RPCO-BCNN Mechanism for Attack Detection and Classification in SCADA Systems
Providing security for Supervisory Control and Data Acquisition (SCADA) systems is one of the most demanding and crucial tasks today, owing to the many types of attacks on the network. For this purpose, different attack detection and classification methodologies have been developed in previous works, but they are limited by issues such as high design complexity, misclassification, increased error rates, and reduced detection efficiency. To solve these issues, this paper aims to develop advanced machine learning models for improving SCADA security. The work comprises the stages of preprocessing, clustering, feature selection, and classification. First, the Markov Chain Clustering (MCC) model is implemented to cluster the network data by normalizing the feature values. Then, the Rapid Probabilistic Correlated Optimization (RPCO) mechanism is employed to select the optimal features by computing the matching score and likelihood of particles. Finally, the Block Correlated Neural Network (BCNN) technique classifies the predicted label, where the relevancy score is computed using a kernel function on the feature points. During experimentation, different performance indicators are used to validate the results of the proposed attack detection mechanism, and the obtained results are compared with existing mechanisms to prove the superiority of the proposed RPCO-BCNN attack detection system.
A machine learning algorithm for classification of mental tasks
In this article, a contemporary approach to assessing the effect of mental tasks on the cognitive faculties of humans is appraised using two different techniques: the discrete wavelet transform (DWT) and the support vector machine (SVM). The proposed approach is applied to an electroencephalogram (EEG) database acquired in real time from CARE Hospital, Nagpur; additional data is acquired from a brain-computer interface (BCI). In the working model, signals from the database are decomposed into different frequency sub-bands using the DWT. Updated statistical features are then obtained from the different frequency sub-bands; this representation over the wavelet coefficients reduces the dimensionality of the data. The projected method then uses the SVM to segregate left- and right-hand movement. After segregation of the EEG signals, accuracies of 92% for the BCI competition paradigm III and 97.89% for the B-Alert machine are achieved.
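The sub-band decomposition can be sketched with a one-level Haar DWT plus simple per-band statistics of the kind typically fed to an SVM. This is a minimal illustration of the feature-extraction idea, assuming Haar as the mother wavelet; the study's actual wavelet family and feature set may differ.

```python
def haar_dwt(signal):
    """One level of the Haar DWT: (approximation, detail) coefficients,
    computed from non-overlapping pairs of samples."""
    approx = [(signal[i] + signal[i + 1]) / 2 ** 0.5
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 ** 0.5
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def band_features(coeffs):
    """Mean and energy of a sub-band: simple statistics for a classifier."""
    mean = sum(coeffs) / len(coeffs)
    energy = sum(c * c for c in coeffs)
    return mean, energy

sig = [1.0, 3.0, 2.0, 2.0, 5.0, 1.0, 0.0, 4.0]   # toy 8-sample epoch
approx, detail = haar_dwt(sig)
print(band_features(detail))   # (mean, energy) of the detail band
```

Applying `haar_dwt` recursively to the approximation band yields the deeper sub-bands (delta, theta, alpha, beta) that EEG pipelines usually analyse.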
Mechanism of Internet of Things (IoT) Integrated with Radio Frequency Identification (RFID) Technology for Healthcare System
Radio frequency identification (RFID) technology has already demonstrated its usefulness. RFID is used in many industries for different applications, for example, equipment tracking, personal and vehicle access panels, logistics, baggage handling, and safety items in department stores. The main benefits of RFID are optimized resources, quality customer service, improved accuracy, and more efficient business and healthcare procedures. In addition, RFID can help to recognize appropriate information and advance the probability of objects for certain functions. Nevertheless, RFID components need to be studied for use in healthcare. Antennas, tags, and readers are the main components of RFID, and studying these elements provides an understanding of their usage and integration in healthcare environments. Patient safety is now a global public health concern, particularly among older people, who need integrated, technology-enabled physiological health monitoring systems to track and manage medical needs. This paper proposes using the Internet of Things (IoT) and RFID tags as an effective healthcare monitoring system. In this method, we utilize dual-band RFID protocols that are useful for identifying individual persons and for monitoring body information at high frequency. The patient's physiological data are monitored and collected by sensors, and the patient is recognized using an RFID tag. The IoT-based RFID healthcare system provides the elderly and their caregivers with physiological information. A further aim is to secure patient health records using a signing algorithm based on the hyperelliptic curve (HEC) and to provide the physician with access to patients' health information; the confidentiality of medical records of variable length is also ensured. The evaluation compares the proposed algorithm across different genus curves for optimal healthcare.
Machine Learning Empowered Accurate CSI Prediction for Large-Scale 5G Networks
Wireless networks rely on channel estimation to ensure their performance, and the computational complexity and dependability of fifth-generation telecommunication networks have significantly improved with supervised learning. In this paper, we develop a channel estimation model based on a machine learning approach, using multipath channel simulations to estimate channel state information (CSI) over arbitrary transceiver antennas. A simulation is conducted to test the efficacy of the model against various machine learning channel estimation models. The simulation results show that the proposed model obtains higher channel estimation quality than the other methods, and it also records the lowest bit error rate among them. The proposed method further achieves a lower mismatch rate than other methods across Doppler frequencies during channel estimation, where existing methods exhibit a higher mismatch rate.
Detecting Impersonators in Examination Halls Using AI
Detecting impersonators in examination halls is very important for conducting examinations fairly. Recently, there have been occasions when impersonators took tests instead of the intended person. Overcoming this issue requires an efficient method with little manpower, and advances in machine learning and AI make this possible. In this project, we develop an AI system in which images of students are saved and a model is built using a transfer learning process to obtain accurate results. If the student is an enrolled one, the system shows the student's hall ticket number and name; otherwise, an 'unknown' tag is displayed.
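Once a transfer-learned network turns a face image into an embedding vector, the matching step reduces to a nearest-gallery lookup with a similarity threshold, which is the part sketched below. The embeddings, IDs, and threshold are made up for illustration; they are not the project's trained model.

```python
def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def identify(embedding, gallery, threshold=0.9):
    """Return (hall_ticket, name) of the closest enrolled student,
    or 'unknown' if no match clears the similarity threshold."""
    best_id, best_score = None, -1.0
    for student_id, (name, ref) in gallery.items():
        score = cosine(embedding, ref)
        if score > best_score:
            best_id, best_score = student_id, score
    if best_score < threshold:
        return "unknown"
    return best_id, gallery[best_id][0]

# Toy 3-dimensional "embeddings" standing in for real network outputs.
gallery = {"HT101": ("Asha", [0.9, 0.1, 0.0]),
           "HT102": ("Ravi", [0.0, 1.0, 0.2])}
print(identify([0.88, 0.12, 0.01], gallery))  # close to Asha's embedding
print(identify([0.0, 0.0, 1.0], gallery))     # matches nobody → 'unknown'
```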
Biomedical Signals for Healthcare Using Hadoop Infrastructure with Artificial Intelligence and Fuzzy Logic Interpretation
In all developing countries, the application of biomedical signals has been growing, and there is potential interest in applying them to healthcare management systems. However, the existing infrastructure does not provide high-end support for transferring the signals over a communication medium, because biomedical signals need to be classified at appropriate stages. Therefore, this article addresses the physical infrastructure issues using Hadoop-based systems, in which a four-layer model is created. The four-layer model is integrated with a Fuzzy Inference System Algorithm (FISA) with low robustness, and data transfers in these layers are carried out against reference health data collected at various treatment centers. This new system model aims to minimize the loss functions present in biomedical signals, with an activation function introduced at the middle stages. The effectiveness of the proposed model is simulated in MATLAB using the biomedical signal processing toolbox, where FISA proves better in terms of signal strength, distance, and cost. As a comparative outcome, the proposed method outperforms conventional methods by an average of 78% in real-time conditions.
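The fuzzy classification idea can be sketched with triangular membership functions over a normalized signal-strength reading, with the winning label taken from the set with the highest membership degree. The breakpoints below are invented for illustration and are not the FISA parameters.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_signal(strength):
    """Grade a normalized biomedical signal as weak / moderate / strong
    by the fuzzy set with the highest membership degree."""
    degrees = {
        "weak": tri(strength, -0.1, 0.0, 0.5),
        "moderate": tri(strength, 0.2, 0.5, 0.8),
        "strong": tri(strength, 0.5, 1.0, 1.1),
    }
    return max(degrees, key=degrees.get)

print(classify_signal(0.1))  # → weak
print(classify_signal(0.9))  # → strong
```

A full Mamdani-style inference system would combine several such inputs through fuzzy rules and defuzzify the result; this sketch shows only the fuzzification and max-membership step.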
IDS Detection Based on Optimization Based on WI-CS and GNN Algorithm in SCADA Network
Industrial control systems (ICS) are considered among the inevitable systems of this contemporary smart world, and within them supervisory control and data acquisition (SCADA) is the centralized system that controls the entire grid. When a single system is the whole and sole controller, uncompromised security is obviously paramount. With that as a major concern, a great deal of research has been done on IDS security; in spite of this, existing approaches have several drawbacks, including increased false positive and false negative rates, which invariably lead to greater chaos. To overcome these problems, a weighted-intrusion based cuckoo search (WI-CS) and a graded neural network (GNN) method are proposed in this chapter. The key purpose of this chapter is to identify and categorize the anomalies in a SCADA system through data optimization. Initially, the collected real-time SCADA dataset is given as input; then, using the aforementioned machine learning algorithms, the data are clustered and optimized. Identifying the type of intrusion remains a further challenge, for which we propose the HNA-AA algorithm. The investigational results evaluate the efficiency of the system in terms of sensitivity, false detection rate, precision, recall, Jaccard index, accuracy, Dice coefficient, and specificity.
In this paper, the importance of monitoring a smart city through the integration of sensors and the Internet of Things (IoT) is discussed, with the establishment of a node control process. To describe the features of smart cities, time measurements are considered with one-hop distance between various nodes. A system model is therefore established for various parameters, integrated with the K-means algorithm for clustering and C4.5 for classification. By combining the two algorithms with the system model, it is possible to establish a secured state for each data point with a proper response factor. The major significance of the proposed method is the introduction of node points at which all growing queues in smart cities are controlled, thanks to the information obtained from every data point. Moreover, the improvement of the projected model is observed across four scenarios, where security at every data point plays an important role at an increased level of 84%. In addition to security, the number of stable points is increased, with a reduction in disparities of about 2%, so that every application in smart cities is monitored in a precise way.
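The clustering stage can be sketched with plain k-means on scalar readings. This is a toy one-dimensional illustration of the assign-then-update loop; the paper applies K-means to full smart-city feature vectors.

```python
def kmeans_1d(values, centers, iters=20):
    """Plain k-means on scalar readings: assign each value to the nearest
    centre, then move each centre to the mean of its members."""
    for _ in range(iters):
        groups = {c: [] for c in centers}
        for v in values:
            nearest = min(centers, key=lambda c: abs(v - c))
            groups[nearest].append(v)
        # Empty groups keep their old centre.
        centers = [sum(g) / len(g) if g else c for c, g in groups.items()]
    return sorted(centers)

# Toy latency readings from two kinds of nodes, with two initial centres.
print(kmeans_1d([1.0, 2.0, 1.5, 9.0, 10.0, 11.0], [0.0, 5.0]))  # → [1.5, 10.0]
```

A C4.5-style decision tree would then be trained on the cluster-labelled data to classify new readings, mirroring the dual-algorithm pipeline described above.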
Renewable energy sources are playing a leading role in today's world. However, integrating these sources into the distribution network through power electronic devices can lead to power quality (PQ) challenges. This work addresses PQ issues by utilizing a shunt active power filter in combination with an Energy Storage System (ESS), a Wind Energy Generation System (WEGS), and a Solar Energy System. While most previous research has relied on complex methods like the synchronous reference frame (SRF) and active-reactive power (pq) approaches, this work proposes a simplified approach by using a neural network (NN) for generating reference signals, along with the design of a five-level reduced switch voltage source converter. The gain values of the proportional-integral controller (PIC), as well as the parameters for the shunt filter, boost, and buck-boost converters in the WEGS and ESS, are optimally selected using the horse herd optimization algorithm. Additionally, the weights and biases for the neural network (NN) are also determined using this method. The proposed system aims to achieve three key objectives: (1) stabilizing the voltage across the DC bus capacitor; (2) reducing total harmonic distortion (THD) and improving the power factor; and (3) ensuring superior performance under varying demand and PV irradiation conditions. The system's effectiveness is evaluated through three different testing scenarios, with results compared against those obtained using the genetic algorithm, biogeography-based optimization (BBO), as well as conventional SRF and pq methods with PIC. The results clearly demonstrate that the proposed method achieves THD values of 3.69%, 3.76%, and 4.0%, which are lower than those of the other techniques and well within IEEE standards. The method was developed using MATLAB/Simulink version 2022b.
EEG-based smart emotion recognition using metaheuristic optimization and hybrid deep learning techniques
In the domain of passive brain-computer interface applications, the identification of emotions is both essential and formidable. Significant research has recently been undertaken on emotion identification from electroencephalogram (EEG) data. The aim of this project is to develop a system that can analyse an individual's EEG and differentiate among positive, neutral, and negative emotional states. The suggested methodology uses Independent Component Analysis (ICA) to remove Electromyogram (EMG) and Electrooculogram (EOG) artefacts from EEG channel recordings. Filtering techniques are employed to improve the quality of the EEG data by segmenting it into alpha, beta, gamma, and theta frequency bands. Feature extraction is performed with a hybrid meta-heuristic optimisation technique, ABC-GWO: the hybrid Artificial Bee Colony and Grey Wolf Optimiser is employed to extract optimised features from the selected dataset. Finally, comprehensive evaluations are conducted on DEAP and SEED, two publicly accessible datasets. The CNN model attains an accuracy of approximately 97% on the SEED dataset and 98% on the DEAP dataset, while the proposed hybrid CNN-ABC-GWO model, with ABC-GWO employed for hyperparameter tuning and classification, achieves an accuracy of approximately 99% on the SEED dataset and 100% on the DEAP dataset. The experimental findings are contrasted with a singular technique, a widely employed hybrid learning method, and the cutting-edge method; the proposed method enhances recognition performance.
A conjugate self-organizing migration (CSOM) and reconciliate multi-agent Markov learning (RMML) based cyborg intelligence mechanism for smart city security
Ensuring the privacy and trustworthiness of smart-city Internet of Things (IoT) networks has recently remained a central problem. Cyborg intelligence is one of the most popular and advanced technologies for securing smart city networks against cyber threats, and various machine learning and deep learning based cyborg intelligence mechanisms have been developed to protect them by ensuring safety, security, and privacy. However, these suffer from critical problems: high time complexity, high computational cost, difficulty of understanding, and a reduced level of security. Therefore, the proposed work implements a group of novel methodologies to develop an effective cyborg intelligence security model for smart city systems. Here, the Quantized Identical Data Imputation (QIDI) mechanism is implemented first for data preprocessing and normalization. Then, the Conjugate Self-Organizing Migration (CSOM) optimization algorithm is deployed to select the most relevant features for training the classifier, which also supports increased detection accuracy. Moreover, the Reconciliate Multi-Agent Markov Learning (RMML) classification algorithm predicts each intrusion with its appropriate class. The original contribution of this work is a novel cyborg intelligence framework for protecting smart city networks from modern cyber threats, in which a combination of unique and intelligent mechanisms ensures network security: QIDI for data filtering, CSOM for feature optimization and dimensionality reduction, and RMML for categorizing the type of intrusion. By using these methodologies, the overall attack detection performance and efficiency of the proposed cyborg model are greatly increased.
The main reason for using the CSOM methodology is to increase the learning speed and prediction performance of the classifier while detecting intrusions in smart city networks; CSOM provides an optimized feature set that improves the training and testing of the classifier with high accuracy and efficiency. Among comparable methodologies, CSOM has the unique characteristics of increased search efficiency, high convergence, and fast processing speed. During the evaluation, different types of cyber-threat datasets are considered for testing and validation, and the results are compared with recent state-of-the-art approaches.
Certain investigation on optimization technique for sensor nodes in the bio medical recording system
The main goal of this project is the creation of sensor-based software for health monitoring using Internet of Things (IoT) technology. The program's objective is to continuously monitor human physiological data, including ECG, SpO2, heart rate, and respiration, by employing biomedical sensor networks. These sensors collect data, which is then processed by a processor and transmitted to an edge server through a transceiver. An edge node facilitates real-time transmission, and the processed data are delivered to the patient's phone and the clinicians' LED display. To address the optimization challenge, the program utilizes a Double Deep Q-Network approach, with parameters optimized using a hybrid genetic algorithm-based simulated annealing technique. However, the healthcare readings obtained from the sensors are susceptible to change due to environmental factors, leading to potential performance issues; to overcome this, an optimization approach is employed to refine the proposed technique and ensure accurate prediction of readings. The study conducted experiments to evaluate the program's performance using various metrics and parameters, and the results shed light on how well the developed IoT-based health monitoring program works. In summary, this study presents an innovative sensor-based program for IoT-based health monitoring that continuously monitors human physiological data and incorporates a hybrid optimization approach to ensure accurate prediction of readings, accounting for environmental factors. The proposed Double Deep Q-Network and the evaluation metrics employed demonstrate the originality and contributions of this research in advancing health monitoring systems.
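The simulated-annealing half of the hybrid tuner can be sketched for a single scalar parameter: worse moves are accepted with probability exp(-delta/T), and the temperature cools geometrically. The objective, step size, and cooling schedule below are invented for illustration and are not the project's tuned values.

```python
import math
import random

def simulated_annealing(cost, start, step=0.5, t0=1.0, cooling=0.95, iters=200):
    """Minimise `cost` over one scalar parameter: accept improving moves
    always, worse moves with probability exp(-delta / T), and cool T."""
    random.seed(7)          # deterministic run for the sketch
    x, t = start, t0
    best = x
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        delta = cost(cand) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / max(t, 1e-9)):
            x = cand                       # move, possibly uphill
        if cost(x) < cost(best):
            best = x                       # remember the best point seen
        t *= cooling
    return best

# Toy objective: recover the parameter value 3.0.
best = simulated_annealing(lambda p: (p - 3.0) ** 2, start=0.0)
print(best)
```

In the hybrid scheme described above, a genetic algorithm would supply candidate parameter vectors and annealing would refine them; here only the annealing refinement is shown.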
This study explores the feasibility of allocating finite resources beyond fifth-generation networks for extended reality applications through the implementation of enhanced security measures via offloading analysis (RLIS). Resources are quantified through parameters, namely energy, capacity, and power, equipped with proximity constraints; these constraints are then integrated with activation functions in both multilayer perceptron and long short-term memory models. Furthermore, the system model is developed using vision-based computing, which manages data queues in terms of waiting periods to minimize congestion when transmitting data with limited resources. The major significance of the proposed method is that it utilizes the allocated spectrum of future-generation networks by assigning only the necessary resources, so that excessive resource usage by all users can be avoided. A further advantage is that it secures networks operating beyond 5G, where a larger number of users will try to share the allocated resources and high-security conditions must therefore be provided.
Effective management and control of large-scale networks can be challenging in the absence of appropriate resource allocation. This paper presents a framework highlighting the significance of resource allocation in mobile, wireless, and ad hoc networks. The model incorporates a clustering protocol and a schedule-based resource allocation algorithm, resulting in a multi-objective framework that places significant emphasis on minimizing energy and distance. The nodes are grouped into several clusters in which individual energy is allocated, and the cluster head in each cluster allows the nodes to communicate over the shortest distance. The transmission speed of the transmitted information is maximized, so a greater amount of time is saved and the stability factor is maximized. To test the allocated resources in the network, the proposed method compares and evaluates the parametric outcomes against an existing method across five scenarios. The comparative analysis shows that the proposed method can maximize the lifetime and quality of service of all networks, with an optimized range of 84%.
MSI-A: An Energy Efficient Approximated Cache Coherence Protocol
Energy consumption has become an essential factor in designing modern computer system architecture. Because of physical limits, the end of Moore's law and Dennard scaling has forced the computer design community to investigate new approaches to meet the requirements for computing resources. Approximate computing has emerged as a promising method for reducing energy consumption while trading a controllable quality loss. This paper asserts that an approximated cache coherence protocol preserves overall computation energy: the protocol can be approximated by marking cache lines as approximate, up to a certain level, without hindering the output. This paper introduces MSI-A (Modified Shared Invalid-Approx), an enhanced approximated version of the MSI (Modified Shared Invalid) cache coherence protocol. We have verified MSI-A and MSI by employing LTL specifications in the NuSMV model checker, and to illustrate the benefits of MSI-A we have added a DTMC (Discrete-Time Markov Chain) with PCTL (Probabilistic Computational Tree Logic). Although the PCTL analysis proves the approximation theory, we have also simulated MSI-A in the TEJAS hardware simulator on PARSEC 3.0 to investigate its energy and cycle gains in varied applications. The proportion of cache lines treated as approximate ranges from 10 to 30 percent. Each application benefited from approximation according to its nature, and VIPS showed a total energy gain of 30.18 percent.
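The baseline MSI transitions that MSI-A extends can be sketched as a per-line lookup table over local processor events and snooped bus events. This models only classic MSI; the approximation machinery of MSI-A and the NuSMV/DTMC verification are not represented here.

```python
# Per-line MSI state transitions. Local events come from this core's
# processor; bus events are snooped from other cores' traffic.
LOCAL = {
    ("I", "read"): "S", ("I", "write"): "M",
    ("S", "read"): "S", ("S", "write"): "M",
    ("M", "read"): "M", ("M", "write"): "M",
}
SNOOP = {
    ("M", "bus_read"): "S", ("M", "bus_write"): "I",
    ("S", "bus_read"): "S", ("S", "bus_write"): "I",
    ("I", "bus_read"): "I", ("I", "bus_write"): "I",
}

def step(state, event):
    """Advance one cache line's MSI state for a local or snooped event."""
    table = LOCAL if event in ("read", "write") else SNOOP
    return table[(state, event)]

# A line read locally, invalidated by another core's write, then re-written.
state = "I"
for ev in ["read", "bus_write", "write"]:
    state = step(state, ev)
print(state)  # → M
```

In an approximated variant, lines tagged approximate could skip some of these invalidations and tolerate stale data, which is where the energy savings come from.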
Connotation of Unconventional Drones for Agricultural Applications with Node Arrangements Using Neural Networks
In current drone development, most system designs are based on high-weight functionalities; as a result, if a drone drops at a particular point, the entire design is fragmented. Moreover, well-defined functionalities of drones for a specific application can only be designed if radial functionalities are defined at proper angles. Therefore, this article addresses the issues in existing methods using the CRA algorithm, in which radial functions, represented in terms of input and hidden weighting functions, are fully explored. Additionally, a novel analytical procedure that establishes the coverage area for the data transfer approach has been incorporated into the drones' architecture, and by employing motion signatures and a special identification system, the developed drone system can operate along various paths. To evaluate the effectiveness of the suggested system, three scenarios are organized as a basic functionality model. With the right scattering ratio, the comparative results show that the proposed approach can achieve an 82% success rate.
Geometric Optimisation of Unmanned Aerial Vehicle Trajectories in Uncertain Environments
The problem of efficient trajectory optimisation for Unmanned Aerial Vehicles (UAVs) in dynamic and constrained environments is one in which energy efficiency, spatial coverage, and path smoothness must be balanced. Existing methods such as RRT*, A*, and Dijkstra are popular but largely heuristic and do not provide globally optimal solutions; they face significant limitations when dealing with complex geometries, dynamic obstacles, and multi-objective requirements. These challenges call for a mathematically sound framework that seamlessly integrates convex analysis and computational geometry for optimal trajectory planning. This research introduces a convex optimisation framework for UAV trajectory planning which unifies multiple objectives, such as minimising energy consumption, maximising spatial coverage, and ensuring path smoothness, into a single convex objective function. More importantly, the framework handles obstacle dynamics and uncertain environmental conditions better, making safe and efficient navigation easier. Proven to converge faster and with higher precision than RRT*, A*, and Dijkstra, the proposed approach enjoys intrinsic convexity properties which ensure global optimality. Quantitative measurements show the efficiency of the proposed framework: energy efficiency of 90%, 92% coverage, 98% constraint satisfaction, and 95% path smoothness, 15-25% better than traditional approaches on all metrics. By bridging convex optimisation theory and the practice of solving multi-objective problems in dynamic settings, this study provides a more robust solution for UAV trajectory planning.
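The core idea of folding several objectives into one convex cost can be sketched as follows; the quadratic energy and smoothness terms, the weights, and the plain gradient descent on a 1-D waypoint sequence are simplifying assumptions rather than the authors' formulation, but convexity still guarantees the global optimum.

```python
def objective(path, w_energy=1.0, w_smooth=1.0):
    """Convex cost: squared segment lengths (energy proxy) plus squared
    second differences (smoothness proxy) over a 1-D waypoint list."""
    energy = sum((b - a) ** 2 for a, b in zip(path, path[1:]))
    smooth = sum((path[i - 1] - 2 * path[i] + path[i + 1]) ** 2
                 for i in range(1, len(path) - 1))
    return w_energy * energy + w_smooth * smooth

def optimise(path, steps=400, lr=0.02):
    """Gradient descent on interior waypoints; endpoints stay fixed,
    which keeps the start/goal constraints satisfied."""
    path = list(path)
    n, eps = len(path), 1e-6
    for _ in range(steps):
        base = objective(path)
        grads = {}
        for i in range(1, n - 1):
            bumped = list(path)
            bumped[i] += eps
            grads[i] = (objective(bumped) - base) / eps  # forward difference
        for i, g in grads.items():
            path[i] -= lr * g
    return path

rough = [0.0, 3.0, -1.0, 4.0, 2.0]
smooth_path = optimise(rough)
print(objective(smooth_path) < objective(rough))  # True
```

Because the objective is convex, any descent method converges to the same global minimum, which is the property the paper leverages against RRT*, A*, and Dijkstra.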
Leveraging stacking machine learning models and optimization for improved cyberattack detection
The ever-growing number of complex cyber attacks calls for high-level intrusion detection systems (IDS). While existing research covers traditional, hybrid, and ensemble methods for network data analysis, serious challenges remain in producing robust and highly accurate detection systems. Managing high-dimensional network traffic is difficult because current methodologies struggle with imbalanced data, where minority classes are overwhelmed by the majority, and with high false positive rates. This study introduces an innovative framework that directly addresses these persistent challenges through a novel approach to intrusion detection. The proposed method stacks two ML models, J48 and ExtraTreeClassifier, for classification. In addition, we propose an enhanced equilibrium optimizer (EEO) that modifies the original equilibrium optimizer (EO): the Fisher score and the accuracy score of the K-Nearest Neighbors (KNN) algorithm select attributes optimally, while the synthetic minority oversampling technique combined with iterative partitioning filters (SMOTE-IPF) provides class balancing. The KNN technique is also used for data imputation to improve overall system accuracy. The superior performance of the framework has been validated experimentally on several benchmark datasets, namely NSL-KDD and UNSW-NB15, achieving accuracies of 99.7% and 98.1% and F1 scores of 99.6% and 98.0%, respectively. A comparative analysis with recent state-of-the-art works shows that the proposed methodology yields improvements in feature selection precision, classification accuracy, handling of minority class instances, storage demands, and computational efficiency.
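The Fisher score used for attribute selection can be sketched in a few lines; the toy features below are hypothetical, and the real pipeline would score NSL-KDD/UNSW-NB15 attributes inside the enhanced equilibrium optimizer.

```python
def fisher_score(values, labels):
    """Fisher score of one feature: between-class scatter over
    within-class scatter. Higher means more discriminative."""
    classes = set(labels)
    overall = sum(values) / len(values)
    num = den = 0.0
    for c in classes:
        vc = [v for v, l in zip(values, labels) if l == c]
        mu = sum(vc) / len(vc)
        var = sum((v - mu) ** 2 for v in vc) / len(vc)
        num += len(vc) * (mu - overall) ** 2
        den += len(vc) * var
    return num / den if den else float("inf")

# Feature f1 separates the two classes cleanly; f2 is pure noise.
labels = [0, 0, 0, 1, 1, 1]
f1 = [0.1, 0.2, 0.15, 0.9, 0.95, 1.0]
f2 = [0.5, 0.9, 0.1, 0.6, 0.2, 0.8]
print(fisher_score(f1, labels) > fisher_score(f2, labels))  # True
```

Ranking attributes this way lets the optimizer discard noisy dimensions before the stacked classifiers ever see them.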
This article presents the utilization of a shunt active power filter (SHAPF) in combination with an Energy Storage System (ESS) and a Solar Energy System (SES). Voltage source converters (VSC) are connected in parallel to a direct current (DC) bus. The membership functions (MSF) of the fuzzy logic controller (FLC) for the shunt control system are optimally adjusted using the golden ball optimization algorithm (GBOA). The present effort aims to achieve the following primary objectives: 1) quick stabilization of the DC link capacitor voltage (DCLCV); 2) mitigation of harmonics and improvement of the power factor (PF); 3) satisfactory performance under varying load and solar power conditions. The effectiveness of the optimally designed controller is evaluated in four test scenarios under grid-connected and standalone conditions, and the results are compared with existing sliding mode (SMC) and fuzzy logic (FLC) controllers.
The behavior and performance of distribution systems have been significantly impacted by the presence of solar- and wind-based renewable energy sources (RES) and battery energy storage system (BESS) based electric vehicle (EV) charging stations. This work designs the Unified Power Quality Conditioner (UPQC) through optimal selection of the active filter and PID controller (PIDC) parameters using the enhanced most valuable player algorithm (EMVPA). The prime objective is to effectively address power quality (PQ) challenges, such as voltage distortions and total harmonic distortion (THD), in a distribution system integrated with UPQC, solar, wind, BESS, and EV (U-SWBEV). The study also manages the power flow between the RES, grid, EV, BESS, and consumer loads using an adaptive neuro-fuzzy inference system (ANFIS). This integration helps to ensure a reliable supply of electricity, efficient utilization of generated power, and effective fulfillment of demand. The proposed scheme results in THDs of 4.5%, 2.26%, 4.09%, and 3.98% for the four selected case studies, with a power factor close to unity and appropriate power sharing. The study and results therefore indicate that ANFIS-based power flow management with an optimally designed UPQC addresses the PQ challenges and achieves appropriate and effective sharing of power.
Multi-objective quantum hybrid evolutionary algorithms for enhancing quality-of-service in internet of things
In the context of Internet of Things (IoT), optimizing quality of service (QoS) parameters is a critical challenge due to its heterogeneous and resource-constrained nature. This paper proposes a novel quantum-inspired multi-objective optimization algorithm for IoT service management. Traditional multi-objective optimization algorithms often face limitations such as slow convergence and susceptibility to local optima, reducing their effectiveness in complex IoT environments. To address these issues, we introduce a quantum-inspired hybrid algorithm that combines the strengths of Multi-Objective Grey Wolf Optimization Algorithm (MOGWOA) and Multi-Objective Whale Optimization Algorithm (MOWOA), enhanced with quantum principles. This novel integration overcomes the limitations of traditional algorithms by improving convergence speed and avoiding local optima. The hybrid algorithm enhances QoS in IoT applications by achieving superior optimization in terms of energy efficiency, latency reduction, convergence, and coverage cost. The incorporation of quantum-inspired mechanisms, such as quantum position and behavior, strengthens the exploration and exploitation capabilities of the algorithm, enabling faster and more accurate optimization. Extensive simulations and testing demonstrate the proposed method’s superior performance compared to existing algorithms, validating its effectiveness in addressing key IoT challenges.
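As a rough sketch of the grey-wolf component the hybrid builds on, the following minimises a sphere function with the standard three-leader position update; the quantum and whale mechanisms of the paper are omitted, and all constants are illustrative.

```python
import random

def gwo_minimise(f, dim=3, wolves=20, iters=200, lo=-5.0, hi=5.0):
    random.seed(1)
    pack = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    for t in range(iters):
        pack.sort(key=f)
        # Copy the three best wolves (alpha, beta, delta) as fixed leaders.
        alpha, beta, delta = (list(p) for p in pack[:3])
        a = 2 - 2 * t / iters  # shrinks over time: exploration -> exploitation
        for w in pack:
            for d in range(dim):
                pull = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = random.random(), random.random()
                    A, C = a * (2 * r1 - 1), 2 * r2
                    pull += leader[d] - A * abs(C * leader[d] - w[d])
                w[d] = min(hi, max(lo, pull / 3))  # average of the leader pulls
    return min(pack, key=f)

sphere = lambda x: sum(v * v for v in x)
best = gwo_minimise(sphere)
print(sphere(best) < 1.0)  # wolves converge near the origin
```

In the paper this single-objective update is extended with whale-style spiral moves, quantum position encoding, and Pareto bookkeeping over the QoS objectives.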
Prediction of malnutrition in kids by integrating ResNet-50-based deep learning technique using facial images
Severe acute malnutrition (SAM) in India is considered a serious issue as per UNICEF 2022 records, which report that 35.5% of children under age 5 are stunted, 19.3% are wasted, and 32% are underweight. Malnutrition, defined by these three conditions, affects 5.7 million children globally. This research utilizes an artificial intelligence-based image segmentation technique to predict malnutrition in children. The primary goal is to use a deep learning model to eliminate the need for multiple manual diagnostic tests and simplify the prediction of malnutrition in kids. Traditional models use text-based data and require lengthy, continuous monitoring of children by analysing body mass index (BMI) over different periods. Children in rural areas often miss medical expert appointments, and a lack of knowledge among parents can lead to severe malnutrition. The proposed system aims to eliminate the need for manual blood tests and regular visits to medical experts. This study uses the ResNet-50 deep learning model's built-in shortcut connections to mitigate the vanishing gradient problem, making training more efficient for the image segmentation task of predicting malnutrition. The model is 98.49% accurate in distinguishing malnourished kids from healthy ones. The results show that the proposed system performs better than other deep learning models such as XGBoost (75.29% accuracy), VGG16 (94% accuracy), Xception (95.41% accuracy), and MobileNet (92.42% accuracy). Hence, the proposed technique is effective in detecting malnutrition and diagnosing it earlier, without predictive analysis functions or advice from medical experts.
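The shortcut connection the abstract credits for avoiding vanishing gradients can be illustrated numerically: a residual block outputs f(x) + x, so the identity path carries the signal even when f nearly dies (toy numbers, not ResNet-50 itself).

```python
def residual_block(x, f):
    """A residual block computes f(x) + x: the identity shortcut."""
    return f(x) + x

dead_layer = lambda x: 1e-8 * x  # a layer whose signal has almost vanished

x = 2.0
plain = dead_layer(x)                       # signal essentially lost
with_skip = residual_block(x, dead_layer)   # ≈ 2.0: shortcut keeps it alive
print(plain, with_skip)
```

Stacked fifty layers deep, this identity path is what lets gradients reach the early layers during training, which is why ResNet-50 trains reliably on image tasks like this one.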
The productivity of agriculture plays a critical role in the Indian economy. Growing crop production is a critical responsibility nowadays, both to accommodate citizen demand and to provide farmers with greater rewards. Therefore, a machine learning (ML) technique is employed to identify diseases and pests on leaves and other crop parts more precisely. This paper introduces a machine learning-based system for early crop disease and pest detection using image processing and optimization. Initially, data is collected from the CCMT plant disease dataset. Image augmentation techniques such as rotation, flipping, and zooming are used to enlarge the dataset. After augmentation, the images are pre-processed: an adaptive bilateral filter performs noise reduction and quality enhancement, and the images are resized with Lanczos interpolation and normalized before analysis. Kapur's entropy-based whale optimization is introduced to segment the images efficiently by isolating diseased areas. Features are extracted using the Gray Level Co-occurrence Matrix, which assesses relationships among pixels and produces an appropriate feature matrix for color images. This processed data then feeds into a Moth-Flame Optimized Recurrent Neural Network for crop disease and pest detection. The results achieve high accuracy: 98.4% for cashew, 98.3% for cassava, 98.5% for maize, and 96.8% for tomato crops, outperforming all reported techniques.
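Kapur's entropy criterion at the heart of the segmentation step can be sketched with a single threshold and an exhaustive scan; the paper searches thresholds with whale optimization instead, and the toy histogram below is hypothetical.

```python
import math

def kapur_threshold(hist):
    """Pick the grey level t that maximises the summed entropies of the
    background [0..t) and foreground [t..L) intensity distributions."""
    total = sum(hist)
    probs = [h / total for h in hist]
    best_t, best_h = 0, -1.0
    for t in range(1, len(probs) - 1):
        w0 = sum(probs[:t]) or 1e-12
        w1 = (1 - w0) if (1 - w0) > 0 else 1e-12
        h0 = -sum(p / w0 * math.log(p / w0) for p in probs[:t] if p > 0)
        h1 = -sum(p / w1 * math.log(p / w1) for p in probs[t:] if p > 0)
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

# Bimodal toy histogram: dark "leaf" pixels vs bright "lesion" pixels.
hist = [30, 40, 30, 0, 0, 0, 0, 20, 35, 25]
t = kapur_threshold(hist)
print(2 < t < 8)  # True: the threshold lands in the valley between the modes
```

With many thresholds the scan becomes exponential, which is exactly why the paper hands the search over to a metaheuristic.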
In public spaces, threats to societal security are a major concern, and emerging technologies offer potential countermeasures. The proposed intelligent person identification system monitors and identifies individuals in public spaces using gait, face, and iris recognition. The system employs a multimodal approach for secure identification and utilises pretrained deep convolutional neural networks (DCNNs) to predict individuals. For increased accuracy, the proposed system is implemented on a cloud server and integrated with citizen identification systems such as Aadhaar/SSN. The performance of the system is determined by the accuracy rate achieved when identifying individuals in a public space. The proposed multimodal secure identification system achieves a 94% accuracy rate, higher than that of existing public space person identification systems. Integration with citizen identification systems improves precision and provides immediate life-saving assistance to those in need. Utilising secure deep learning techniques for precise person identification, the proposed system offers a promising solution to security threats in public spaces, with potential applications including accident identification, theft identification, and intruder identification.
Reconnoitering the significance of security using multiple cloud environments for conveyance applications with blowfish algorithm
In recent years, transportation has needed a highly effective traffic system to monitor all consumer goods, as many goods are left behind at different locations. A cloud platform is very helpful for handling such movement, since goods can be mapped correctly with respect to geographical location. However, a single cloud platform does not provide sufficient storage for all goods, so the proposed system introduces a multiple cloud platform. With multiple cloud platforms in place, the security features of each database system are also checked and enhanced using encryption keys. Moreover, for proper operation of the multiple cloud platforms, an analytical model is designed that synchronizes the necessary data at the end system. The defined analytical model focuses on solving multiple objectives related to critical energy problems, reducing demand problems. Further, the encryption process is carried out using an Improved Blowfish Algorithm (IBFA) by allocating proper resources with decryption keys. To validate the effectiveness of the proposed method, five scenarios are considered, and the outcomes of all scenarios prove to be higher than existing models by an average of 43%.
Mathematical approach of fiber optics for renewable energy sources using general adversarial networks
It is significantly more challenging to extend the visibility factor to greater depths during the development of a communication system for subterranean places. Even though numerous optical fiber systems provide the right energy sources for the intended panels, the visibility parameter is not optimized past a certain point. Therefore, the suggested method examines the properties of a fiber optic communication system that is integrated with a particular energy source and external panels. A regulating state is established, in addition to characteristic analysis, by minimizing the reflection index, and the integration of a generative adversarial network (GAN) optimizes both central and layer formations in the exterior panels. The suggested technique thus uses the external noise factor to provide relevant data to the control center via fiber optic links. As a result, the normalized error is smaller, boosting the suggested method's effectiveness in all subsurface areas. The created mathematical model is divided into five different situations, and the results are simulated in MATLAB to test the effectiveness of the anticipated strategy. Comparisons across the five scenarios show that the proposed fiber-optic method for energy sources is far more effective than current methodologies.
Optimal Feature Selection Based on Evolutionary Algorithm for Intrusion Detection
Over the past decades, internet usage has become inevitable due to its tremendous applications in various fields. This huge network usage gives rise to many security problems. An intrusion detection system (IDS) monitors network events and filters out abnormal activities. While monitoring events, large amounts of data samples are collected from sensors, and the features required for IDS classification are extracted from the raw data. Selecting the best features from the raw data can be performed by an optimal feature selection method. An SVM classifier is used to compute the detection accuracy. The proposed model is tested on the KDD99 benchmark dataset. Compared to other machine learning algorithms, the proposed method produced better results.
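A minimal sketch of evolutionary feature selection: bitmask chromosomes with one-point crossover and bit-flip mutation; the SVM-accuracy fitness from the paper is replaced here by a stub that rewards a hypothetical informative subset.

```python
import random

N_FEATURES = 8
INFORMATIVE = {0, 3, 5}  # hypothetical "good" features for the stub fitness

def fitness(mask):
    """Stub standing in for SVM detection accuracy on KDD99: reward the
    informative features, lightly penalise every extra one selected."""
    hits = sum(mask[i] for i in INFORMATIVE)
    extras = sum(mask) - hits
    return hits - 0.2 * extras

def evolve(pop_size=40, gens=40):
    random.seed(0)
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_FEATURES)
            child = a[:cut] + b[cut:]           # one-point crossover
            i = random.randrange(N_FEATURES)
            child[i] ^= random.random() < 0.1   # occasional bit flip
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(sorted(i for i, bit in enumerate(best) if bit))
```

In the real system each fitness evaluation trains and scores an SVM on the candidate feature subset, so the search converges on the subset that maximises detection accuracy.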
Intelligent Intrusion Detection Algorithm Based on Multi-Attack for Edge-Assisted Internet of Things
Security concerns surrounding edge computing have come to the forefront as the technology has become increasingly popular. There is a growing need to move computations to edge servers as more IoT applications that take advantage of edge computing are developed, and security at the periphery of information transmission in the Internet of Things is essential. In this work we introduce a multi-attack IDS for edge-assisted IoT that combines a back propagation (BP) neural network with a radial basis function (RBF) neural network. Specifically, we employ the BP neural network to spot outliers and zero in on the most important characteristics of each attack methodology. The RBF neural network is then used to detect multi-attack intrusions. The findings demonstrate high accuracy in the given multi-attack scenario, demonstrating the potential and efficiency of our proposed anomaly detection methodology.
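The RBF stage can be sketched as a forward pass over Gaussian hidden units; the centres, widths, and weights below are illustrative placeholders rather than parameters learned from traffic data.

```python
import math

def rbf_forward(x, centres, widths, weights, bias=0.0):
    """y = bias + sum_j w_j * exp(-||x - c_j||^2 / (2 * s_j^2))"""
    y = bias
    for c, s, w in zip(centres, widths, weights):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        y += w * math.exp(-d2 / (2 * s * s))
    return y

# Two centres: "normal traffic" at (0,0) and "attack-like" at (1,1).
centres = [(0.0, 0.0), (1.0, 1.0)]
widths  = [0.5, 0.5]
weights = [-1.0, +1.0]  # negative score = normal, positive = attack

print(rbf_forward((0.05, 0.0), centres, widths, weights) < 0)  # True: near normal
print(rbf_forward((0.95, 1.0), centres, widths, weights) > 0)  # True: near attack
```

A multi-attack classifier simply uses one output per attack class, with the BP network having already pruned the input features beforehand.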
Secured data transmissions in corporeal unmanned device to device using machine learning algorithm
Cyber–physical systems (CPS) for device-to-device (D2D) communications are gaining prominence in today's sophisticated data transmission infrastructures. This research develops a new model for UAV transmissions across distinct network nodes, which is necessary because an automatic monitoring system is required to enhance the current D2D application infrastructure. The real-time significance of the proposed UAV for D2D communications is most visible during data transmission, where individual data has a strong impact on maximizing D2D security. Additionally, through simulation, an exploratory persistence tool is offered for CPS networks with fully characterized energy issues. This UAV CPS paradigm is based on mobility nodes, which host concurrent systems and control algorithms. The method is also feasible in sixth-generation networks, where there are no barriers, the collision rate is low, and connectivity is fast. Unmanned aerial vehicles (UAVs) can now cover great distances, even while encountering hazardous obstacles. Compared to pre-existing models, the simulated values for autonomy, collision, and parametric reliability are better by an average of 87%. The proposed model is shown to be highly independent and exhibits stable perceptual behaviour, and its potential for more secure operation via a variety of communication modules makes the proposed UAV approach optimal for real-time applications.
Perception Exploration on Robustness Syndromes With Pre-processing Entities Using Machine Learning Algorithm
The majority of current-generation individuals all around the world are dealing with a variety of health-related issues. The most common cause of health problems has been found to be depression arising from intellectual difficulties. However, most people are unable to recognize such occurrences in themselves, and no procedures for distinguishing them from healthy individuals have been created so far. Even some advanced technologies do not support distinct classes of individuals, as language writing skills vary greatly across regions, making the central operations cumbersome. As a result, the primary goal of the proposed research is to create a unique model that can detect a variety of disorders in humans, thereby averting a high level of depression. A machine learning method, the Convolutional Neural Network (CNN) model, has been included in this evolutionary process for extracting numerous features in three distinct units. The CNN also detects early-stage problems, since it accepts input in the form of writing and sketching, both of which are converted to images. Furthermore, with this sort of image emotion analysis, ordinary reactions may be easily differentiated, resulting in more accurate prediction results. Characteristics such as reference line, tilt, length, edge, constraint, alignment, separation, and sectors are analyzed to test the usefulness of the CNN for recognizing abnormalities, and the extracted features provide an enhanced value around 74% higher than conventional models.
Deep Conviction Systems for Biomedical Applications Using Intuiting Procedures With Cross Point Approach
The production, testing, and processing of signals without any interpretation is a crucial task, with tight time-scale periods, in today's biomedical applications. As a result, the proposed work uses a deep learning model to handle difficulties that arise during the processing stage of biomedical information. Deep Conviction Systems (DCS) are employed at the integration step of this procedure, which uses classification processes with a large number of characteristics. In addition, a novel system model for analyzing the behavior of biomedical signals has been developed, complete with an output tracking mechanism that delivers transceiver results in a low-power implementation approach. Because low-power transceivers are integrated, the implementation cost of the designated output units is reduced. To prove the feasibility of DCS, convergence and robustness characteristics are observed by incorporating an interface system that is processed with a deep learning toolbox. Comparative test results show DCS to be about 79 percent more effective across all experimental scenarios with varying time periods.
Predicting Epidemic Outbreaks Using IOT, Artificial Intelligence and Cloud
All COVID-19-affected countries are putting effort into dealing with the spread of this deadly disease in terms of infrastructure, economics, medical treatment, and many other resources. Many coronavirus analysis and prediction models are now available to support decisions and to inform the public. However, in the absence of the necessary data, these models are unable to produce precise values. Based on the available datasets and reports, and on account of the uniform nature of the coronavirus alongside variations in its behaviour from place to place, this study recommends machine learning and deep learning as worthwhile tools for modelling the outbreak. For the well-being of society, we use ML and deep learning models to understand the virus's day-to-day exponential behaviour and to predict the further growth of COVID-19 across the world using the available facts and datasets.
Detection of superfluous in channels using data fusion with wireless sensors and fuzzy interface algorithm
Sensing information in the presence of leakage in large-scale pipeline systems is very challenging without integrating various sensor data sets. A data fusion methodology, wherein data from multiple sensors is merged to give relevant information, is necessary to transform this challenging process into a straightforward step-by-step operation. Ultrasonic sensors are used in stage 1 to identify any ambiguities in pipeline systems, and various sites are used to gauge the rate of leak detection. As a result, a novel model for estimating various types of gas leakage in pipeline systems is examined, tested, and compared. Five distinct scenarios are considered during the leakage testing procedure using data fusion, where optimization is done using a fuzzy inference technique. This integration procedure detects leakage rates with high accuracy, and the best outcomes are obtained in every test instance. Additionally, the proposed model can be used in real time with a low failure rate across numerous sensors, with MATLAB being used to simulate the results.
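One common way to realise the fusion step is inverse-variance weighting of the individual sensor estimates; this is a simplified stand-in for the paper's fuzzy-inference stage, and the readings are hypothetical.

```python
def fuse(readings, variances):
    """Inverse-variance weighted average: precise sensors count more."""
    weights = [1.0 / v for v in variances]
    return sum(w * r for w, r in zip(weights, readings)) / sum(weights)

# Three leak-rate estimates (L/min, hypothetical); the last sensor is
# noisy and biased high.
readings  = [4.9, 5.1, 7.0]
variances = [0.1, 0.1, 2.0]
print(round(fuse(readings, variances), 2))  # 5.05 — barely moved by the bad sensor
```

Fusing this way keeps one faulty ultrasonic sensor from dominating the leak-rate estimate, which is the low-failure-rate behaviour the abstract describes.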
An Intelligent Security Framework Based on Collaborative Mutual Authentication Model for Smart City Networks
With the advent of smart city networks and the increased utilization of vehicles, the Internet of Vehicles (IoV) has attracted more attention from researchers. However, providing security to this type of network is one of the most challenging and demanding tasks of the present day. For this purpose, conventional works have developed many networking frameworks and methodologies to enhance the privacy and security of smart city systems. Still, these have significant limitations: high complexity in algorithm design, long processing times, reduced maintainability, and a lack of proper authentication verification. Therefore, the primary purpose of this work is to develop a new security model for smart city networks using a combination of methodologies. Here, the Collaborative Mutual Authentication (CMA) mechanism is used to validate the identity of users based on the private key, public key, session key, and generated hash function. In addition, the Meta-heuristic Genetic Algorithm – Random Forest (MGA-RF) technique is deployed to detect attacks in the network, ensuring the security of the smart city. During evaluation, the proposed authentication-based security mechanism's performance is validated using various parameters, and the results are compared with recent state-of-the-art models.
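The flavour of a mutual challenge-response handshake like CMA can be sketched with stdlib HMAC; the key name, nonce sizes, and session-key derivation below are assumptions, not the paper's exact construction.

```python
import hmac, hashlib, os

SHARED_KEY = b"pre-provisioned-device-key"  # hypothetical shared secret

def prove(key, challenge):
    """HMAC-SHA256 proof of key possession over a fresh challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# 1. The server challenges the device, which proves it holds the key.
server_nonce = os.urandom(16)
device_proof = prove(SHARED_KEY, server_nonce)
assert hmac.compare_digest(device_proof, prove(SHARED_KEY, server_nonce))

# 2. The device challenges back, so the server proves itself too (mutuality).
device_nonce = os.urandom(16)
server_proof = prove(SHARED_KEY, device_nonce)
assert hmac.compare_digest(server_proof, prove(SHARED_KEY, device_nonce))

# 3. Both sides derive the same session key from the two nonces.
session_key = prove(SHARED_KEY, server_nonce + device_nonce)
print(len(session_key))  # 32 (SHA-256 digest length)
```

Fresh nonces on both sides prevent replay, and the constant-time `compare_digest` check avoids timing leaks during verification.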
Construal Attacks on Wireless Data Storage Applications and Unraveling Using Machine Learning Algorithm
Cloud services are a popular concept used to describe how internet-based services are delivered and maintained, and the computing environment is being restructured with respect to information preservation. Data protection is of critical importance when storing huge volumes of information, and in today's cyber world an intrusion is a significant security problem. Services and information in the cloud are vulnerable to attack due to the distributed structure of the cloud. Intrusion detection systems (IDS) detect inappropriate behaviour in the connection and in the host. DDoS attacks are difficult to protect against since they produce massive volumes of harmful traffic on the network; such an assault makes cloud services unavailable to target consumers, depletes computing resources, and leaves the provider exposed to massive financial and reputational losses. Data mining techniques may assist cyber-analysts in intrusion detection, and machine learning techniques are used to create many such strategies. Attribute selection techniques are also vital in keeping the dataset's dimensionality low. In this study, two strategies are evaluated on the NSL-KDD dataset: in the first, a filtering method called learning vector quantization (LVQ) is used, and in the second, the dimensionality-reduction method PCA is applied. The attributes selected by each technique are used for classification and then tested against a DoS attack. This study shows that an LVQ-based SVM performs better than the competition in detecting threats.
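The LVQ1 prototype-update rule behind the filtering strategy can be sketched in one dimension; real use would operate on NSL-KDD feature vectors with many prototypes per class, and the samples here are illustrative.

```python
def lvq1_step(protos, labels, x, y, lr=0.2):
    """Move the nearest prototype toward x if its label matches the
    sample's label, and away from x if it does not (LVQ1 rule)."""
    i = min(range(len(protos)), key=lambda j: abs(protos[j] - x))
    sign = 1.0 if labels[i] == y else -1.0
    protos[i] += sign * lr * (x - protos[i])
    return protos

protos = [0.0, 1.0]          # one prototype per class
labels = ["normal", "dos"]
for x, y in [(0.1, "normal"), (0.9, "dos"), (0.2, "normal"), (1.1, "dos")]:
    lvq1_step(protos, labels, x, y)

# Prototypes stay separated, each drifting toward its own class samples.
print(protos[0] < 0.5 < protos[1])  # True
```

The learned prototypes summarise each class, so downstream attribute scoring and the SVM see a compact, class-aware representation of the traffic.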
A Proficient ZESO-DRKFC Model for Smart Grid SCADA Security
Smart grids are complex cyber-physical systems that incorporate smart devices’ communication capabilities into the grid to enable remote management and the control of power systems. However, this integration reveals numerous SCADA system flaws, which could compromise security goals and pose severe cyber threats to the smart grid. In conventional works, various attack detection methodologies are developed to strengthen the security of smart grid SCADA systems. However, they have several issues with complexity, slow training speed, time consumption, and inaccurate prediction outcomes. The purpose of this work is to develop a novel security framework for protecting smart grid SCADA systems against harmful network vulnerabilities or intrusions. Therefore, the proposed work is motivated to develop an intelligent meta-heuristic-based Artificial Intelligence (AI) mechanism for securing IoT-SCADA systems. The proposed framework includes the stages of dataset normalization, Zaire Ebola Search Optimization (ZESO), and Deep Random Kernel Forest Classification (DRKFC). First, the original benchmarking datasets are normalized based on content characterization and category transformation during preprocessing. After that, the ZESO algorithm is deployed to select the most relevant features for increasing the training speed and accuracy of attack detection. Moreover, the DRKFC technique accurately categorizes the normal and attacking data flows based on the optimized feature set. During the evaluation, the performance of the proposed ZESO-DRKFC method is validated and compared in terms of accuracy, detection rate, f1-score, and false acceptance rate. According to the results, it is observed that the ZESO-DRKFC mechanism outperforms other techniques with high accuracy (99%) by precisely spotting intrusions in the smart grid systems.
Exploration of Despair Eccentricities Based on Scale Metrics with Feature Sampling Using a Deep Learning Algorithm
The majority of people in the modern world struggle with depression as a result of the coronavirus pandemic's impact, which has adversely affected mental health without warning. Even though the majority of individuals are now protected, it is crucial to check for post-coronavirus symptoms if someone is feeling even a little lethargic. The recommended approach is designed to identify the post-coronavirus symptoms and attacks present in the human body. When a harmful virus spreads inside a human body, the post-diagnosis symptoms are considerably more dangerous, and if they are not recognised at an early stage, the risks increase. Additionally, if the post-symptoms are severe and go untreated, they might harm one's mental health. To prevent someone from succumbing to depression, audio prediction technology is employed to recognise all the symptoms and potentially dangerous signs. Various vocal characteristics are combined with machine-learning algorithms to determine each person's mental state. A separate device that detects audio attribute outputs is designed to evaluate the effectiveness of the suggested technique; compared to the previous method, the performance metric is better by roughly 67%.
A Classy Multifacet Clustering and Fused Optimization Based Classification Methodologies for SCADA Security
Detecting intrusions from the supervisory control and data acquisition (SCADA) systems is one of the most essential and challenging processes in recent times. Most of the conventional works aim to develop an efficient intrusion detection system (IDS) framework for increasing the security of SCADA against networking attacks. Nonetheless, they face problems of classification complexity, long training and testing times, and increased misprediction and error outputs. Hence, this research work intends to develop a novel IDS framework by implementing a combination of methodologies: clustering, optimization, and classification. The most popular and extensively utilized SCADA attack datasets are taken for the implementation and validation of the proposed IDS framework. The main contribution of this work is to accurately detect intrusions from the given SCADA datasets with minimized computational operations and increased classification accuracy. Additionally, the proposed work aims to develop a simple and efficient classification technique for improving the security of SCADA systems. Initially, dataset preprocessing and clustering are performed using the multifacet data clustering model (MDCM) in order to simplify the classification process. Then, the hybrid gradient descent spider monkey optimization (GDSMO) mechanism is implemented to select the optimal parameters from the clustered datasets, based on the global best solution. The main purpose of using the optimization methodology is to train the classifier with the optimized features to increase accuracy and reduce processing time. Moreover, the deep sequential long short-term memory (DS-LSTM) is employed to identify intrusions from the clustered datasets with efficient data model training. Finally, the proposed optimization-based classification methodology's performance and results are validated and compared using various evaluation metrics.
Distinctive Measurement Scheme for Security and Privacy in Internet of Things Applications Using Machine Learning Algorithms
The current Internet of Things (IoT) application trend makes far more data available, which can later be accessed through data storage platforms. An external storage space is required for practical purposes whenever a data storage platform is created. However, in the IoT, certain cutting-edge storage methods have been developed that compromise the security and privacy of data transfer processes. As a result, the suggested solution creates a standard mode of security operations for storing the data with little noise. One of the most distinctive findings in the suggested methodology is the incorporation of machine learning algorithms in the formulation of analytical representations. This integration ensures high-level quantitative measurement of data security and privacy. Because large amounts of data are transmitted, users are now able to assess the reliability of data transfer channels and the duration of queuing times, where each user can separate the specific data that has to be transferred. The created system is put to the test in real time using the proper metrics, and it is found that machine learning techniques improve security more effectively. Additionally, the accuracy for data security and privacy is maximized in 98 percent of the defined scenarios, and the predicted model outperforms the current method in all of them.
Interaction of Secure Cloud Network and Crowd Computing for Smart City Data Obfuscation
There can be many inherent issues in the process of managing cloud infrastructure and the cloud platform. The cloud platform manages cloud software services, legal issues in making contracts, and legal contract-based segmentation. In this paper, we tackle these issues directly with some feasible solutions. For these constraints, the Averaged One-Dependence Estimators (AODE) classifier and the SELECT Applicable Only to Parallel Server (SELECT-APSL ASA) method are proposed to separate the data related to the place. ASA is made up of the AODE and SELECT Applicable Only to Parallel Server. The AODE classifier is used to separate the data from smart city data based on the hybrid data obfuscation technique. The hybrid data obfuscation technique manages 50% of the raw data, and 50% of hospital data is masked using the proposed transmission. The analysis of energy consumption before applying the cryptosystem shows that about 71.66% of total packets are delivered, compared with existing algorithms. The analysis of energy consumption after applying the cryptosystem shows 47.34% consumption, compared to existing state-of-the-art algorithms. The average energy consumption before data obfuscation decreased by 2.47%, and the average energy consumption after data obfuscation was reduced by 9.90%. The analysis of the makespan time before data obfuscation shows a decrease of 33.71%, and the makespan time after data obfuscation decreased by 1.3% compared to existing state-of-the-art algorithms. These results show the strength of our methodology.
Prevention of Cyber Security with the Internet of Things Using Particle Swarm Optimization
High security for physical items such as intelligent machinery and residential appliances is provided via the Internet of Things (IoT). Each physical object is given a distinct online address, an Internet Protocol (IP) address, to communicate with the network's external entities through the Internet. IoT devices are in danger of security issues due to the surge in hacker attacks during Internet data exchange. Attack detection is essential if a security system is to remain reliable against such strong attacks. Attacks and abnormalities such as user-to-root (U2R), denial-of-service, and data-type probing could have an impact on an IoT system. This article examines various performance-based AI models to accurately predict attacks and problems with IoT devices. Particle Swarm Optimization (PSO), genetic algorithms, and ant colony optimization were used to demonstrate the effectiveness of the suggested technique with respect to four different parameters. The results of the proposed method employing PSO outperformed those of the existing systems by roughly 73 percent.
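As a hedged illustration of how PSO searches for good parameters in models like the attack predictors above, the following minimal inertia-weight swarm (a generic textbook configuration, not the article's exact setup) minimizes an arbitrary objective function:

```python
import random

def pso(objective, dim, n_particles=20, iters=60, seed=1):
    """Minimal particle swarm optimization: each particle remembers its own
    best position and is pulled toward the swarm's global best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive pull, social pull
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In an IDS setting the objective would score a candidate model configuration on validation data; here a simple sphere function suffices to show convergence.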
Handling tactful data in cloud using PKG encryption technique
With the availability of cloud storage services, users can store data in the cloud and share it with others. A cloud file may contain sensitive information that should not be exposed to others. One possible solution is to encrypt the whole file before sending it to the cloud, generate a signature to verify the encrypted file, and then upload both the encrypted file and the signatures. This keeps the sensitive information private, and only the owner can decrypt the file; however, it also makes the file unusable by others. Distributing the decryption key would let others access the file, but that is not an efficient solution in real scenarios. To overcome these drawbacks, an efficient solution is implemented in which the sensitive information is kept private while the file can be used without complications. Here, a sanitizer is used to sanitize the blocks that contain the sensitive information of the file. First, the user binds the data blocks corresponding to the sensitive information of the original data, generates a digital signature for each such block, and sends it to the sanitizer. The sanitizer filters these data blocks, sanitizes the data corresponding to the sensitive information, and transforms the corresponding signatures into valid ones. The signatures are used to prove that the cloud truly possesses the data blocks. A third-party auditor acts as a public verifier, checking the data stored in the cloud on behalf of the user. This method supports data sharing while the sensitive information is protected in the cloud.
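The PKG-based signature scheme itself is not reproduced here. As a toy sketch of the sanitizing idea, SHA-256 digests stand in for the real per-block signatures (an illustrative assumption only): the sanitizer masks the sensitive block and re-derives a valid digest for it, so a public verifier can still check what the cloud stores.

```python
import hashlib

def digest(block: bytes) -> str:
    """Stand-in for a per-block signature: a SHA-256 digest."""
    return hashlib.sha256(block).hexdigest()

def sanitize(blocks, sensitive_idx, mask=b"****"):
    """Replace the sensitive block with a mask and return the new block
    list together with updated per-block digests."""
    out = list(blocks)
    out[sensitive_idx] = mask
    return out, [digest(b) for b in out]

def verify(blocks, digests):
    """Public verifier: check stored blocks against their digests."""
    return [digest(b) for b in blocks] == digests
```

The real scheme uses cryptographic signatures that remain publicly verifiable after sanitization; the digest here only illustrates the block-level workflow.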
A security model for smart grid SCADA systems using stochastic neural network
Detection of cyber-threats in smart grid Supervisory Control and Data Acquisition (SCADA) systems remains one of the complex and essential processes that demands close attention at present. Typically, SCADA is prone to security issues due to its environmental problems and vulnerabilities. Therefore, the proposed work intends to design a new detection approach by integrating optimization and classification models for smart grid SCADA security. In this framework, min-max normalization is performed first for noise removal and attribute arrangement. Here, the correlation estimation mechanism is deployed to reduce the dimensionality of features by choosing the relevant features used for attack prediction. Moreover, the optimal features are selected by using the optimal solution provided by the Holistic Harris Hawks Optimization (H3O). Finally, the Perceptron Stochastic Neural Network (PSNN) is utilized to categorize the normal and attacking data flow in the network with minimal processing time and complexity. By using the combined H3O-PSNN technique, the detection accuracy is improved up to 99% for all datasets used in this study, and other measures such as precision (99.2%), recall (99%), and F1-score (99.2%) are also increased compared to the standard techniques.
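Leaving the H3O and PSNN stages aside, the preprocessing steps named above can be sketched directly. The Pearson-correlation filter below is a generic stand-in for the paper's correlation estimation mechanism; the 0.5 threshold is an illustrative assumption.

```python
def min_max_normalize(rows):
    """Scale each feature column to [0, 1] (the min-max normalization step)."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(r, lo, hi)] for r in rows]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def select_features(rows, labels, threshold=0.5):
    """Keep features whose |correlation| with the label exceeds the threshold."""
    cols = list(zip(*rows))
    return [i for i, c in enumerate(cols) if abs(pearson(c, labels)) >= threshold]
```

A feature that tracks the attack label survives the filter while an uninformative one is dropped, which is the dimensionality reduction the abstract describes.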
Efficient data transmission on wireless communication through a privacy-enhanced blockchain process
In the medical era, wearables often collect specific data points to check important measures like resting heart rate, ECG voltage, SpO2, sleep patterns (length, interruptions, and intensity), and physical activity (kind, duration, and levels). These digital biomarkers are created mainly through passive data collection from various sensors. The critical issues with this method are time and sensitivity. To solve this problem, we reviewed the newest wireless communication trends employed in hospitals using wearable technology, privacy, and blockchain. Based on sensors, this wireless technology controls the data gathered from numerous locations. In this study, the wearable sensor contains data from the various departments of the system. The gradient boosting method and the hybrid microwave transmission method have been proposed to find the location and convince people. The patient health decision has been submitted to hybrid microwave transmission using gradient boosting. This helps to trace mobile phones using calls from a threatening person, and the data is gathered from the database while tracing. From this concern, the data analysis process is based on decision-making. The data encountered is adapted in the statistical modeling of the system to produce exploratory data analysis that satisfies the data from the database. By removing unwanted data, the complete data is classified with a 97% outcome, improving to a 98% successful data classification.
An archetypal determination of mobile cloud computing for emergency applications using decision tree algorithm
Numerous users experience unsafe communications due to the growth of large network mediums, where no node communication is detected in emergency scenarios. Many people find it difficult to communicate in emergency situations as a result of such communications. In this paper, a mobile cloud computing procedure is implemented in the suggested technique in order to prevent such circumstances and make the data transmission process more effective. An analytical framework that addresses five significant minimization and maximization objective functions is used to develop the projected model. Additionally, all mobile cloud computing nodes are designed with strong security, ensuring that all the resources are allocated appropriately. In order to isolate all the active functions, the analytical framework is coupled with a machine learning method known as the decision tree. The suggested approach benefits society because all cloud nodes can extend their assistance in times of need at an affordable operating and maintenance cost. The efficacy of the proposed approach is tested in five scenarios, and the results of each scenario show that it is more effective than current case studies by an average of 86%.
Ensemble Learning by High-Dimensional Acoustic Features for Emotion Recognition from Speech Audio Signal
In the recent past, handling the high dimensionality demonstrated in the auditory features of speech signals has been a primary focus for machine learning (ML)-based emotion recognition. The incorporation of high-dimensional characteristics in training datasets during the learning phase of ML models leads contemporary approaches to emotion prediction to produce significant false alarms. The curse of the excessive dimensionality of the training corpus is addressed in the majority of contemporary models. Modern models, on the other hand, place a greater emphasis on merging many classifiers, which can only increase emotion recognition accuracy even when the training corpus contains high-dimensional data points. "Ensemble Learning by High-Dimensional Acoustic Features (EL-HDAF)" is an innovative ensemble model that leverages the diversity assessment of feature values spanned over diversified classes to recommend the best features. Furthermore, the proposed technique employs a one-of-a-kind clustering process to limit the impact of high-dimensional feature values. The experimental inquiry evaluates and compares emotion forecasting from spoken audio data against current methods that use machine learning for emotion recognition. Fourfold cross-validation is used for performance analysis with the standard data corpus.
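EL-HDAF's exact diversity assessment is not given in this summary; a simple stand-in that scores each feature by how far its per-class means spread apart conveys the idea of recommending features whose values differ across classes (the scoring rule itself is an assumption for illustration):

```python
def rank_by_class_spread(rows, labels):
    """Rank features by the spread of their per-class means; features whose
    class means lie far apart discriminate the classes better."""
    classes = sorted(set(labels))
    n_feats = len(rows[0])
    scores = []
    for f in range(n_feats):
        means = []
        for c in classes:
            vals = [r[f] for r, l in zip(rows, labels) if l == c]
            means.append(sum(vals) / len(vals))
        scores.append(max(means) - min(means))
    # Best-discriminating features first.
    return sorted(range(n_feats), key=lambda f: -scores[f])
```

A feature whose mean differs sharply between emotion classes ranks ahead of one whose distribution is the same in every class.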
An Efficient Mechanism for Deep Web Data Extraction Based on Tree-Structured Web Pattern Matching
The World Wide Web comprises huge web databases in which data are searched using web query interfaces. Generally, the World Wide Web maintains a set of databases to store several data records. The distinct data records are extracted by the web query interface as per user requests. The information maintained in the web database is hidden, and deep web content is retrieved even in dynamic script pages. Nowadays, web pages offer a huge amount of structured data needed by various modern web applications. The challenge lies in extracting complicated structured data from deep web pages. Deep web contents are generally accessed through web queries, but extracting the structured data from the web database is a complex problem. Moreover, making use of such retrieved information in combined structures needs significant effort. Few techniques have been established to address the complexity of extracting deep web data from various web pages. Although several methods for deep web data extraction have been offered, very little research addresses template-related issues at the page level. For effective web data extraction with a large number of online pages, a unique representation of page generation using tree-based pattern matching (TBPM) is proposed. The performance of the proposed TBPM technique is compared to that of existing techniques in terms of relativity, precision, recall, and time consumption. Improvements in relativity of about 17-26% are achieved compared to the FiVaTech approach.
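TBPM itself is not reproduced here. The following toy similarity over pages represented as (tag, children) trees illustrates what tree-structured pattern matching scores, under the simplifying assumption of positional child alignment (real matchers align children more flexibly):

```python
def tree_sim(a, b):
    """Similarity of two page trees given as (tag, [children]) tuples:
    matching tags score 1 plus the scores of positionally aligned children."""
    if a[0] != b[0]:
        return 0
    score = 1
    for ca, cb in zip(a[1], b[1]):  # simple positional alignment
        score += tree_sim(ca, cb)
    return score
```

Two data records generated from the same page template produce near-identical subtrees, so a high score identifies the repeating template region to extract.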
Design of Soccer League Optimization Based Hybrid Controller for Solar-Battery Integrated UPQC
Nowadays, the integration of renewable energy sources like solar and wind into the grid is encouraged to reduce losses and meet demand. However, the integration of these renewable sources, power electronic devices, and non-linear and unbalanced loads leads to power quality issues, which has motivated power researchers to develop new controllers and techniques. This paper develops a soccer league algorithm-based, optimally tuned hybrid controller for the unified power quality conditioner (UPQC) associated with solar power and battery-storage systems through a Boost converter and a Buck-Boost converter. The UPQC simultaneously performs the functions of both the shunt active power filter and the series active power filter. The proposed optimally designed controller combines the properties of a fuzzy logic controller and SOL-tuned proportional-integral controllers. The Kp and Ki values of the shunt and series controllers are treated as control variables, which are optimally tuned by SOL to satisfy the objective function. The key contributions of the proposed work are the reduction of total harmonics in current waveforms, thereby enhancing the power factor; quick action to maintain a constant DC-link capacitor voltage during solar irradiation variations; elimination of voltage sag/swell/large disturbances; and appropriate compensation for unbalanced networks and loads. The performance investigation of SLOHC was carried out with four test studies for different combinations of unbalanced/balanced loads and supply voltage of a 3-phase distribution network. Comparative analysis was carried out against standard methods such as a genetic algorithm, biogeography-based optimization, and proportional-integral controllers. The proposed method reduces the total harmonic distortion to 2.06%, 2.44%, 2.40%, and 2.32%, which are much lower than those of existing methods available in the literature. The design has been performed in MATLAB/Simulink.
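The SOL metaheuristic is not reproduced here. As a sketch of what "optimally tuning Kp and Ki against an objective function" means, the following uses random search (a stand-in for SOL) to minimize the integral of squared error (ISE) for a PI controller on a toy first-order plant; the plant model, search ranges, and trial count are all illustrative assumptions.

```python
import random

def ise(kp, ki, setpoint=1.0, dt=0.01, steps=500):
    """Integral of squared error for a PI controller driving the
    first-order plant dy/dt = -y + u (Euler integration)."""
    y, integ, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt
        u = kp * err + ki * integ   # PI control law
        y += (-y + u) * dt          # plant step
        cost += err * err * dt
    return cost

def tune(trials=200, seed=3):
    """Random search over (Kp, Ki), standing in for the SOL metaheuristic."""
    rng = random.Random(seed)
    best = (1.0, 0.0)
    best_cost = ise(*best)
    for _ in range(trials):
        cand = (rng.uniform(0, 20), rng.uniform(0, 20))
        c = ise(*cand)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost
```

The tuned gains drive the steady-state error to zero and shrink the ISE well below the untuned proportional-only baseline, which is the role SOL plays for the shunt and series controllers in the paper.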
FogDedupe: A Fog-Centric Deduplication Approach Using Multi-Key Homomorphic Encryption Technique
The advancements in communication technologies and a rapid increase in the usage of IoT devices have resulted in an increased data generation rate. Storing, managing, and processing large quantities of unstructured data generated by IoT devices remain a huge challenge to cloud service providers (CSPs). To reduce the storage overhead, CSPs implement deduplication algorithms on the cloud storage servers to identify and eliminate redundant data blocks. However, implementing post-process deduplication schemes does not address the bandwidth issues. Also, existing convergent key-based deduplication schemes are highly vulnerable to confirmation-of-file attacks (CFA) and can leak confidential information. To overcome these issues, FogDedupe, a fog-centric deduplication framework, is proposed. It performs source-level deduplication on the fog nodes to reduce the bandwidth usage and post-process deduplication to improve the cloud storage efficiency. To perform source-level deduplication, a distributed index table is created and maintained in the fog nodes, and post-process deduplication is performed using a multi-key homomorphic encryption technique. To evaluate the proposed FogDedupe framework, a testbed environment is created using the open-source Eucalyptus v.4.2.0 software and the fog project v1.5.9 package. The proposed scheme tightens the security against CFA attacks, reduces the storage overhead by 27%, and reduces the deduplication latency by 12%.
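FogDedupe's distributed index table and multi-key homomorphic encryption are not reproduced here. The source-level half of the idea, hash each block and transfer only blocks whose digest the index has not seen, can be sketched as follows; the block size and the in-memory dictionary standing in for the fog node's index are illustrative assumptions.

```python
import hashlib

class DedupIndex:
    """Source-level deduplication sketch: hash each fixed-size block and
    upload only blocks whose digest is not already in the index."""

    def __init__(self, block_size=4):
        self.block_size = block_size
        self.index = {}  # digest -> block bytes (stands in for the fog index table)

    def store(self, data: bytes):
        """Split data into blocks; return how many blocks actually transfer."""
        uploaded = 0
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.index:
                self.index[digest] = block
                uploaded += 1
        return uploaded
```

Repeated blocks, whether within one file or across files, hit the index and never consume bandwidth, which is the saving the fog-side stage provides.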
Prophetic Energy Assessment with Smart Implements in Hydroelectricity Entities Using Artificial Intelligence Algorithm
An encouraging development is the quick expansion of renewable energy extraction. Harnessing renewable energy is economically feasible at the current rate of technological advancement. Traditional energy sources, such as coal, petroleum, and hydrocarbons, which have negative effects on the environment, are coming under more social and financial pressure. Companies need more solar and wind power, which calls for a well-balanced mix of renewable resources and a higher proportion of alternative energy sources. Sustainable energy can be captured using a variety of techniques; large-scale and small-scale installations are the two most prevalent. No renewable energy source possesses an inherent property that restricts how it may be managed or how it can be planned to produce electricity. A number of factors have contributed to the growth in the use of alternative sources, one of which is mitigating the effects of rising temperatures. To improve the ability to estimate renewable energy, various modeling approaches have been created. A region might use an HRES to combine several different energy sources. The inventiveness of solar and wind power and the remarkable ability of neural networks to handle complex time-series signals have both aided the prediction of sustainable energy. Therefore, this research examines numerous information models in order to determine which proposed models can provide accurate projections of renewable energy output, such as sunlight, wind, or pumped storage. In the field of sustainable energy prediction, a number of machine learning methods, such as multilayer perceptrons (MLP), RNN, CNN, and LSTM designs, are frequently utilized. This form of modeling uses historical data to predict potential values and can forecast short-term patterns in solar and wind generation.
Empirical Compression Features of Mobile Computing and Data Applications Using Deep Neural Networks
Due to the enormous data sizes involved in mobile computing and multimedia data transfer, it is possible that more data traffic may be generated, necessitating the use of data compression. As a result, this paper investigates how mobile computing data are compressed under all transmission scenarios. The suggested approach integrates deep neural networks (DNN) at high weighting functionalities for compression modes. The proposed method employs appropriate data loading and precise compression ratios for successful data compression. The accuracy of multimedia data that must be conveyed to various users is higher even though compression ratios are higher. The same data are transferred at significantly higher compression ratios, which save time while also minimizing data mistakes that may occur at the receiver. The DNN process also includes a visible parameter for handling high data-weight situations. The visible parameter optimizes the data results, allowing simulation tools to readily observe the compressed data. A comparison case study was created for five different scenarios in order to confirm the results, and it shows that the suggested strategy is significantly more effective than existing methods in roughly 63 percent of the cases.
Mathematical Framework for Wearable Devices in the Internet of Things Using Deep Learning
To avoid dire situations, the medical sector must develop various methods for quickly and accurately identifying infections in remote regions. The primary goal of the proposed work is to create a wearable device that uses the Internet of Things (IoT) to carry out several monitoring tasks. To decrease the amount of communication loss as well as the amount of time required to wait before detection and improve detection quality, the designed wearable device is also operated with a multi-objective framework. Additionally, a design method for wearable IoT devices is established, utilizing distinct mathematical approaches to solve these objectives. As a result, the monitored parametric values are saved in a different IoT application platform. Since the proposed study focuses on a multi-objective framework, state design and deep learning (DL) optimization techniques are combined, reducing the complexity of detection in wearable technology. Wearable devices with IoT processes have even been included in current methods. However, a solution cannot be duplicated using mathematical approaches and optimization strategies. Therefore, developed wearable gadgets can be applied to real-time medical applications for fast remote monitoring of an individual. Additionally, the proposed technique is tested in real-time, and an IoT simulation tool is utilized to track the compared experimental results under five different situations. In all of the case studies that were examined, the planned method performs better than the current state-of-the-art methods.
Optimized PV fed sensorless BLDC motor control system using Q-recurrent adaptive controller and Levy-enhanced circular search mechanisms
Sensorless BLDC motor speed and torque control finds a wide range of applications in electric vehicles, renewable power systems, industrial automation, and other places where efficient and reliable operation is required. The most common conventional control techniques are PID and ANFIS controllers, which generally suffer performance limitations under varying load conditions, complexity, and sensitivity to parameter variations. The goal of this paper is to develop a smart controller for a solar PV-fed sensorless BLDC motor. It combines a Q-Recurrent Adaptive Motor Controller (Q-RAMC) control mechanism with Levy-Enhanced Circular Search (LECS) for mapping function estimation. The novelty of this work rests on the combined effect of a solar PV system with advanced control techniques for sensorless BLDC motor operation. Indeed, the developed method outperforms several existing control methods, yielding superior results in the form of smaller torque variations, a smoother speed profile, and improved dynamic response. The simulation results show that the proposed approach significantly enhances motor control and outperforms conventional PID and ANFIS controllers. The system reduces torque ripple by 3.10%, which allows smoother torque delivery, while overall efficiency reaches up to 99%, which represents excellent energy utilization. Also, the speed transient time decreases by 1.5 s and the rise time shortens by 0.5 s, which indicates a faster dynamic response and better control precision.
Plant disease detection using a hybrid dilated CNN with attention mechanisms and optimized mask RCNN segmentation
Agriculture plays a major role in human life, and most people are involved in some kind of agricultural activity, either directly or indirectly. Moreover, the agricultural sector plays a major role in supplying better-quality food and thus contributes greatly to population and economic growth. However, crop disease hampers the growth of the affected species, which calls for earlier diagnosis of plant disease using an adequate automatic detection approach to improve the quality of food production and reduce economic loss. Conventional systems lack techniques for identifying disease across diverse crops in the agricultural environment. In modern times, deep learning approaches have achieved tremendous improvements in image categorization and object detection. For precise detection of plant disease, an improved classification model is developed. Initially, plant images are aggregated from a standard, publicly available database. The gathered images are segmented using Dilated, Adaptive, and Attention-based Mask Recurrent Convolutional Neural Networks (DAA-MRCNN). They are then fed into a hybrid classification phase, where a new model, the Dilated, Adaptive, and Attention-based Multiscale DenseNet (DAA-MDeNet), performs classification. Classifier performance is improved by optimizing the parameters of the Mask RCNN and Multiscale DenseNet using a hybrid optimization algorithm named the African Vulture and Lemur Optimizer (AVLO). Compared with other models, the proposed model shows superior performance.
In this paper, the need for biometric authentication with synthetic data is analyzed to increase the security of data in transmission systems. As more biometric patterns are represented, the complexity of recognition changes, and low-security features are enabled in the transmission process. Hence, security is increased using image biometric patterns, where synthetic data is created with an explainable artificial intelligence technique so that appropriate decisions can be made. Further, sample data is generated in each case so that all changing representations are minimized while the original image set values increase. Moreover, the data flows for each identified biometric pattern are increased, where partial decisive strategies are followed in the proposed approach. Furthermore, the complete interpretabilities present in the captured images or biometric patterns are reduced; thus, the generated data is maximized for all end users. To verify the outcome of the proposed approach, four scenarios with comparative performance metrics are simulated; the comparative analysis finds that the proposed approach is less robust and complex at rates of 4% and 6%, respectively.
Digital Twin and IoT for Smart City Monitoring
This study involves the integration of wireless technologies to enable the creation of digital twin representations for the purpose of monitoring smart cities. The analytical representations are created through the utilization of advanced technological resources, facilitating a comparison between the original values and reference values. Consequently, error measurements at each time step index are reduced when a greater number of active messages are present, and the created twins are transmitted in a secure manner. The primary objective of digital twins within the context of smart cities is to enhance economic progress and facilitate prompt decision-making with respect to situational awareness. To this end, the Internet of Things (IoT) is leveraged for the purpose of monitoring and recording states. Furthermore, the integration of the Constrained Application Protocol (CoAP) with the digital twin serves to reduce the frequency of packet exchange and retransmission, thereby enhancing the success rate to 97%. The test results include a comparative analysis of five different scenarios, which demonstrate that the proposed method yields a reduction in the performance of inactive twins to less than 1% when compared to the existing approach.
This article examines the operational functionality of intelligent transport systems to enhance smart cities by reducing traffic congestion. Given the increasing populations of smart cities, there is a growing demand for public transit systems to address the issue of traffic congestion. Therefore, the suggested system is developed using a few parametric design models, which combine point-to-point protocol and mode control optimization. The multi-objective parametric design for a smart transportation system is conducted using min-max functions to minimize the waiting time for end users. Furthermore, customers are given the option to utilize a line-following mechanism that offers suitable connectivity, along with independent identification and revitalization functions. The predicted model effectively eliminates the delay produced by transportation devices when positioning units are involved, ensuring that individual messages are delivered without any interruptions. In order to evaluate the results of the proposed system model, four different scenarios were examined. A comparison analysis revealed that the suggested method achieves a suitable directional flow for 96% of smart transport units. Additionally, it reduces delays and waiting periods by 2% and 6%, respectively, while increasing energy consumption by 29%.
The proliferation of integrated sensing techniques in Sixth Generation (6G) networks is an increasingly significant aspect in facilitating efficient end-to-end communication for all users. The suggested methodology employs a digital signal processed with terahertz bandwidth to assess the impact of 6G networks. The primary focus lies in the design of 6G networks, emphasizing key parameters such as interference, loss, signal strength, signal-to-noise ratio, and dual-band channels. The aforementioned factors are combined with two machine learning algorithms in order to determine the extent of spectrum sharing among all available resources. The suggested approach for detecting signals in the terahertz communication spectrum is evaluated using 10 devices across four situations, which involve interference, signal loss, strength, and time margins for integrated sensing. The assumptions are based on signal processing devices operating within millimeter waves ranging from 5 to 10 terahertz. Interference and losses in the specified spectrum are seen to be less than 1%, but the time margin for integrated sensing with 99% maximized signal intensity remains at 85%.
“Palisade”—A Student Friendly Social Media Website
In this application, we have developed a website where users can sign up, log in, and share their thoughts, events at their colleges or companies, and interview experiences, as well as like and comment on posts. A user can also follow or unfollow other users to get their updates. Finding blood donors in difficult times is very hard, so the website provides a list of blood donors with their details, which is very helpful for people who need blood urgently: there is a good number of donors, but they are difficult to find, and this website solves that issue. Users can also create or join chat rooms and discuss things with people around the world.
An Appraisal of Cyber-Attacks and Countermeasures Using Machine Learning Algorithms
In this computerized era, cyber-attacks have become quite common. Every year, the number of cyber-attacks escalates, and so does the severity of the harm. In today's digital environment, ensuring security against cyber-attacks has become important. Networking is becoming more sophisticated over time, and as the popularity of a successful technology grows, intrusion detection system security issues grow as well. There is a strong necessity for a solid defense in today's cyber world. New attacks and malware pose a great challenge to the security community. Various machine learning techniques are being used in many intrusion detection systems to counter such attacks. Machine learning can learn on its own with minimal human interaction. Hence, it is vital to call further attention to security concerns and the associated machine learning defensive strategies, which inspires this paper's complete survey. A thorough survey of diverse machine learning algorithms has been conducted in this paper to determine which algorithm is best suited for a specific attack; these techniques have been examined and compared in terms of their accuracy in detecting attacks.
A Comprehensive Study on Eucalyptus, Open Stack and Cloud Stack
Cloud computing remains a much-discussed topic in the field of application deployment, data storage, operational efficiency, cost savings, and high performance. Increased storage capacity, automation, suppleness, flexibility, and scalability are its key factors. Choosing an appropriate cloud platform can be very difficult, as every platform has advantages and disadvantages. In our research, we therefore compare the attributes of Eucalyptus, OpenStack, and CloudStack, three open-source cloud computing software platforms that perform efficiently with high scalability and can go hybrid with AWS to develop applications. This comparison summarises the efficiency, scalability, and services of the cloud platforms, and the paper focuses heavily on their pros and cons in terms of efficiency, storage, and usage parameters. Since conserving user data in the cloud should be cost-efficient, this comparison of tools gives users an eagle-eye view of the major cloud computing platforms.
A Novel Trust Evaluation and Reputation Data Management Based Security System Model for Mobile Edge Computing Network
In recent years, Mobile Edge Computing (MEC) has arisen as a new computing platform that pushes computational power to the edge of the Internet, close to end users, and a rising number of scholars are performing various sorts of research within the framework of edge devices. The security risks to resource consumers are elevated because edge computing, in contrast to cloud computing, frequently lacks a centralized security mechanism. In this research, we therefore focus on creating a reliable trust evaluation mechanism that overcomes these security concerns and enables MEC successfully. Because of their limited capacity for data storage and processing, edge devices help pave the way for the emerging edge computing paradigm: reputation data is processed locally on the devices, with only the necessary data transferred to the Cloud, which improves reliability and reduces the load on the network as a whole. There is a lack of trust amongst devices in the IoT because of the inherent security threats and attacks they face. To mitigate this threat, we offer a lightweight trust management model that oversees a device's trustworthiness, the trustworthiness of its service levels, and the Quality of Service (QoS) those levels provide. To determine an aggregate level of trust, the model uses QoS characteristics as weights when evaluating the trust of devices. The improved outcomes of the QoS-parameterized trust management model suggest that it may be useful for detecting malicious edge nodes in edge computing networks, with practical applications in industry.
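The paper's aggregation formula is not given in the abstract, but the idea of using QoS characteristics as weights can be sketched as a normalised weighted average over per-parameter trust scores. The parameter names and weights below are illustrative assumptions, not values from the paper.

```python
def aggregate_trust(qos_scores, weights):
    """Combine per-parameter QoS trust scores (each in [0, 1]) into a
    single trust value using a normalised weighted average."""
    if len(qos_scores) != len(weights):
        raise ValueError("one weight per QoS parameter is required")
    total_weight = sum(weights)
    return sum(s * w for s, w in zip(qos_scores, weights)) / total_weight

# Hypothetical QoS parameters: latency, reliability, throughput.
trust = aggregate_trust([0.9, 0.8, 0.6], [0.5, 0.3, 0.2])
```

A device whose high-weight parameters score well dominates the aggregate, which is the intended effect of QoS-weighted trust.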
Prognosis of urban environs using time series analysis for preventing overexploitation using artificial intelligence
In the process of urban environment, the optimisation of network enactment is shifted from operation to maintenance and monitoring stage. During such conversion it is necessary to indicate the time series representation for preventing the overexploitation problem that happens due to more number of natural resources. It is necessary to use a set of historical data to check the behaviour of current state operations at varying time periods using an intelligent optimiser. Thus this study explores the implementation of time series analysis using artificial intelligence (AI) where accurate predictions are made in the entire urban environment even with big edifices. The major difference that is observed in the proposed method as compared to existing method is that two different boundary regions are chosen with distinct point values and only in two directions the monitoring device is installed. Since AI is involved in the entire process entire characteristics on forecasting current state procedure is represented using modified evolutionary optimisation (MEO) which observes entire biological nature of neighbouring environs. Additionally comparison analysis is made using MATLAB with five case studies where the proposed method proves to be much effective for about 70% as compared to existing models.
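As a minimal illustration of using historical data to predict current-state behaviour, a one-step moving-average forecast is sketched below. This is a generic time-series baseline, not the paper's MEO optimiser, and the resource-usage figures are invented.

```python
def moving_average_forecast(series, window=3):
    """One-step-ahead forecast: the mean of the last `window` observations."""
    if len(series) < window:
        raise ValueError("series shorter than window")
    return sum(series[-window:]) / window

# Hypothetical monthly resource-usage readings for one urban zone.
usage = [120, 130, 125, 140, 150, 145]
next_value = moving_average_forecast(usage)
```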
Future transportation computing model with trifold algorithm for real-time multipath networks
Purpose: In the past ten years, research on Intelligent Transportation Systems (ITS) has advanced tremendously in everyday situations to deliver improved performance for transport networks. To prevent problems with vehicular traffic, it is essential that alarm messages be delivered on time. An ITS may itself be built on a vehicular ad hoc network (VANET), an extension of a wireless network; as a result, a previously discovered path between two nodes might be destroyed within a short period of time. Design: The Time-delay-based Multipath Routing (TMR) protocol presented in this research efficiently determines a route that delivers packets to the target vehicle with the least time delay. Using the TMR method, data flow is reduced, especially for routine communication, so there are few packet retransmissions. Findings: To demonstrate how effective the suggested protocol is, several other protocols, including AOMDV, FF-AOMDV, EGSR, QMR, and ISR, have been used to evaluate the TMR. Simulation outcomes show how well our suggested approach performs compared with these alternatives. Originality: Our method accomplishes two objectives. First, it increases the speed of data transmission, quickly transferring data packets, especially warning messages, to the target vehicle and thereby preventing vehicular problems such as automobile accidents. Second, it relieves network stress and minimizes network congestion and data collisions.
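The TMR protocol itself is not specified in the abstract, but the core task of selecting the route with the least total delay can be illustrated with Dijkstra's algorithm over per-link delays. The topology and delay values below are hypothetical.

```python
import heapq

def min_delay_path(graph, src, dst):
    """Dijkstra's algorithm over link delays: returns (total_delay, path).
    graph: {node: [(neighbour, delay), ...]}"""
    pq = [(0, src, [src])]
    seen = set()
    while pq:
        delay, node, path = heapq.heappop(pq)
        if node == dst:
            return delay, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, d in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (delay + d, nxt, path + [nxt]))
    return float("inf"), []

# Toy vehicular topology with per-link delays (ms).
net = {"A": [("B", 5), ("C", 2)], "B": [("D", 4)], "C": [("B", 1), ("D", 9)], "D": []}
best = min_delay_path(net, "A", "D")
```

In a real VANET the link delays would be re-estimated continuously, since paths can break over short time scales as the abstract notes.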
A new probabilistic relevancy classification (PRC) based intrusion detection system (IDS) for SCADA network
Detecting and identifying intrusions in a network is a challenging research area in the network security domain. Intrusion detection has long played an essential role in computer network security. An Intrusion Detection System (IDS) is mainly used to detect unauthorized access to a computer system or network, and it is capable of detecting all types of malicious and harmful attacks. The drawbacks of existing IDSs are that they detect only known attacks, they produce a large number of false alarms due to the unpredictable behaviour of users and networks, and they require extensive training sets in order to characterize the normal behaviour of the nodes. To overcome these issues, an integration of the Hidden Markov Model (HMM) and the Relevance Vector Machine (RVM), namely Probabilistic Relevance Classification (PRC), is proposed to detect intrusions in a Supervisory Control and Data Acquisition (SCADA) network. Here, the power system attack dataset is used to detect the attacks in a SCADA network. In the preprocessing stage, the given data is preprocessed to segregate the relays as R1, R2, R3, and R4; each relay contains the date, timestamp, control panel log report, relay log report, snort log report, marker, fault location, and load condition information. The Boyer-Moore (BM) technique is then employed to perform the string matching operation, after which the PRC technique is implemented to classify each attack as known or unknown. The novelty of this paper is that it manually trains the data and features for unknown attacks. The main intention of this work is to reduce the feature set and the size of the database while increasing the detection rate. The experimental results evaluate performance in terms of False Acceptance Rate (FAR), False Rejection Rate (FRR), Genuine Acceptance Rate (GA), sensitivity, specificity, accuracy, error rate, and recall.
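The Boyer-Moore string matching step can be sketched with the bad-character heuristic alone, a common simplification of full Boyer-Moore. The log text searched here is invented for illustration.

```python
def bm_search(text, pattern):
    """Boyer-Moore search using the bad-character heuristic only;
    returns the index of the first match or -1."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return -1
    # Rightmost position of each character in the pattern.
    last = {c: i for i, c in enumerate(pattern)}
    s = 0
    while s <= n - m:
        j = m - 1                      # compare right-to-left
        while j >= 0 and pattern[j] == text[s + j]:
            j -= 1
        if j < 0:
            return s                   # full match at shift s
        # Skip ahead based on where the mismatched character last occurs.
        s += max(1, j - last.get(text[s + j], -1))
    return -1

idx = bm_search("relay log report: snort alert", "snort")
```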
An appraisal on security challenges and countermeasures in smart grid
The smart grid is defined as a two-way power grid that provides easy monitoring and maintenance, with self-healing capability, for the future generation. It modernizes everything in the electrical grid between any point of generation and any point of consumption. With the addition of advanced capabilities such as flexibility, reliability, and easy maintenance, the normal electrical grid becomes a smart grid. It can intelligently integrate the actions of all users connected to it, and it provides specific functionality such as connecting generators of different sizes and using automation functions to improve reliability and stability. Through the deployment of smart meters, consumers can also choose their supply. In spite of all these advanced technologies, the system is highly vulnerable to security attacks; hence, cyber security is all the more necessary for the smart grid. In this paper, a survey of the different kinds of attacks on the smart grid is presented.
WITHDRAWN: Wireless sensor network and IoT based systems for healthcare application
Internet of Things (IoT) for healthcare applications is an emerging research field that has gained much attention in the last few years. The paper presents an IoT remote healthcare monitoring system that reports the patient's condition through a Web browser. For simulation purposes, we use Contiki OS with the 6LoWPAN protocol stack and Cooja, the built-in Contiki simulator. CoAP is selected as the application-level protocol for remote data access and representation.
RETRACTED: Hasanin et al. Exploration of Despair Eccentricities Based on Scale Metrics with Feature Sampling Using a Deep Learning Algorithm. Diagnostics 2022, 12, 2844
The journal retracts the article titled “Exploration of Despair Eccentricities Based on Scale Metrics with Feature Sampling Using a Deep Learning Algorithm” [...]
Unveiling CyberFortis: A Unified Security Framework for IIoT-SCADA Systems with SiamDQN-AE FusionNet and PopHydra Optimizer
Protecting Supervisory Control and Data Acquisition-Industrial Internet of Things (SCADA-IIoT) systems against intruders has become essential, since industrial control systems now oversee critical infrastructure and cyber attackers increasingly target them. Because they connect physical assets with digital networks, SCADA-IIoT systems face substantial risks from multiple attack types, including Distributed Denial of Service (DDoS), spoofing, and more advanced intrusion methods. Previous research in this field offers insufficient solutions, as current intrusion detection systems lack the accuracy, scalability, and adaptability needed for IIoT environments. This paper introduces CyberFortis, a novel cybersecurity framework aimed at detecting and preventing cyber threats in SCADA-IIoT systems. CyberFortis presents two key innovations: first, the Siamese Double Deep Q-Network with Autoencoders (SiamDQN-AE) FusionNet, which enhances intrusion detection by combining deep Q-networks with autoencoders for improved attack detection and feature extraction; and second, the PopHydra Optimizer, an innovative approach to computing reinforcement learning discount factors for better model performance and convergence. This combination yields a system that detects different types of attacks more effectively and adapts to new challenges. CyberFortis outperforms current state-of-the-art attack detection systems, showing higher scores in accuracy, precision, recall, and F1-score on the CICIoT2023, UNSW-NB15, and WUSTL-IIoT datasets. The proposed framework achieves a 97.5% accuracy rate, indicating its potential as an effective solution for SCADA-IIoT cybersecurity against emerging threats. The research confirms that the proposed security and resilience methods are successful in protecting vital industrial control systems within their operational environments.
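The PopHydra Optimizer's method for computing discount factors is not described in the abstract; the sketch below only illustrates where the discount factor gamma enters a standard tabular Q-learning update, which such an optimizer would tune. The states, actions, and rewards are toy values.

```python
def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    gamma controls how strongly future value propagates back."""
    best_next = max(q[next_state].values()) if q[next_state] else 0.0
    td_target = reward + gamma * best_next
    q[state][action] += alpha * (td_target - q[state][action])
    return q[state][action]

# Toy two-state table; "a" is the only action.
q = {"s0": {"a": 0.0}, "s1": {"a": 1.0}}
new_q = q_update(q, "s0", "a", reward=1.0, next_state="s1")
```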
A zero-shot LLM framework for multimodal grievance classification, urgency scoring, and abuse detection in civic feedback systems
A unified model is presented for civic grievance redressal, integrating multimodal complaint intake, zero-shot semantic routing, sentiment-derived urgency estimation, and behavior-sensitive abuse detection within a scalable microservice architecture. The framework consolidates components that are typically handled independently by combining transformer-based text processing, CTC-enabled speech transcription, affective-intensity modeling, and longitudinal user-behavior analysis into a coherent decision pipeline. Typed and spoken complaints are projected into a shared semantic representation using a MobileBERT zero-shot classifier, while a recurrent neural network trained with Connectionist Temporal Classification (CTC) provides robust transcription of multilingual and dialect-rich voice submissions. Urgency indicators obtained from lexicon-based sentiment analysis are incorporated into time-aware escalation logic, and abuse mitigation integrates toxicity scores with a repetition-weighted behavioral model to identify and regulate systematic misuse. The platform operates as a containerized microservice ecosystem with WebSocket-enabled real-time updates and AES-encrypted data storage. Experiments conducted on a 1000-sample multimodal dataset show consistent performance, including 92.4% routing accuracy, 0.041 MAE in urgency estimation, 96.2% toxicity precision, 96.8% SLA compliance, and sub-150 ms end-to-end latency. These outcomes indicate suitability for deployment in linguistically diverse and resource-constrained civic environments. Planned extensions include enhanced multilingual ASR, adversarially robust toxicity modeling, and incorporation of image-based grievance modalities.
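Lexicon-based urgency estimation can be sketched as averaging the weights of urgency-bearing words found in a complaint. The lexicon and its weights below are toy assumptions, not the system's actual resources.

```python
# Hypothetical urgency lexicon: word -> urgency weight in [0, 1].
URGENCY_LEXICON = {"urgent": 1.0, "immediately": 0.9, "danger": 0.9,
                   "broken": 0.5, "delay": 0.4, "please": 0.1}

def urgency_score(complaint):
    """Average the urgency weights of lexicon words found in the text;
    returns 0.0 when no lexicon word is present."""
    words = [w.strip(".,!?") for w in complaint.lower().split()]
    hits = [URGENCY_LEXICON[w] for w in words if w in URGENCY_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

score = urgency_score("Water main broken, please fix immediately")
```

In the described pipeline this score would feed time-aware escalation logic rather than be used directly.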
Aim: This study illustrates the significance of transport units in monitoring diverse paths using a critical system model. The suggested method identifies proficiency and framework patterns that evolve across different time intervals, utilising machine learning optimisation that incorporates sequence learning with interconnected neural networks. Background: As an increasing number of cars are interconnected for data communication to indicate available routes, suitable connectivity between transportation units is essential. This study may facilitate intelligent connectivity across transportation units by employing essential shifts without compromising the efficiency of connected units. Objective: This study aimed to integrate parametric design representations with neural networks to address the primary goal of min-max functions, thereby enhancing the efficiency of transportation units. Method: The method presented here employs sequenced learning patterns to select the shortest path while rapidly altering pathway representations. Results: Alterations in pathways influenced by emissions have been noted and excluded from connectivity units to enhance the overall lifetime of transportation units in the projected model. Conclusion: The results have been examined through a simulation framework encompassing four scenarios, wherein potential connectedness has enhanced both the proficiency rate and the structure while minimising the shifts. A comparison of the proposed method with the existing methodology, assessing total efficiency, has shown the proposed method to raise efficiency to 95%, whereas the existing strategy yields a lower efficiency of 86%.
Author Correction: Ensemble machine learning framework for predicting maternal health risk during pregnancy
Correction to: Scientific Reports https://doi.org/10.1038/s41598-024-71934-x, published online 14 September 2024. The Funding section in the original version of this Article was omitted. The Funding section now reads: “This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University (KAU), Jeddah, Saudi Arabia has funded this project, under grant no. (GPIP: 1834-611-2024). The authors, therefore, acknowledge with thanks DSR for technical and financial support.” The original Article has been corrected.
Maternal health risks can cause a range of complications for women during pregnancy. High blood pressure, abnormal glucose levels, depression, anxiety, and other maternal health conditions can all lead to pregnancy complications, and proper identification and monitoring of risk factors can help reduce them. The primary goal of this research is to use real-world datasets to identify and predict Maternal Health Risk (MHR) factors. To that end, we developed and implemented the Quad-Ensemble Machine Learning framework to predict Maternal Health Risk Classification (QEML-MHRC). The methodology used a variety of Machine Learning (ML) models, which were then integrated with four ensemble ML techniques to improve prediction. The dataset, collected from various maternity hospitals and clinics, was subjected to nineteen training and testing experiments. According to the exploratory data analysis, the most significant risk factors for pregnant women include high blood pressure, low blood pressure, and high blood sugar levels. The study proposes a novel approach to dealing with high-risk factors linked to maternal health, and class-specific performance is elaborated further to properly understand the distinction between high, low, and medium risks. All tests yielded outstanding results when predicting the level of risk during pregnancy. In terms of class performance, the "HR" class outperformed the others, with 90% predicted correctly. GBT with ensemble stacking outperformed the rest, demonstrating remarkable performance on all evaluation measures (0.86) across all classes in the dataset. A key strength of the models used in this work is the ability to measure model performance using a class-wise distribution. The proposed approach can help medical experts assess maternal health risks, saving lives and preventing complications throughout pregnancy.
The prediction approach presented in this study can detect high-risk pregnancies early on, allowing for timely intervention and treatment. This study’s development and findings have the potential to raise public awareness of maternal health issues.
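The class-wise performance measurement emphasised above can be sketched as per-class recall over predicted risk labels. The labels below are hypothetical examples using the paper's HR/MR/LR-style classes.

```python
def class_wise_recall(y_true, y_pred):
    """Per-class recall: of all samples whose true label is c,
    what fraction were predicted as c?"""
    recall = {}
    for c in set(y_true):
        total = sum(1 for t in y_true if t == c)
        correct = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        recall[c] = correct / total
    return recall

# Hypothetical risk labels: HR = high, MR = medium, LR = low.
truth = ["HR", "HR", "MR", "LR", "HR", "MR"]
preds = ["HR", "HR", "LR", "LR", "HR", "MR"]
scores = class_wise_recall(truth, preds)
```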
Hyper spectral image classifications for monitoring harvests in agriculture using fly optimization algorithm
Many cutting-edge technologies for agricultural applications are not being employed by farmers, for a number of reasons, including the fact that each piece of designed equipment is manufactured for a specific utilisation mechanism. In contrast, the use of hyperspectral remote sensing techniques is expanding to deliver more valuable data at lower cost. The hyperspectral images are created to operate in different locations using different band topologies, making the proposed model more practical and effective with the existing spectrum elements. Since flies' movements can be used to acquire hyperspectral images with straight-line perception, the proposed method makes use of the bio-inspired Fly Optimization Algorithm (FOA). The functional efficacy, loss prevention, and error prevention of the FOA, averaging 83 percent, show that it is far better than current practices.
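A minimal sketch of the fly-optimisation idea (flies take random "smell" steps around the swarm, which then relocates to the best fly) is shown below, minimising a toy sphere function. All hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
import random

def foa_minimise(f, dim=2, flies=20, iters=100, step=1.0, seed=42):
    """Minimal Fly Optimisation sketch: each fly takes a random step
    around the swarm position; the swarm relocates to the best fly."""
    rng = random.Random(seed)
    swarm = [rng.uniform(-10, 10) for _ in range(dim)]
    best_pos, best_val = list(swarm), f(swarm)
    for _ in range(iters):
        for _ in range(flies):
            cand = [x + rng.uniform(-step, step) for x in swarm]
            val = f(cand)
            if val < best_val:
                best_pos, best_val = cand, val
        swarm = list(best_pos)          # swarm follows the best "smell"
    return best_pos, best_val

sphere = lambda v: sum(x * x for x in v)
pos, val = foa_minimise(sphere)
```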
Improved Security for Multimedia Data Visualization using Hierarchical Clustering Algorithm
In this paper, a realization technique is designed with a unique analytical model for transmitting multimedia data to appropriate end users. Transmission of multimedia data to all end users through a variety of visualization methods is the foundation of future computer systems. Yet highly limited system resources prevent the methods used to manage multimedia data from being updated; hence, a high-end visualization technique in which uncertainties are eliminated is required for the visualization process in a multimedia system. The suggested system therefore incorporates a clustering technique within an analytical framework to ensure a high degree of transmission for all multimedia data. The technical contribution of the proposed method lies in a multimedia visualization process with strong security features, achieved by including the necessary parametric relationships such as jitter occurrence, data density points, time period, multimedia storage, data smoothness, and distance. For the established parametric relationships, the validation methodology is integrated with a hierarchical clustering algorithm so that every clustered data item is transmitted with high security. The outcomes examined under five scenarios show that data security, as represented by the simulation results, improves to 88% compared with the existing approach.
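Hierarchical clustering of the kind invoked here can be sketched as naive single-linkage agglomeration, shown on toy 1-D data; the paper's actual parametric relationships and security integration are not reproduced.

```python
def single_linkage(points, k):
    """Naive agglomerative (single-linkage) clustering of 1-D points
    down to k clusters; returns clusters as sorted lists."""
    clusters = [[p] for p in sorted(points)]
    while len(clusters) > k:
        # Find the adjacent pair of clusters with the smallest gap.
        best = None
        for i in range(len(clusters) - 1):
            gap = clusters[i + 1][0] - clusters[i][-1]
            if best is None or gap < best[0]:
                best = (gap, i)
        i = best[1]
        clusters[i:i + 2] = [clusters[i] + clusters[i + 1]]  # merge
    return clusters

groups = single_linkage([1.0, 1.2, 5.0, 5.1, 9.7], 3)
```

For sorted 1-D data, the closest pair of clusters under single linkage is always adjacent, which keeps the sketch short; higher-dimensional data would need a full pairwise-distance search.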
LSTM-Based RNN Framework to Remove Motion Artifacts in Dynamic Multicontrast MR Images with Registration Model
Today, many people under the age of 10 are being examined for brain-related issues, including tumours, without displaying any symptoms. It is not unusual for children to develop brain-related concerns such as tumours and central nervous system disorders, which may affect 15% of the population. Medical experts believe that irregular eating habits (junk food) and the consumption of pesticide-tainted fruits and vegetables are to blame. The human body is naturally resistant to harmful agents, but only up to a point; if that limit is exceeded, a cell manipulation process is automatically initiated that can remove dangerous inactive tissues from the cell membrane, which later grow into a tumour blockage in the human body. The adoption of an advanced computer-based diagnostic system is therefore highly recommended in order to generate visually enhanced images for anomaly identification and infectious tissue segmentation. In most cases, an MR image is chosen, since it makes it easier to distinguish between affected and non-affected tissue. Conventional convolutional neural network (CCNN) mapping and feature extraction are difficult because of the vast volume of data. In addition, the MRI scanning process takes a long time to capture the diverse positions needed for anomaly identification; aside from the discomfort, the patient's movement may introduce motion artefacts. A recurrent neural network (RNN) classifies tumour regions into several isolated portions much faster and more accurately, so such artefacts can be handled. To remove motion artefacts from dynamic multicontrast MR images, a novel long short-term memory- (LSTM-) based RNN framework is introduced in this research. With this method, the MR image's visual quality is improved over CCNN, while a larger volume is mapped and more subtle characteristics are extracted than CCNN can manage. Results are compared against DC-CNN, SMSR-CNN, FMSI-CNN, and DRCA-CNN. For both low and high signal-to-noise ratios (SNRs), the suggested LSTM-based RNN framework achieves reasonable feature intelligibility; compared with previous approaches, it requires less computation and detects infected portions with higher accuracy.
An enhanced optimization based algorithm for intrusion detection in SCADA network
Supervisory Control and Data Acquisition (SCADA) systems are widely used in many applications, including power transmission and distribution, for situational awareness and control. Identifying and detecting intrusions in a SCADA network is a critical and demanding task today. Various Intrusion Detection Systems (IDSs) have been developed for this purpose in existing works, but they have drawbacks: high false positive and false negative rates, inability to inspect encrypted data, and support for detecting external intrusions only. To overcome these issues, Intrusion Weighted Particle based Cuckoo Search Optimization (IWP-CSO) and Hierarchical Neuron Architecture based Neural Network (HNA-NN) techniques are proposed in this paper. The main intention of this paper is to detect and classify the intrusions in a SCADA network based on optimization. First, the input network dataset is taken as input, the attributes are arranged, and the clusters are initialized. Then, the features are optimized to select the best attributes using the proposed IWP-CSO algorithm. Finally, the intrusions in the network are classified by the proposed HNA-NN algorithm. The experimental results evaluate the performance of the proposed system in terms of sensitivity, specificity, precision, recall, accuracy, Jaccard, Dice, and false detection rate.
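Cuckoo Search can be sketched as below, with Gaussian steps standing in for Lévy flights and a fraction of the worst nests abandoned each generation. This is a generic Cuckoo Search sketch under those simplifying assumptions, not the proposed IWP-CSO variant.

```python
import random

def cuckoo_search(f, dim=2, nests=15, iters=200, pa=0.25, seed=7):
    """Simplified Cuckoo Search: Gaussian steps stand in for Levy flights,
    and a fraction pa of the worst nests is abandoned each generation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(nests)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        i = rng.randrange(nests)
        cand = [x + rng.gauss(0, 0.5) for x in pop[i]]
        j = rng.randrange(nests)
        if f(cand) < fit[j]:           # replace a random nest if better
            pop[j], fit[j] = cand, f(cand)
        # Abandon the worst pa fraction and rebuild them randomly.
        order = sorted(range(nests), key=lambda n: fit[n], reverse=True)
        for n in order[:int(pa * nests)]:
            pop[n] = [rng.uniform(-5, 5) for _ in range(dim)]
            fit[n] = f(pop[n])
    best = min(range(nests), key=lambda n: fit[n])
    return pop[best], fit[best]

sphere = lambda v: sum(x * x for x in v)
sol, best_fit = cuckoo_search(sphere)
```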
Optimal power generation of proton exchange membrane fuel cell using ANFIS based MPPT algorithm
Fuel cells are among the most promising energy sources for future energy demand, and the automobile industry is looking at integrating fuel cells with electric vehicles (EVs). This integration brings many challenges, such as dynamic operational behaviour. To operate the fuel cell at maximum efficiency, this work proposes an Adaptive Neuro-Fuzzy Inference System (ANFIS) based Maximum Power Point Tracking (MPPT) method. The hydrogen flow rate, pressure, and stack temperature are the parameters considered in tracking the maximum power point of the fuel cell. The ANFIS-MPPT algorithm has been integrated with a 1.26 kW fuel cell in MATLAB/Simulink® and validated in different scenarios, including dynamic variation in hydrogen pressure, stack temperature, and load. Its performance has been observed and compared with the conventional MPPT algorithms, the Perturb and Observe (P&O) algorithm and the Incremental Conductance (InC) algorithm. The proposed ANFIS-MPPT algorithm improves power stability by 10-15% over the P&O and InC methods. It also responds 30% faster than the P&O algorithm and 23% faster than the InC algorithm: the ANFIS, P&O, and InC methods have response times of 2.5 s, 3.6 s, and 4.5 s, respectively. The ANFIS method also delivers the maximum power output of 1.26 kW, whereas the P&O and InC methods deliver 1.13 kW and 1.19 kW, respectively. The detailed simulation analysis and results are presented in this paper.
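The Perturb and Observe baseline mentioned above is simple to sketch: keep stepping the operating point in the direction that last increased power, and reverse when power drops. The power curve and step size below are toy assumptions, not fuel-cell physics.

```python
def perturb_and_observe(power, v0=10.0, step=0.2, iters=100):
    """Classic P&O MPPT: keep perturbing the operating voltage in the
    direction that last increased output power."""
    v, direction = v0, 1
    p_prev = power(v)
    for _ in range(iters):
        v += direction * step
        p = power(v)
        if p < p_prev:            # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Hypothetical concave power curve with its maximum at 18 V.
curve = lambda v: 100 - (v - 18.0) ** 2
v_mpp = perturb_and_observe(curve)
```

The fixed step size also shows P&O's characteristic weakness: the operating point oscillates around the maximum instead of settling, which is one motivation for smarter trackers such as ANFIS.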
Detecting impersonators in examination halls using AI
Detecting impersonators in examination halls is vital for conducting examinations fairly. There have recently been cases of impersonators taking tests in place of the intended candidate. Overcoming this issue requires an efficient method with little manpower, and advances in machine learning and AI make this possible. In this project, we develop an AI system in which images of students are saved and a model is built using transfer learning to obtain accurate results. If a student is authorised, the system shows the student's hall ticket number and name; otherwise, an "unknown" tag appears.
Malware Detection Classification using Recurrent Neural Network
Nowadays, the increasing number of malicious programs is becoming a serious problem, which increases the need for automated detection and categorization of potential threats. These attacks often use undetected malware that is not recognized by the security vendor, making it difficult to protect endpoints from viruses. Existing methods have been proposed to detect malware; however, as malware variants evolve, these methods can misdiagnose and are difficult to apply accurately. To address this problem, this work introduces a Recurrent Neural Network (RNN) to identify a sample as malware or benign based on features extracted with an Information Gain Absolute Feature Selection (IGAFS) technique. First, a malware detection dataset is collected from the Kaggle repository. The proposed method then pre-processes the dataset, removing null and noisy values. Next, the proposed IGAFS technique selects the features most relevant to malware from the pre-processed dataset. The selected features are used to train the RNN, which classifies samples as malware or not with better accuracy and a lower false rate. The experimental results show greater performance compared with previous methods.
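Information-gain-based feature selection of the general kind IGAFS builds on can be sketched as follows. The features and labels are invented: one feature is perfectly predictive, the other uninformative.

```python
from math import log2

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in counts.values())

def information_gain(feature, labels):
    """IG(feature) = H(labels) - sum_v p(v) * H(labels | feature = v)."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(feature):
        subset = [y for x, y in zip(feature, labels) if x == v]
        gain -= len(subset) / n * entropy(subset)
    return gain

labels    = ["malware", "malware", "benign", "benign"]
packed    = [1, 1, 0, 0]   # hypothetical perfectly predictive feature
file_size = [1, 0, 1, 0]   # hypothetical uninformative feature
```

Ranking features by this gain and keeping the top-scoring ones is the usual selection step before training a classifier.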
Content and Popularity-Based Music Recommendation System
The future of many modern technologies includes machine learning and deep learning methodologies, and one prominent application of these technologies is the recommender system. Due to the rapid growth of songs in digital formats, searching for and managing songs has become a great problem. In this study, the authors developed a recommender system using the popularity and rhythm content of songs, and compared various techniques to improve the robustness and minimise the error of the system. The authors focus mostly on content-based, popularity-based, and collaborative filtering algorithms, and also combine them in a hybrid approach. MAE was used to compare the several procedures implemented here for recommendation. Of all the procedures used, SVD performed well with an MAE of 1.60, while KNN did not perform as well, with a mean absolute error of 2.212, as the authors had fewer song features. The user-based and item-based prototypes performed best, with MAEs of 0.931 and 0.629.
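Popularity-based recommendation, the simplest of the filtering families compared above, can be sketched as ranking songs by total play count across users. The listening histories below are hypothetical.

```python
def popularity_recommend(play_counts, n=2):
    """Rank songs by total play count across all users; return the top n."""
    totals = {}
    for user_history in play_counts.values():
        for song, plays in user_history.items():
            totals[song] = totals.get(song, 0) + plays
    return sorted(totals, key=totals.get, reverse=True)[:n]

# Hypothetical listening histories.
history = {"u1": {"songA": 5, "songB": 1},
           "u2": {"songA": 2, "songC": 4},
           "u3": {"songB": 3, "songC": 1}}
top = popularity_recommend(history)
```

The same list is served to every user, which is why popularity models are usually blended with content-based or collaborative signals in a hybrid system.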
CNN based Prediction Analysis for Web Phishing Prevention
Phishing has grown into one of the most prevalent and effective cyber threats, triggering millions of data breaches and security failures every year. This paper proposes a CNN-based prediction analysis using optimistic multi-centric feature extraction for phishing attack detection, a technique that uses only URL features to quickly and accurately locate phishing websites and explore structured databases. Anti-phishing technology requires experts to extract the characteristics of phishing URL sites and apply behavioural principles, identified here through a URL Behavioural Rectifier (U-BR). Feature dependability is then used to create a URL phishing probability index (U-PFI) that assigns each feature a relative weight for detecting phishing sites. Given the feature weights, the features are processed with Optimized Multi-Centric Feature Selection (OMCFS) to reduce dimensionality, and the selected features are trained through a Convolutional Neural Network (CNN). This method predicts the legitimacy of URLs without access to the web content, relying on domain-related services to find phishing. The proposed technique converts URLs into standard-size scales using text embedding techniques, separates features at different levels using the CNN model, and classifies URLs by risk category.
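Lexical URL feature extraction of the general kind described can be sketched as below; the exact feature set is an illustrative assumption, not the paper's U-BR/OMCFS pipeline.

```python
def url_features(url):
    """Lexical URL features of the kind used for phishing detection;
    this particular feature set is illustrative only."""
    return {
        "length": len(url),
        "num_dots": url.count("."),
        "has_at": "@" in url,
        # Raw-IP hosts are a classic phishing indicator.
        "has_ip": url.split("//")[-1].split("/")[0].replace(".", "").isdigit(),
        "num_hyphens": url.count("-"),
        "uses_https": url.startswith("https://"),
    }

feats = url_features("http://192.168.0.1/secure-login")
```

Feature vectors like this one are what a downstream selector and classifier would consume, with no need to fetch the page itself.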
LSTM based decision support system for swing trading in stock market
Due to the highly volatile and fluctuating nature of the Indian stock market, which is influenced by a number of factors including government policies, the release of companies' financial reports, investor sentiment, and the geopolitical situation, predicting the stock market has been a daunting task for traders. In this study, a Long Short-Term Memory enforced Decision Support System is developed for swing traders to accurately analyse and predict future stock values. The decision support system generates a report incorporating the predicted values of the company's stock for the next 30 days and other technical indicators: the MFI, the RSI, the support and resistance of the stock price, five Fibonacci retracement levels, and the MACD and signal-line analysis of the company and the NIFTY industry-average stock price. The trader can use the investment success score calculated in the report to augment investment decisions. The results achieved by the proposed model in terms of Root Mean Square Error, Mean Absolute Error, and Mean Absolute Percentage Error are 4.13, 3.24, and 1.21% respectively, which establishes the efficacy of the proposed technique compared with state-of-the-art techniques.
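The five Fibonacci retracement levels included in the report can be computed directly from a swing high and swing low; the prices below are hypothetical.

```python
def fibonacci_retracements(high, low):
    """The five standard retracement levels between a swing high and low."""
    ratios = [0.236, 0.382, 0.5, 0.618, 0.786]
    return {r: high - (high - low) * r for r in ratios}

# Hypothetical swing: stock rallied from 100 to 200.
levels = fibonacci_retracements(high=200.0, low=100.0)
```

Swing traders typically watch these levels as candidate support zones during a pullback.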
Enhanced SCADA IDS Security by Using MSOM Hybrid Unsupervised Algorithm
Self-Organizing Maps (SOMs) are unsupervised neural networks that cluster high-dimensional data and transform complex inputs into an easily understandable form. To find the closest distance and weight factor, a SOM maps a high-dimensional input space onto a low-dimensional one. The node closest to a data point is denoted as its neuron, and the input data are classified based on these neurons. The reduction of dimensionality and grid clustering using neurons makes it possible to observe similarities between the data. Our proposed Mutated Self-Organizing Maps (MSOM) approach has two aims: first, to eliminate the learning rate and decrease the neighborhood size, and second, to find the outliers in the network. The first is achieved by calculating the median distance (MD) between each node and its neighbor nodes; these median values are then compared with one another, and if any MD value varies significantly from the rest, the corresponding node is declared an anomaly node. In the second phase, we find the quantization error (QE) of each instance from its cluster center.
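The median-distance test in the first phase can be sketched as follows. The deviation rule (flag a node whose MD exceeds a multiple of the overall median) is an assumed reading of "significantly varies", not the paper's exact criterion:

```python
from statistics import median

def anomaly_nodes(positions, neighbours, threshold=2.0):
    """Flag nodes whose median distance to their neighbours deviates
    strongly from the overall median of those medians.

    positions:  node id -> coordinate tuple
    neighbours: node id -> list of neighbouring node ids
    threshold:  assumed deviation multiplier (hypothetical)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    # Median distance (MD) from each node to its neighbours.
    md = {n: median(dist(positions[n], positions[m]) for m in nbrs)
          for n, nbrs in neighbours.items()}
    overall = median(md.values())
    return [n for n, v in md.items() if v > threshold * overall]
```

On a one-dimensional chain of nodes at 0, 1, 2, and 10, only the last node's MD stands out, so it alone is flagged as an anomaly node.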
Improved Handover Authentication in Fifth-Generation Communication Networks Using Fuzzy Evolutionary Optimisation with Nanocore Elements in Mobile Healthcare Applications
Authentication is a suitable means of protecting a network from different types of attacks, especially in fifth-generation telecommunication networks and, in particular, in healthcare applications. The handover and authentication mechanism is one such means of mitigating attacks on health-related services. In this paper, we present a fuzzy evolutionary model for handover and key management that improves the performance of authentication in nanocore technology-based 5G networks. The model is designed to minimize delay and complexity while authenticating devices in 5G networks. Attacks are mitigated by the evolutionary model once it has been trained with the relevant attack datasets, and the model is validated for attack mitigation. A simulation is conducted to test the efficacy of the model, and the results show that the proposed method is effective in improving handover and authentication and in mitigating various types of attacks in mobile health applications.
Three‐phase service level agreements and trust management model for monitoring and managing the services by trusted cloud broker
Cloud computing is an environment where everything is provided as a service on demand. It follows a pay-per-use model in which the service consumer pays only for what they have consumed. Due to the increased dependence on digitalization, the numbers of consumers and providers tend to grow tremendously. A consumer who needs a service from a provider cannot be sure of the specified service outcome, and it is hard for them to monitor and manage the service themselves. Hence, a trusted third party called a trusted cloud broker (TCB) is introduced to manage the services. A service level agreement (SLA) management and reputation estimation framework is proposed, comprising three phases: (i) SLA establishment between the three parties, (ii) violation detection by comparing the agreed terms with the values observed by the TCB, and (iii) reputation and penalty estimation for the service. The novel TCB is created to monitor the deployed services, ensuring that the SLA is met. The TCB records the observed values and estimates a reputation value for each service; compared with the provider log-based reputation value, the proposed model is found to provide a more precise reputation value for service providers.
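The broker's violation-detection and reputation phases can be sketched as below. The SLA metrics, comparison directions, and linear penalty rule are illustrative assumptions, not the paper's exact estimation model:

```python
# Sketch of the TCB's violation check and reputation update (assumed model).
def detect_violations(sla: dict, observed: dict) -> list:
    """sla maps metric -> (limit, direction); direction "min" means the
    observed value must stay at or above the limit, "max" at or below."""
    violated = []
    for metric, (limit, direction) in sla.items():
        value = observed[metric]
        if (direction == "min" and value < limit) or \
           (direction == "max" and value > limit):
            violated.append(metric)
    return violated

def reputation(sla: dict, observed_history: list, penalty: float = 0.2) -> float:
    """Start from a perfect score and subtract a fixed penalty per
    violation found in each monitoring interval, floored at zero."""
    score = 1.0
    for observed in observed_history:
        score -= penalty * len(detect_violations(sla, observed))
    return max(score, 0.0)
```

A provider that meets an uptime/latency SLA in one interval and breaches both terms in the next would end with a reputation of 0.6 under this toy penalty of 0.2 per violation.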
Current teaching
Teaching Activities (2)


