Leeds Beckett University - City Campus,
Woodhouse Lane,
LS1 3HE
Dr Farrukh Saleem
Senior Lecturer
Farrukh has 20 years of experience in education, mainly in teaching, curriculum building, and publishing research articles in the computing field. He holds a Ph.D. in computer science from the University of Technology Malaysia, Malaysia.
About
Farrukh is currently a Senior Lecturer in the School of Built Environment, Engineering, and Computing at Leeds Beckett University, Leeds, UK. Previously, he worked as an Assistant Professor in the Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia. He received his Ph.D. in computer science from the University of Technology Malaysia.
Farrukh has taught and led a wide range of undergraduate and postgraduate modules in face-to-face and online learning modes, including machine learning, data mining, data science, software engineering, artificial intelligence, decision support systems, and emerging business technologies.
His research areas include machine learning, data science, artificial intelligence, decision support systems, business intelligence, and enterprise architecture. He has supervised undergraduate, postgraduate, and executive MS research projects and theses on a wide range of topics in computer science and machine learning. He has published 46+ international articles in journals and conferences and participated in 10 funded research projects.
In his professional career, Farrukh has also held several administrative posts in higher education, including Director of Academic Affairs, Head of the Academic Advising Unit, and Program Coordinator. In addition, he has served as a member of the Focused Curriculum Committee, the Graduate and Scientific Research Committee, and the Accreditation Committee.
Research interests
His research areas include:
- Machine Learning
- Data Science
- Artificial Intelligence
- Software Engineering
- Decision Support Systems
- Business Intelligence
- Enterprise Architecture
Publications (44)
Assessing the effects of information and communication technologies on organizational development: business values perspectives
Information and communication technology (ICT) projects for organizational development deal with market challenges, information handling, and the integration of multiple information systems (IS) in an organization. This research investigates how ICT projects (IS systems, etc.) affect strategic, social, and human development in an organization. Previous studies have highlighted the advantages of ICT portfolio management techniques and return-on-investment approaches, whereas the current research focuses primarily on measuring business value from an investment perspective. Therefore, based on the findings of the literature review, an integrated framework was proposed and validated using a case study in Saudi Arabia to evaluate the effects of ICT/IS projects from a managerial perspective. The framework consists of a list of processes, criteria, and sub-criteria for the different kinds of extracted features used to measure the impact of ICT/IS projects. Our findings demonstrate that the effects of ICT projects are not limited to social and economic development but also extend to strategic, managerial, informational, operational, transactional, organizational, infrastructure, and transformational development. It is hoped that these findings can inform ICT decision makers, experts, and researchers working in this area.
Context based Adoption of Ranking and Indexing Measures for Cricket Team Ranks
The International Cricket Council (ICC) is the governing body that ranks all cricket-playing nations. The ranking system followed by the ICC relies on teams' wins and defeats, but the model it uses is deficient in certain key respects: it ignores factors such as winning margin and the strength of the opposition. This research presents various ranking measures that address these gaps. The proposed methods adapt the concepts of the h-index and PageRank to produce more comprehensive ranking metrics, ranking teams not only on their win/loss statistics but also on the margin of victory and the quality of the opposition. Three cricket team ranking techniques are presented: (1) Cricket Team-Index (ct-index), (2) Cricket Team Rank (CTR), and (3) Weighted Cricket Team Rank (WCTR). The proposed metrics are validated on a cricket dataset extracted from Cricinfo containing instances of all three formats of the game: T20 International (T20i), One Day International (ODI), and Test matches. A comparative analysis between the proposed and existing techniques is presented for all three formats.
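As a rough illustration of the ct-index idea, the h-index concept can be carried over from citations to match wins: instead of papers with at least k citations, count wins with at least a certain margin. The sketch below is an assumption-laden toy, not the paper's exact ct-index definition, and the margins are invented.

```python
# Illustrative h-index-style team metric in the spirit of the ct-index:
# a team scores k if it has at least k wins, each with a (normalized)
# winning margin of at least k. The paper's exact definition may differ.

def team_index(win_margins):
    """Largest k such that the team has k wins with margin >= k."""
    k = 0
    for i, margin in enumerate(sorted(win_margins, reverse=True), start=1):
        if margin >= i:
            k = i
        else:
            break
    return k

print(team_index([90, 45, 30, 12, 3, 2]))  # -> 4
```

Like the h-index, this rewards teams with many convincing wins rather than a few lopsided ones.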
Bloom's taxonomy: A beneficial tool for learning and assessing students’ competency levels in computer programming using empirical analysis
Previous research on computer programming indicates that most computer science students, especially novices, lack programming competencies. The reasons given for this inadequacy are that most students lack background knowledge and prior programming experience, and face a new environment of writing programs in a syntax-specific language. As a result, the failure rate is high every year. Several researchers have used learning taxonomies; among these, Bloom's taxonomy has been widely used for the assessment and learning of programming, typically as a scale for preparing assessment questions against which competency levels are then quantified. In contrast, this study proposes a novel approach to programming assessment in which a student's achieved competency level is mapped to the respective cognitive levels of Bloom's taxonomy directly from the written code, with no prior mapping of questions. The computation of the competency level, in terms of mapping to the respective cognitive level, is based on principal criteria extracted from theories used in previous studies. Furthermore, this study emphasizes the basic topics of the structured programming course: selection, repetition, and modularity. Data were collected from 213 students using an empirical test and further analyzed through Structural Equation Modeling. The results show that Bloom's taxonomy is a beneficial tool for learning and assessing programming.
Enterprise Architecture and Organizational Benefits: A Case Study
Enterprise architecture (EA) is a framework consisting of multiple processes that align business strategies with information technology (IT) architecture. It helps an organization standardize business operations and incorporate systems across different layers to achieve business goals and organizational benefits. This study focuses on identifying the organizational benefits that can be achieved through EA implementation. The study comprises three main phases: (i) benefits realization (from the literature review), (ii) benefits reconfirmation (from EA experts), and (iii) benefits validation (through a case study). Specifically, the benefits considered in this study relate to EA products, services, and strategies, namely: (i) business agility, (ii) creating competitive advantage, and (iii) increasing value. The study covers a vast literature review to establish the current status of EA and its organizational benefits, and incorporates a number of measuring factors for each EA benefit drawn from that review. The initial findings were reconfirmed and modified based on experts' opinions collected through interview sessions, which were analyzed using grounded theory and a qualitative approach. Accordingly, drawing on the experts' advice, we propose a model showing the steps and guidelines for assessing EA organizational benefits using corresponding measuring factors and sub-criteria. Finally, the proposed model was validated through an in-depth case study to obtain final confirmation and to see whether the model fits reality. Overall, this research highlights the potential benefits an organization can achieve from implementing an EA framework. The proposed model can assist EA decision-makers in understanding and realizing EA benefits and their assessment process.
Impact of COVID-19 on the Educational Process in Saudi Arabia: A Technology–Organization–Environment Framework
The lockdown of universities and educational institutions during the COVID-19 pandemic negatively impacted the educational process. Saudi Arabia became a forerunner during COVID-19 by taking early precautions of curfews and total restrictions. However, these restrictions had a disruptive effect on various sectors, particularly education, and the Ministry of Education strived to cope with the consequences swiftly by shifting to online education. This paper studies the impact of COVID-19 on the educational process through a comparative analysis of responses collected from different cases, along with the challenges faced throughout the educational process. The study conducted a cross-sectional, self-administered online questionnaire during the outbreak and the period of distance learning, designed for students based on the Technology–Organization–Environment (TOE) framework; most questions used a five-point Likert scale. Responses were randomly collected from 150 undergraduate and postgraduate students studying at Saudi Arabian universities to assess the overall performance of educational institutions during COVID-19. The collected data were analyzed and compared with results in the literature, and the main factors impacted by this transformation are addressed. These factors are based on research and observations and aim to overcome the encountered limitations and to show their level of impact on distance education. The research framework can be useful for higher educational authorities aiming to overcome the issues highlighted and discussed in this study.
Building framework for ICT investments evaluation: Value on investment perspective
Innovation in the field of information and communication technology (ICT) requires organizations to make prompt investments to provide customers with updated resources. Rapid growth in ICT increases the responsibility of decision-makers in deciding whether or not to invest. This research aims to support such decision making by providing a comprehensive framework for the evaluation of ICT investment. Return on investment (ROI) is the common method used to assess the benefits generated from any type of investment using financial factors; however, this research primarily argues for the importance of measuring value on investment (VOI) from ICT investments. This type of evaluation provides more comprehensive results based on the influence investments have on stakeholders. The literature reviewed in this article discusses the limitations of ROI for evaluating ICT investment and presents several practitioners' points of view regarding its limitations and importance. Finally, this research proposes a five-phase evaluation strategy that follows an investment from the point of investment until impact is made on the stakeholder. A stepwise description of the framework is presented in the methodology section to guide decision-makers in implementing this technique to protect their assets and evaluate investments comprehensively.
Developing a Holistic Model for Assessing the ICT Impact on Organizations: A Managerial Perspective
Organizations are increasingly dependent on Information and Communication Technology (ICT) resources. The main purpose of this research is to help organizations maintain the quality of their ICT projects based on the evaluation criteria presented here. The paper followed several steps. Firstly, an experimental investigation was conducted to explore the value-assessment criteria an organization may realize from ICT projects such as information systems, enterprise systems, and IT infrastructure. Secondly, the investigation was grounded in empirical data collected and analyzed from respondents across six case studies, using a questionnaire based on the findings of the literature review. Finally, the paper proposes a holistic model for assessing the business value of ICT from a managerial point of view, based on the measured factors. The study contributes to this field both practically and theoretically, as the literature has not previously shown a holistic approach that uses eight distinct dimensions for assessing ICT impact on business values; it combines previous research in a way that extends the dimensions for measuring ICT business value. The model is significant for managers and ICT decision makers seeking to align business strategies with ICT strategies. The findings suggest that ICT positively supports business processes and several other business-value dimensions. The proposed holistic model and identified factors can be useful for managers in measuring the impact of emerging ICT on business and organizational values.
Web Observatory Insights: Past, Current, and Future
In the present era of Big Data, with continuously increasing amounts of user-generated content, it is becoming a challenge to understand the relation between the content available on the Web and the users who generate it. Researchers have devised many ways to better understand today's Web; one recently introduced concept is the Web observatory (WO). This article provides a deep understanding of Web observatories: it discusses the status of existing WO systems, gathers their common practices, and provides a comparative analysis of existing WOs. It also discusses WO architecture, presents the components of a WO in a coherent manner, and highlights challenges and limitations such as data crawling, privacy, and security, before offering future research and development directions. This research has implications for researchers and communities in the adoption of the WO concept.
Impact Assessment of COVID-19 Pandemic Through Machine Learning Models
Ever since its outbreak in the Wuhan city of China, the COVID-19 pandemic has engulfed more than 211 countries, leaving a trail of unprecedented fatalities. Even more debilitating than the infection itself were the restrictions, such as lockdowns and quarantine measures, taken to contain the spread of the coronavirus. Such enforced alienation significantly affected both the mental and social condition of people. Social interactions and congregations are not only an integral part of work life but also form the basis of human development, yet COVID-19 brought all such communication to a grinding halt, and digital interactions failed to match the engagement of face-to-face meetings. The main aim of the proposed study is to assess the impact of the pandemic on different aspects of society in Saudi Arabia. To achieve this objective, the study analyzes two perspectives, the early approach and the late approach to COVID-19, and the consequent effects on different aspects of society. We used a machine learning-based framework to predict the impact of COVID-19 on key aspects of society. The findings indicate that financial resources were the worst affected: several countries are facing economic upheavals due to the pandemic, and COVID-19 has had a considerable impact on both the lives and livelihoods of people. Yet the damage is not irretrievable, and the world's societies can emerge from this setback through concerted efforts in all facets of life.
Reliable Prediction Models Based on Enriched Data for Identifying the Mode of Childbirth by Using Machine Learning Methods: Development Study
Background: The use of artificial intelligence has revolutionized every area of life such as business and trade, social and electronic media, education and learning, manufacturing industries, medicine and sciences, and every other sector. The new reforms and advanced technologies of artificial intelligence have enabled data analysts to transmute raw data generated by these sectors into meaningful insights for an effective decision-making process. Health care is one of the integral sectors where a large amount of data is generated daily, and making effective decisions based on these data is therefore a challenge. In this study, cases related to childbirth either by the traditional method of vaginal delivery or cesarean delivery were investigated. Cesarean delivery is performed to save both the mother and the fetus when complications related to vaginal birth arise. Objective: The aim of this study was to develop reliable prediction models for a maternity care decision support system to predict the mode of delivery before childbirth. Methods: This study was conducted in 2 parts for identifying the mode of childbirth: first, the existing data set was enriched and second, previous medical records about the mode of delivery were investigated using machine learning algorithms and by extracting meaningful insights from unseen cases. Several prediction models were trained to achieve this objective, such as decision tree, random forest, AdaBoostM1, bagging, and k-nearest neighbor, based on original and enriched data sets. Results: The prediction models based on enriched data performed well in terms of accuracy, sensitivity, specificity, F-measure, and receiver operating characteristic curves in the outcomes. Specifically, the accuracy of k-nearest neighbor was 84.38%, that of bagging was 83.75%, that of random forest was 83.13%, that of decision tree was 81.25%, and that of AdaBoostM1 was 80.63%. 
Conclusions: Our study shows that enriching the data set improves the accuracy of the prediction process, thereby supporting maternity care practitioners in making informed decisions in critical cases. The enriched data set used in this study yields good results, but it could become even better if the records were extended with real clinical data.
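To illustrate the k-nearest-neighbor step named in the study, the sketch below implements k-NN from scratch on invented records. The feature pair (maternal age, fetal weight) and the labels are hypothetical placeholders, not the study's actual variables or data.

```python
# Toy k-NN classifier predicting delivery mode from two made-up features.
from collections import Counter
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train, labels, query, k=3):
    """Majority vote among the k training records nearest to the query."""
    ranked = sorted(range(len(train)), key=lambda i: euclidean(train[i], query))
    votes = Counter(labels[i] for i in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical records: (maternal age, fetal weight in kg) -> delivery mode
X = [(24, 2.9), (31, 3.8), (27, 3.1), (38, 4.1), (22, 2.7), (35, 3.9)]
y = ["vaginal", "cesarean", "vaginal", "cesarean", "vaginal", "cesarean"]
print(knn_predict(X, y, (36, 4.0), k=3))  # -> cesarean
```

In the study, k-NN (alongside decision tree, random forest, AdaBoostM1, and bagging) was trained on the enriched data set rather than a handful of toy records.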
Requirements Elicitation Techniques in Mobile Applications
The common view of requirements engineering comprises requirements elicitation, specification, validation, and evolution. Requirements elicitation is a significant stage that assures the quality of the requirements documentation and can affect project success or failure. There are different techniques for eliciting requirements, each with its advantages and disadvantages; it is therefore important to adopt more than one technique to describe a system clearly from different viewpoints. This study surveyed research papers from a variety of research disciplines to present the requirements elicitation techniques used in developing mobile applications. It suggests some requirements elicitation concepts to guide software engineers in selecting techniques according to customers' needs and identifies the common challenges they face.
Intelligent Decision Support System for Predicting Student’s E-Learning Performance Using Ensemble Machine Learning
Electronic learning management systems provide live environments for students and faculty members to connect with their institutional online portals and perform educational activities virtually. Although modern technologies proactively support these online sessions, students' active participation remains a challenge, as discussed in previous research. Additionally, one concern for both parents and teachers is how to accurately measure student performance using the different attributes collected during online sessions. The idea undertaken in this study is therefore to understand and predict student performance based on features extracted from electronic learning management systems. The dataset chosen belongs to a learning management system and provides a number of features for predicting student performance. The integrated machine learning model proposed in this research can be used to make proactive and intelligent decisions based on student performance evaluated through the electronic system's data. The proposed model consists of five traditional machine learning algorithms, further enhanced by applying four ensemble techniques: bagging, boosting, stacking, and voting. The overall F1 scores of the single models are: DT (0.675), RF (0.777), GBT (0.714), NB (0.654), and KNN (0.664). Model performance showed remarkable improvement using the ensemble approaches: the stacking model combining all five classifiers outperformed the other ensemble methods with the highest F1 score (0.8195). The proposed model can be useful for predicting student performance and helping educators make informed decisions by proactively notifying students.
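Of the four ensemble techniques named above, hard voting is the simplest to illustrate: several base classifiers each predict a label and the ensemble returns the majority. The threshold rules below are invented stand-ins, not the study's trained DT/RF/GBT/NB/KNN models, and the feature names are assumptions.

```python
# Minimal hard-voting sketch over three made-up rule-based classifiers that
# predict pass/fail from hypothetical LMS activity counts.
from collections import Counter

def clf_logins(s):   return "pass" if s["logins"] >= 10 else "fail"
def clf_posts(s):    return "pass" if s["forum_posts"] >= 3 else "fail"
def clf_views(s):    return "pass" if s["resource_views"] >= 20 else "fail"

def voting_predict(student, classifiers):
    """Return the majority label across all base classifiers."""
    votes = Counter(clf(student) for clf in classifiers)
    return votes.most_common(1)[0][0]

student = {"logins": 12, "forum_posts": 1, "resource_views": 25}
print(voting_predict(student, [clf_logins, clf_posts, clf_views]))  # -> pass
```

Stacking, which scored highest in the study, replaces this majority vote with a meta-classifier trained on the base models' outputs.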
A Rule-Based Method for Cognitive Competency Assessment in Computer Programming Using Bloom’s Taxonomy
Assessing students in computer programming is a challenge for instructors, especially at the introductory level, where enrollment numbers are typically high. This study therefore presents a novel approach to assessing students' programming competency using Bloom's taxonomy. The novelty of the approach lies in rules that quantify the attained competencies with respect to the cognitive levels of Bloom's taxonomy. Unlike previous studies, in which cognitive levels were used as a scale for writing the questions while the competency assessment was performed manually, the rule-based assessment method uses an automatic decision-making process to map a student's competency level directly to the corresponding cognitive levels from the written code, without prior mapping of questions to cognitive levels. For this reason, the study focuses on the basic topics of the structured Java programming course (i.e., selection, repetition, and modularity). The rule-based method was applied to students' programming code in an introductory-level Java course. Data were collected through an empirical test yielding valid responses from 213 students, which were processed through the rule-based method for competency assessment. The quantitative results of the rule-based method were validated by comparing them with the results of manual assessment, and several statistical methods were used to identify differences between the two. The outcomes of this comparative analysis demonstrate the reliability of the proposed rule-based assessment method.
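The core idea of a rule-based mapping from code features to cognitive levels can be sketched as a decision cascade. The feature names, thresholds, and level assignments below are purely illustrative assumptions; the paper derives its actual rules from established criteria in the literature.

```python
# Hypothetical rule cascade mapping observed features of a student's program
# to a Bloom's-taxonomy cognitive level. Rules are invented for illustration.
def bloom_level(features):
    """Return the highest cognitive level supported by the observed features."""
    if features.get("defines_functions") and features.get("correct_output"):
        return "Create"      # composes working modular solutions
    if features.get("uses_loops") and features.get("correct_output"):
        return "Apply"       # applies repetition correctly
    if features.get("uses_selection"):
        return "Understand"  # uses selection, correctness not established
    return "Remember"        # recalls syntax only

print(bloom_level({"uses_selection": True, "uses_loops": True,
                   "correct_output": True}))  # -> Apply
```

The automated decision process in the paper works in the same spirit: features extracted from the written code trigger rules that place the student at a cognitive level without any question-to-level pre-mapping.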
During the COVID-19 pandemic, the analysis of patient data has become a cornerstone for developing effective public health strategies. This study leverages a dataset comprising over 10,000 anonymized patient records from various leading medical institutions to predict COVID-19 patient age groups using a suite of statistical and machine learning techniques. Initially, extensive statistical tests including ANOVA and t-tests were utilized to assess relationships among demographic and symptomatic variables. The study then employed machine learning models such as Decision Tree, Naïve Bayes, KNN, Gradient Boosted Trees, Support Vector Machine, and Random Forest, with rigorous data preprocessing to enhance model accuracy. Further improvements were sought through ensemble methods: bagging, boosting, and stacking. Our findings indicate strong associations between key symptoms and patient age groups, with ensemble methods significantly enhancing model accuracy. Specifically, stacking applied with random forest as a meta-learner exhibited the highest accuracy (0.7054). In addition, the implementation of stacking techniques notably improved the performance of K-Nearest Neighbors (from 0.529 to 0.63) and Naïve Bayes (from 0.554 to 0.622), demonstrating the most successful prediction method. The study aimed to understand the number of symptoms identified in COVID-19 patients and their association with different age groups; the results can assist doctors and higher authorities in improving treatment strategies. Additionally, several decision-making techniques can be applied during a pandemic, tailored to specific age groups, such as resource allocation, medicine availability, vaccine development, and treatment strategies. The integration of these predictive models into clinical settings could support real-time public health responses and targeted intervention strategies.
A Unified Decision-Making Technique for Analysing Treatments in Pandemic Context
The COVID-19 pandemic has triggered a global humanitarian disaster on a scale never seen before. Medical experts, however, remain undecided on the most valuable course of treatment, because people ill with this infection exhibit a wide range of illness indications at different phases of infection. This project therefore undertakes an experimental investigation to determine which treatments for COVID-19 are the most effective and preferable. The analysis is based on extensive data gathered from professionals and research journals, making this study a comprehensive reference. To solve this challenging task, the researchers used the HF-AHP-TOPSIS methodology, a well-known and highly effective Multi-Criteria Decision Making (MCDM) technique. The technique assesses the many treatment options identified through various research papers and guidelines proposed by different countries, based on the recommendations of medical practitioners and professionals. The review process begins by ranking the different treatments on their effectiveness using the HF-AHP approach and then evaluates the results in five hospitals chosen by the authors as alternatives. We also perform a robustness analysis to validate the conclusions. The highly corroborative results can be used as a reference: convalescent plasma holds the greatest rank and priority in terms of effectiveness and demand, implying that, in our estimation, convalescent plasma is the most effective treatment for SARS-CoV-2, while Peepli has the lowest priority.
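The TOPSIS half of the methodology ranks alternatives by closeness to an ideal solution. The compact sketch below shows just that step on an invented, already-weighted decision matrix (rows are treatments, columns are benefit criteria); the real criteria weights in the study come from expert judgments via HF-AHP, which is not reproduced here.

```python
# Compact TOPSIS sketch: rank alternatives by relative closeness to the ideal.
import math

def topsis_scores(matrix):
    """Closeness-to-ideal score per row; all criteria assumed benefit-type."""
    cols = list(zip(*matrix))
    norms = [math.sqrt(sum(v * v for v in c)) for c in cols]   # vector norm
    normed = [[v / n for v, n in zip(row, norms)] for row in matrix]
    ideal = [max(c) for c in zip(*normed)]   # best value per criterion
    anti = [min(c) for c in zip(*normed)]    # worst value per criterion
    scores = []
    for row in normed:
        d_best = math.sqrt(sum((v - i) ** 2 for v, i in zip(row, ideal)))
        d_worst = math.sqrt(sum((v - a) ** 2 for v, a in zip(row, anti)))
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Hypothetical scores for three treatments on two weighted benefit criteria
scores = topsis_scores([[9, 8], [6, 7], [3, 4]])
print(max(range(3), key=scores.__getitem__))  # index of top-ranked treatment
```

The study applies this ranking to treatments identified in the literature, with hesitant-fuzzy AHP supplying the criteria weights before the closeness scores are computed.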
Optimal Machine Learning Driven Sentiment Analysis on COVID-19 Twitter Data
The outbreak of the pandemic caused by Coronavirus Disease 2019 (COVID-19) affected the daily activities of people across the globe. During the COVID-19 outbreak and the successive lockdowns, Twitter was heavily used and the number of tweets regarding COVID-19 increased tremendously. Several studies have used Sentiment Analysis (SA) to analyze the emotions expressed in tweets about COVID-19. In the current study, therefore, a new Artificial Bee Colony (ABC) with Machine Learning-driven SA (ABCML-SA) model is developed for conducting sentiment analysis of COVID-19 Twitter data. The prime focus of the presented ABCML-SA model is to recognize the sentiments expressed in tweets about COVID-19. It involves data pre-processing at the initial stage, followed by n-gram-based feature extraction to derive the feature vectors. For the identification and classification of sentiments, the Support Vector Machine (SVM) model is exploited, and finally the ABC algorithm is applied to fine-tune the SVM parameters. To demonstrate the improved performance of the proposed ABCML-SA model, a sequence of simulations was conducted; the comparative assessment confirmed its effective performance over other approaches.
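The n-gram feature-extraction stage mentioned above can be sketched very simply: slide a window of n tokens over the text and count each window. This is a generic illustration of the step, not the paper's exact pipeline, and the example tweet is invented.

```python
# Minimal word n-gram counter of the kind used to build feature vectors
# before SVM classification.
from collections import Counter

def ngrams(text, n=2):
    """Count word n-grams in a lowercased, whitespace-tokenized text."""
    tokens = text.lower().split()
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

feats = ngrams("stay home stay safe")
print(feats[("stay", "home")])  # -> 1
```

In the full model these counts become feature vectors for the SVM, whose hyperparameters the ABC algorithm then tunes.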
Efficient implementation of data mining: Improve customer's behaviour
Evaluating the performance of any organization is essential for overcoming its weaknesses, and customers are a primary means of finding and assessing a company's performance; they are valuable to every organization. In this paper we first examine Customer Relationship Management (CRM), especially customer behaviour and customer profiling, and then give a general overview of the most common data mining techniques. The main purpose of this paper is to show how data mining techniques can extract valuable knowledge from large customer databases and how to analyze customer behaviour to improve business performance. We therefore propose a model for CRM with an efficient implementation of data mining for improving customer behaviour. To this end, we evaluate and analyze customer understanding by applying a rule-induction process to clustered data from the customer database with reference to customer queries. © 2009 IEEE.
Data mining strategies and techniques for CRM systems
As millions of records are stored in databases every day, data mining is used to discover the hidden knowledge, rules, and patterns within them, and is being adopted in every organization to extract information that is not visible to everyone. Organizations are always planning to obtain useful information from their data. Research on customer relationship management (CRM) is becoming an increasingly practical and attractive factor in the growth of every organization, and discovering this hidden gold likewise supports the goals and success of the organization. The main critical success factors for any CRM include marketing management, customer support management, sales management, and facilities management, among others. In this paper we propose, analyze, and validate the claim that data mining is also a major factor in the success of CRM. We first present the CRM model and explain the main role of each feature, then add a data mining feature to the model. Furthermore, we apply data mining strategies and techniques to generate new rules and patterns. We argue that, within the boundaries of CRM strategies, data mining tools also play an effective and valuable role in the establishment and growth of the organization. © 2009 IEEE.
Automatic Personality Recognition (APR) has received much attention in recent years due to its wide range of important applications across various fields. The growing use of online social networks provides valuable opportunities for APR, as a strong correlation has been found between what users post on these platforms and their personality traits. Consequently, various APR models have been developed to infer the Big Five personality traits from social media user-generated texts. However, most of these models rely heavily on hand-crafted features, which are unable to capture deep contextual information and learn complex patterns from texts. More importantly, the performance of text-based APR remains unsatisfactory, especially at the level of each personality dimension. To tackle this issue, we propose a new model, called APR_ConvLSTM, that aims to improve text-based APR performance by integrating two robust deep learning architectures: CNN and Bi-LSTM. Unlike existing APR models, APR_ConvLSTM is a unified end-to-end model in which all personality traits are predicted simultaneously and effectively without the need for laborious feature engineering. We also developed a new labeled Big Five personality dataset, called X-Big5, which has long been needed in the APR field. Extensive experiments on X-Big5 and a publicly available benchmark dataset (PAN-2015 Author Profiling) demonstrate the promising performance of our model over its contenders. Overall, the proposed model achieved the highest Accuracy and F1 score of 79.51% and 86.54% on the PAN-2015 dataset and 87.95% and 81.35%, respectively, on the X-Big5 dataset, as well as the highest average Accuracy and F1 score of 79.01% and 80.56%, respectively, on the combined dataset.
The model reached competitive results in predicting Openness, Extraversion, Agreeableness, and Neuroticism traits with the highest F1 scores of 88.60%, 77.35%, 76.16%, and 74.52%, respectively, on the combined dataset. The proposed model can positively impact the analysis of social media text generated by different users and help identify their personality traits.
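As a rough illustration of the convolutional half of such a CNN + Bi-LSTM text model, the sketch below slides one filter over a toy sequence of token embeddings, applies a ReLU, and max-pools over time. The embeddings, filter weights, and dimensions are all invented for illustration; they are not from the APR_ConvLSTM paper, whose filters are learned.

```python
def conv1d_max_pool(embeddings, kernel, bias=0.0):
    """One CNN filter over a token-embedding sequence: slide the kernel,
    apply ReLU, then max-pool over time (a minimal sketch in plain Python)."""
    width = len(kernel)  # kernel: list of per-position weight vectors
    activations = []
    for start in range(len(embeddings) - width + 1):
        window = embeddings[start:start + width]
        score = bias + sum(w * x
                           for wvec, xvec in zip(kernel, window)
                           for w, x in zip(wvec, xvec))
        activations.append(max(0.0, score))  # ReLU
    return max(activations)                  # max-over-time pooling

# Toy 2-d "embeddings" for a 4-token post; a real model learns these
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]]
kernel = [[1.0, 0.0], [0.0, 1.0]]  # one width-2 filter
feature = conv1d_max_pool(tokens, kernel)
```

In a full model, many such filters produce a feature vector that is passed on to the Bi-LSTM and classification layers.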
The effects of Data Mining in ERP-CRM model - A case study of MADAR
As Enterprise Resource Planning (ERP) implementation has become more popular and suitable for business organizations of every kind, it has become an essential factor in business success. This paper presents an integration of ERP with Customer Relationship Management (CRM). Data mining underpins the integration in this model by supporting the application of the best algorithm to produce a successful result. The model has three major parts: the outer view (CRM), the inner view (ERP), and the knowledge discovery view. CRM collects the customers' queries, ERP analyses and integrates the data, and knowledge discovery produces predictions and advice for the betterment of the organization. For the practical implementation of the presented model, we used MADAR data and applied the Apriori algorithm to it. The new rules and patterns are then suggested to the organization to help it solve customers' problems in future correspondence.
Data Mining for Customer Queries in ERP Model: A Case Study
Customers always play a key role in the establishment of any organization, or in its crises. In this paper, we applied data mining techniques to MADAR data to improve and develop the organization and to make its customers more satisfied and contented. The presented model deliberately keeps customers at the top, emphasising and highlighting their role for every organization. Using their characteristics and surroundings, we clustered the data on the basis of the action taken against each raised question. The clustered data were then fed to the Apriori algorithm, and finally we discovered new rules and patterns from the database for formulating the process in an adequate and satisfactory milieu. For the implementation we used two data mining techniques, clustering and association mining, to obtain the most valuable, informative, and strong results for this organization. This is the way to build the best association with customers and gratify them in future. © 2009 IEEE.
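The association-mining step can be illustrated with a minimal pure-Python Apriori pass over transaction sets. The customer-query transactions below are hypothetical stand-ins, since the MADAR data is not public.

```python
from itertools import combinations

def apriori_frequent_itemsets(transactions, min_support):
    """Return frequent itemsets (frozensets) mapped to their support."""
    n = len(transactions)
    transactions = [set(t) for t in transactions]
    items = {item for t in transactions for item in t}
    current = {frozenset([i]) for i in items}  # candidate 1-itemsets
    frequent = {}
    k = 1
    while current:
        # count support of each candidate itemset
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        survivors = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        frequent.update(survivors)
        # join surviving k-itemsets into (k+1)-itemset candidates
        keys = list(survivors)
        current = {a | b for a, b in combinations(keys, 2) if len(a | b) == k + 1}
        k += 1
    return frequent

# Hypothetical customer-query transactions (illustrative only)
tx = [
    {"billing", "login"},
    {"billing", "login", "refund"},
    {"login", "refund"},
    {"billing", "login"},
]
freq = apriori_frequent_itemsets(tx, min_support=0.5)
```

Here {billing, login} surfaces as a frequent pattern (support 0.75), the kind of rule the paper feeds back to the organization.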
The Learning Management System (LMS) is an essential tool for educational institutions that facilitates content delivery, assessments, lecture delivery, and collaboration to enhance the learning experience. This study explores the role of the LMS in creating an effective learning environment to improve students’ academic performance. To achieve the main objective of this study, we utilized a dataset [xAPI-Edu-Data] comprising multiple factors, such as academic, psychological, and cognitive engagement. Various machine learning techniques are employed to assess the impact of engagement activities on students’ performance. Initially, a class imbalance issue was identified in the dataset and addressed using the SMOTE technique. In addition, other resampling strategies were applied to compare the effectiveness of the proposed work. Model performance was evaluated and compared using different evaluation metrics before and after data enrichment. In addition, hyperparameter optimization was conducted using a grid search approach to enhance the models’ accuracy. The performance of individual models such as the support vector machine (0.81), logistic regression (0.80), and decision tree (0.75) was enhanced using the enriched dataset. The integration of multiple base learners into an ensemble model, with random forest as the stacking learner, achieved a weighted precision of 0.83, improving from 0.60 with the original dataset. The implementation of the stacking approach with the enriched dataset identified a better result and improved accuracy by 23%. The key contributions of this study include identifying the effectiveness of data enrichment in improving prediction accuracy. Moreover, the research highlights the role of student engagement and behavior in measuring academic performance. The proposed model can identify the factors behind low performance, allowing further actions to be taken.
Based on the prediction, educators can work on the associated factors, which could be low engagement, participation, or attendance. The findings further indicate that better use of the LMS by creating more engagement activities can enhance students’ learning.
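The SMOTE step mentioned above can be sketched in a few lines: synthetic minority samples are created by interpolating between a minority point and one of its nearest minority neighbours. This is a simplified sketch of the core idea, not the pipeline used on xAPI-Edu-Data.

```python
import random

def smote_oversample(minority, n_new, k=2, seed=42):
    """Generate synthetic minority samples by interpolating between a point
    and one of its k nearest neighbours (the core idea behind SMOTE)."""
    rng = random.Random(seed)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    synthetic = []
    for _ in range(n_new):
        p = rng.choice(minority)
        # k nearest neighbours of p (excluding p itself)
        neighbours = sorted((q for q in minority if q is not p),
                            key=lambda q: dist(p, q))[:k]
        q = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(x + gap * (y - x) for x, y in zip(p, q)))
    return synthetic

# Toy 2-d minority-class points (illustrative values only)
minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1), (5.0, 5.0)]
new_points = smote_oversample(minority, n_new=3)
```

Because every synthetic point lies on a segment between two real minority points, the oversampled class stays inside its original region of feature space.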
General Characteristics and Common Practices for ICT Projects: Evaluation Perspective
In today’s business world, organizations depend heavily on Information and Communication Technology (ICT) resources. Cloud services, communication services, and software services are the most common resources, on which enterprises spend large amounts. To install new services and upgrade existing ones, ICT projects are an essential part of an organization’s business strategy. Researchers have highlighted that the real problem for organizations is how to initiate new ICT projects and how to evaluate them after implementation. This research investigated the common approaches organizations use to start ICT projects and to evaluate their impact after implementation. For this, we extracted a number of steps with the help of a literature review. To validate those steps, six case studies were selected for collecting the samples. The findings of this study elaborate that every ICT project has a list of objectives, i.e. strategic, informational, IT infrastructure, and others. Furthermore, the results highlight that organizations rely on both financial and non-financial evaluation methods depending on the type of organization, i.e. public or private. Moreover, the measurement process is applied on a per-project, monthly, or yearly basis. Importantly, we found that outsourcing currently plays a significant role in the success of ICT projects. The results of this study can help organizations understand the types of ICT investments, approaches, and their possible impact on organizational goals.
Implementation of data mining approach for building automated decision support systems
The use of decision support systems (DSS) has increased remarkably in today's competitive business environment. Organizations are aggressively emphasising computerized support for building comprehensive automated DSS. The main purpose behind this approach is to build an intelligent business and reduce the pressure from competitors. In this paper, we propose a model for building an efficient DSS connected to a data mining view. The presented model guides business leaders to draw extra support from data mining in order to create an effective DSS and compete in the business world in a more appropriate manner. The model combines DSS components and data mining tasks for the generation of better decisions, results, rules, and patterns from operational databases. © 2012 Infonomics Society.
Implementation of a Data Mining Engine on CRM to improve customer satisfaction
Analysis of customer relationships is becoming a more practical and motivating success factor for the growth of every company; in the same way, the discovery of unseen information also supports successful expansion in an organization. A customer and a company are essential to each other, and their good relationship and understanding will take the company to the top as well as the customer to a satisfactory level. In this paper we present a model of Customer Relationship Management (CRM) to describe the association of a customer with the company, and we enhance the model by connecting it to a Data Mining Engine (DME) for evaluating the queries of customers and employees and for customer understanding in support of CRM. The main aspect of this paper is the DME, which plays a commanding role in carrying a company to the top. Analysing and assessing queries to understand the customer, and acting on the organization's behalf using data mining techniques, are the main characteristics of the DME. © 2009 IEEE.
Detecting High-Risk Factors and Early Diagnosis of Diabetes Using Machine Learning Methods
Diabetes is a chronic disease that can cause several forms of chronic damage to the human body, including heart problems, kidney failure, depression, eye damage, and nerve damage. There are several risk factors involved in causing this disease, with some of the most common being obesity, age, insulin resistance, and hypertension. Therefore, early detection of these risk factors is vital in helping patients reverse diabetes from the early stage to live healthy lives. Machine learning (ML) is a useful tool that can easily detect diabetes from several risk factors and, based on the findings, provide a decision-based model that can help in diagnosing the disease. This study aims to detect the risk factors of diabetes using ML methods and to provide a decision support system for medical practitioners that can help them in diagnosing diabetes. Moreover, besides various other preprocessing steps, this study has used the synthetic minority over-sampling technique integrated with the edited nearest neighbor (SMOTE-ENN) method for balancing the BRFSS dataset. The SMOTE-ENN is a more powerful method than the individual SMOTE method. Several ML methods were applied to the processed BRFSS dataset and built prediction models for detecting the risk factors that can help in diagnosing diabetes patients in the early stage. The prediction models were evaluated using various measures that show the high performance of the models. The experimental results show the reliability of the proposed models, demonstrating that k-nearest neighbor (KNN) outperformed other methods with an accuracy of 98.38%, sensitivity, specificity, and ROC/AUC score of 98%. Moreover, compared with the existing state-of-the-art methods, the results confirm the efficacy of the proposed models in terms of accuracy and other evaluation measures. The use of SMOTE-ENN is more beneficial for balancing the dataset to build more accurate prediction models. 
This is the main reason the proposed models could achieve higher accuracy than the existing ones.
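The best-performing method here, k-nearest neighbour, reduces to a distance-weighted lookup with a majority vote. The sketch below uses toy stand-ins for BRFSS-style risk features (e.g. BMI and age); the values and labels are invented for illustration.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of (features, label) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))  # squared Euclidean
    nearest = sorted(train, key=lambda fl: dist(fl[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy (BMI, age) points with hypothetical risk labels -- not BRFSS data
train = [
    ((30.0, 55), "diabetic"), ((32.0, 60), "diabetic"), ((29.0, 50), "diabetic"),
    ((22.0, 30), "healthy"), ((21.0, 25), "healthy"), ((23.0, 35), "healthy"),
]
pred = knn_predict(train, (31.0, 58), k=3)
```

In practice the features would first be balanced with SMOTE-ENN and scaled, since KNN is sensitive to class imbalance and feature ranges.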
A General Framework for Measuring Information and Communication Technology Investment
Growing concern among decision makers about measuring Information and Communication Technology (ICT) investments has increased the interest in developing and analyzing return on investment (ROI) methodologies. Researchers and professionals have presented several models for assessing the return on ICT investment from financial and non-financial perspectives. Owing to the disparate dimensions of investment and of measuring its return, governments and ICT investors are still puzzled about choosing the most comprehensive evaluation strategy to carry them through all evaluation phases. Conceptually, ROI can be divided into two types of measured return: financial and non-financial value. In this paper, we aim to develop a framework for building a return methodology that can deal with both. We therefore investigate the relationship between types of ICT investment and the type of value they return. Stakeholder analysis and value-measuring variables strengthen our proposed model and increase its feasibility. Overall, the methodology has six stages for evaluating an ICT investment, which depend on its investment type, evaluation type, value-measuring variables, and stakeholders.
Data mining course in the Information Systems department: a case study of King Abdulaziz University
Data mining (DM) is an essential course to include in the curriculum of information systems education. This paper highlights the importance of a data mining course in an Information Systems (IS) department. We discuss issues related to data mining textbooks and practical tools, drawing on our previous experience with students of the Information Systems (IS) Department, Faculty of Computing & Information Technology (FCIT), King Abdulaziz University (KAU), Jeddah. The paper focuses on the importance of the DM course, its relation to the IS department, the selection of the textbook, and finally the different DM tools that can be used in the labs. The paper provides comprehensive information on the DM course, textbooks, and tools, and specifically guides DM students in selecting better course material in the form of a textbook and a DM software tool for the practical implementation of all data mining tasks. © 2011 IEEE.
A General Framework for Measuring Information and Communication Technology Investment: Case Study of Kingdom of Saudi Arabia
Growing concern among decision makers about measuring Information and Communication Technology (ICT) investments has increased the interest in developing and analyzing return on investment (ROI) methodologies. Researchers and professionals have presented several models for assessing the return on ICT investment from financial and non-financial perspectives. Owing to the disparate dimensions of investment and of measuring its return, governments and ICT investors are still puzzled about choosing the most comprehensive evaluation strategy to carry them through all evaluation phases. Conceptually, ROI can be divided into two types of measured return: financial and non-financial value. In this paper, we aim to develop a framework for building a return methodology that can deal with both. We therefore investigate the relationship between types of ICT investment and the type of value they return. Stakeholder analysis and value-measuring variables strengthen our proposed model and increase its feasibility. Overall, the methodology has six stages for evaluating an ICT investment, which depend on its investment type, evaluation type, value-measuring variables, and stakeholders. © Springer-Verlag Berlin Heidelberg 2012.
Critical success factors of ERP implementation at higher education institutes: A brief case study
Integrating and replacing legacy systems is a challenging task for medium and large organizations seeking to automate business processes and functions across the enterprise. Many medium and large organizations have initiated this process through successful Enterprise Resource Planning (ERP) system implementations, benefiting from sophisticated technologies and ERP business processes. Higher education institutes have now also turned to adopting new technologies with the best quality of service, and for this reason have moved towards ERP system implementation in their organizational setup to gain all the benefits of new technologies and business processes. The research objectives of this paper are to analyze ERP system implementation in higher education institutes and to identify the critical success factors of ERP system implementation reported in the literature. The paper also discusses critical success factors of ERP drawn from different higher education institutes in the literature. A brief case study of King Saud University is presented to uncover the critical success factors for successful ERP implementation and to strengthen the results of this paper. ©2013 International Information Institute.
Comprehensive study of Information and Communication Technology investments: A case study of Saudi Arabia
The rapid evolution of Information and Communication Technology (ICT) has boosted organizations' enthusiasm for investing more in innovative Information Technology (IT) solutions that are perceived to facilitate their business models. Researchers have proposed several methods to improve understanding of how to calculate the return on ICT investment. Traditionally, measuring the return on investment was based on financial factors only, as in Return on Investment (ROI) methods. Non-financial measuring techniques, such as Value on Investment (VOI), now pose a new challenge for investors seeking to measure the (non-financial) value of an investment. This paper aims to investigate the process of building an integrated framework for the evaluation of ICT/Information Systems (IS) investments inside the Kingdom of Saudi Arabia (KSA), where a tremendous amount is being spent on the development of ICT services. The current status of ICT investment, utilization, and implementation inside KSA is therefore discussed. Furthermore, previously developed methodologies, models, and factors for measuring investments using ROI and VOI are presented. Finally, a brief discussion of the relationship between ICT investment, utilization, and evaluation is provided to add more understanding in this field. The paper represents the need for a comprehensive framework for the evaluation and measurement of ICT investment. ©2013 International Information Institute.
Enterprise application integration as a middleware: Modification in data & process layer
Enterprises currently run many applications simultaneously to accomplish their business processes more appropriately and precisely. The real challenge for enterprises is to build strong relationships between those applications in terms of technical, development, and data integration. Enterprise application integration (EAI) solutions facilitate enterprises as an intermediary supervisor. The main purpose of EAI is to work as a middleware supporting and integrating all business processes, internal and external. Supply chain management, customer relationship management, billing, and finance are some of the major applications that need to be integrated, especially with respect to their databases. This paper discusses EAI, its characteristics, distribution layers, and formation. In addition, we argue that incorporating data mining is a helpful and positive addition to the original EAI architecture. Although data mining implementation with EAI serves only to support enterprise information and processes, it can work with almost every application's database(s) before or after integration. An enhanced framework for EAI is therefore presented, highlighting the inclusion of data mining in the original EAI architecture.
Developing a framework and algorithm for scalability to evaluate the performance and throughput of CRM systems
Scalability in hardware and/or software is an important factor for enhancing the performance of running processes as well as the throughput of the system of business organizations. This paper explores the need for scalability and issues related to extending the resources in order to ensure an improved and scaled-up Customer Relationship Management (CRM) architecture. The main contribution discussed in this paper is the proposal of a conceptual framework for measuring the process performance and throughput of the system beyond the selection of the type of scalability. Furthermore, this paper concerns the CRM system, as customer requests, their online transactions, and responses need a fast and efficient system. Taking into consideration all these factors, ultimately this paper proposed a customer-friendly framework for measuring the process performance and throughput of the system. Finally, the proposed framework’s steps are shown in an algorithm calculating process performance and throughput of the system.
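The abstract does not reproduce the algorithm itself, but a throughput-and-performance summary in its spirit might look like the following sketch. The metric names, example numbers, and the Little's-law concurrency estimate are assumptions for illustration, not the paper's actual formulation.

```python
def system_metrics(completed_requests, elapsed_seconds, response_times):
    """Summarise throughput and process performance for a service node
    (hypothetical sketch, not the paper's algorithm)."""
    throughput = completed_requests / elapsed_seconds          # requests/second
    avg_response = sum(response_times) / len(response_times)   # seconds/request
    # Little's law (L = lambda * W): estimated average in-flight requests
    concurrency = throughput * avg_response
    return {"throughput_rps": throughput,
            "avg_response_s": avg_response,
            "est_concurrency": concurrency}

before = system_metrics(1200, 60, [0.50] * 10)  # baseline CRM node (toy numbers)
after = system_metrics(3000, 60, [0.25] * 10)   # after scaling out (toy numbers)
```

Comparing the two summaries before and after a scale-up is the kind of measurement step the proposed framework formalises.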
The effect of automatic assessment on novice programming: Strengths and limitations of existing systems
Computer programming is a constant source of concern for students in introductory programming courses, and high failure rates occur every semester due to a lack of adequate programming skills. No student becomes a programmer overnight, because such learning requires proper guidance as well as consistent practice with programming exercises. The role of instructors in developing students' learning skills is crucial for providing feedback on their errors and improving their knowledge accordingly; on the other hand, with large class sizes, instructors are overloaded when trying to focus on each individual student's errors. To address these issues, researchers have developed numerous Automatic Assessment (AA) systems that not only evaluate students' programs but also provide instant feedback on their errors and reduce the workload of instructors. Given the large pool of existing systems, it is difficult to cover every system in one study. This paper therefore provides a comprehensive overview of some of the existing systems based on three analysis approaches: dynamic, static, and hybrid. Moreover, it discusses the strengths and limitations of these systems and suggests some potential recommendations regarding AA specifications for novice programming, which may help in standardizing these systems.
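A dynamic-analysis assessor of the kind surveyed here boils down to running a submission against test cases and turning failures into feedback. The sketch below is a minimal illustration with a hypothetical buggy submission; it does not represent any particular AA system.

```python
def assess(submission, test_cases):
    """Dynamically assess a student function against (args, expected) pairs,
    returning a score in [0, 1] and per-case feedback."""
    feedback, passed = [], 0
    for args, expected in test_cases:
        try:
            result = submission(*args)
        except Exception as exc:  # runtime errors become feedback, not crashes
            feedback.append(f"{args}: raised {type(exc).__name__}")
            continue
        if result == expected:
            passed += 1
            feedback.append(f"{args}: ok")
        else:
            feedback.append(f"{args}: expected {expected}, got {result}")
    return passed / len(test_cases), feedback

# A hypothetical (buggy) student answer to "return the maximum of a list"
def student_max(xs):
    m = 0  # bug: fails for all-negative lists
    for x in xs:
        if x > m:
            m = x
    return m

score, notes = assess(student_max, [(([1, 5, 3],), 5), (([-4, -2],), -2)])
```

The instant per-case feedback ("expected -2, got 0") is exactly what relieves instructors of tracing each student's errors by hand.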
EffiMob-Net: A Deep Learning-Based Hybrid Model for Detection and Identification of Tomato Diseases Using Leaf Images
As tomatoes are the most consumed vegetable in the world, production should be increased to fulfill the vast demand for this vegetable. Global warming, climate changes, and other significant factors, including pests, badly affect tomato plants and cause various diseases that ultimately affect the production of this vegetable. Several strategies and techniques have been adopted for detecting and averting such diseases to ensure the survival of tomato plants. Recently, the application of artificial intelligence (AI) has significantly contributed to agronomy in the detection of tomato plant diseases through leaf images. Deep learning (DL)-based techniques have been largely utilized for detecting tomato leaf diseases. This paper proposes a hybrid DL-based approach for detecting tomato plant diseases through leaf images. To accomplish the task, this study presents the fusion of two pretrained models, namely, EfficientNetB3 and MobileNet (referred to as the EffiMob-Net model) to detect tomato leaf diseases accurately. In addition, model overfitting was handled using various techniques, such as regularization, dropout, and batch normalization (BN). Hyperparameter tuning was performed to choose the optimal parameters for building the best-fitting model. The proposed hybrid EffiMob-Net model was tested on a plant village dataset containing tomato leaf disease and healthy images. This hybrid model was evaluated based on the best classifier with respect to accuracy metrics selected for detecting the diseases. The success rate of the proposed hybrid model for accurately detecting tomato leaf diseases reached 99.92%, demonstrating the model’s ability to extract features accurately. This finding shows the reliability of the proposed hybrid model as an automatic detector for tomato plant diseases that can significantly contribute to providing better solutions for detecting other crop diseases in the field of agriculture.
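The paper fuses the two pretrained backbones at the feature level; as a simplified stand-in, the sketch below shows late fusion by weighted averaging of class probabilities. The class names and softmax outputs are made up for illustration.

```python
def fuse_predictions(probs_a, probs_b, weight_a=0.5):
    """Late fusion of two models' class probabilities by weighted averaging --
    a simplified illustration of combining two backbones' outputs."""
    fused = [weight_a * a + (1 - weight_a) * b for a, b in zip(probs_a, probs_b)]
    best = max(range(len(fused)), key=fused.__getitem__)  # argmax over classes
    return best, fused

classes = ["healthy", "early_blight", "leaf_mold"]  # hypothetical labels
effnet_probs = [0.20, 0.70, 0.10]   # hypothetical EfficientNetB3-style softmax
mobile_probs = [0.30, 0.45, 0.25]   # hypothetical MobileNet-style softmax
best, fused = fuse_predictions(effnet_probs, mobile_probs)
label = classes[best]
```

Feature-level fusion, as used in EffiMob-Net, instead concatenates the backbones' feature maps before a shared classification head; the averaging above only conveys the ensemble intuition.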
Machine Learning, Deep Learning, and Mathematical Models to Analyze Forecasting and Epidemiology of COVID-19: A Systematic Literature Review
COVID-19 is a disease caused by SARS-CoV-2 and has been declared a worldwide pandemic by the World Health Organization due to its rapid spread. Since the first case was identified in Wuhan, China, the battle against this deadly disease has disrupted almost every field of life. Medical staff and laboratories are leading from the front, but researchers from various fields and governmental agencies have also proposed healthy ideas to protect each other. In this article, a Systematic Literature Review (SLR) is presented to highlight the latest developments in analyzing COVID-19 data using machine learning and deep learning algorithms. The studies related to Machine Learning (ML), Deep Learning (DL), and mathematical models discussed in this research have shown a significant impact on forecasting and the spread of COVID-19. The results and discussion presented in this study follow the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Out of 218 articles selected at the first stage, 57 met the criteria and were included in the review process. The findings are therefore associated with those 57 studies, which recorded that CNN (DL) and SVM (ML) are the most used algorithms for forecasting, classification, and automatic detection. The compartmental models discussed are important because they are useful for measuring the epidemiological features of COVID-19. Current findings suggest that it will take around 1.7 to 140 days for the epidemic to double in size, based on the selected studies. The 12 estimates for the basic reproduction number range from 0 to 7.1. The main purpose of this research is to illustrate the use of ML, DL, and mathematical models that can help researchers generate valuable solutions for higher authorities and the healthcare industry to reduce the impact of this epidemic.
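The compartmental models discussed are typified by the SIR model, whose basic reproduction number is R0 = beta/gamma. A forward-Euler sketch with an assumed R0 of 2.5 (within the 0 to 7.1 range reported above; the population size and rates are illustrative, not from any reviewed study):

```python
def sir_simulate(beta, gamma, s0, i0, days, dt=0.1):
    """Forward-Euler integration of the classic SIR compartmental model.
    Returns the infected-count curve sampled every dt days."""
    s, i, r = float(s0), float(i0), 0.0
    n = s0 + i0
    infected = [i]
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n   # new infections per unit time
        new_rec = gamma * i          # recoveries per unit time
        s -= new_inf * dt
        i += (new_inf - new_rec) * dt
        r += new_rec * dt
        infected.append(i)
    return infected

# R0 = beta / gamma = 0.5 / 0.2 = 2.5 (assumed for illustration)
curve = sir_simulate(beta=0.5, gamma=0.2, s0=99_990, i0=10, days=60)
peak = max(curve)
```

The early exponential phase of this curve is what yields the doubling-time estimates the review aggregates.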
Energy efficiency is an important factor contributing to sustainability and to reducing energy costs. There has been increasing attention on residential energy performance, but detailed studies exploring cost-effectiveness analysis, predictive modelling, and adoption modelling are still lacking. This study addresses these issues by analysing a large Energy Performance Certificate (EPC) dataset in the UK in 2024, comprising over 4.8 million property records. The research explores retrofit cost and impact data to investigate three critical research questions. First, we evaluated the energy efficiency and CO₂ savings per pound spent across different property types in the UK, analysing 41 retrofit improvement types using statistical analysis. Second, machine learning models were trained to predict a building's energy rating from its efficiency and structural traits. Third, standard retrofit interventions were assessed to define the actual CO₂ savings by integrating retrofit adoption probabilities and Monte Carlo simulations. Our results show that the highest energy efficiency per pound spent could be achieved with inexpensive improvements like low-energy lighting, installing hot water cylinders, and draught proofing. The Voting Classifier model (XGB + RF) achieved the best discrimination with 70.8%, outperforming XGBoost (69.4%), Random Forest (69.09%), and MLP Neural Network (59.5%). Simulations based on different adoption scenarios demonstrate that even a small increase in adoption rates can lead to significant national CO₂ reductions. Overall, this study provides a transferable methodology that combines cost-effectiveness analysis, predictive analysis, and retrofit adoption modelling for sustainable housing research in the UK. The findings offer insights to guide retrofit prioritisation, policy targeting, and future studies in sustainable residential energy planning.
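The Monte Carlo adoption modelling can be sketched as repeated random draws of which households adopt which measures. The adoption probabilities and per-household savings below are invented for illustration; they are not figures from the EPC dataset.

```python
import random

def simulate_national_savings(measures, households, runs=200, seed=1):
    """Monte Carlo estimate of mean annual CO2 savings: each household adopts
    each retrofit measure independently with the given probability."""
    rng = random.Random(seed)
    totals = []
    for _ in range(runs):
        total = 0.0
        for adopt_p, saving_t in measures:
            # binomial draw: how many of the households adopt this measure
            adopters = sum(1 for _ in range(households) if rng.random() < adopt_p)
            total += adopters * saving_t
        totals.append(total)
    return sum(totals) / len(totals)

# (adoption probability, tonnes CO2 saved per adopting household per year)
measures = [(0.30, 0.12),   # e.g. low-energy lighting (illustrative)
            (0.10, 0.40),   # e.g. draught proofing + cylinder insulation
            (0.05, 1.10)]   # e.g. solid wall insulation
mean_saving = simulate_national_savings(measures, households=500)
```

Re-running the simulation with higher adoption probabilities quantifies how much extra national CO₂ reduction a policy nudge could buy, which is the scenario comparison the study performs.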
Aim: This study illustrates the significance of transport units in monitoring diverse paths using a critical system model. The suggested method identifies proficiency and framework patterns that evolve across different time intervals, utilising machine learning optimisation that incorporates sequence learning with interconnected neural networks. Background: As an increasing number of cars are interconnected for data communication to illustrate available routes, it is essential to have suitable connectivity for transportation units. This study may facilitate intelligent connectivity across transportation units by employing essential shifts without compromising the efficiency of connected units. Objective: This study aimed to integrate the parametric design representations with neural networks to address the primary goal of min-max functions, hence enhancing the efficiency of transportation units. Method: The method presented here has employed sequenced learning patterns to select the shortest path while rapidly altering pathway representations. Results: The alterations in pathways influenced by emissions have been noted and excluded from connectivity units to enhance the overall lifetime of transportation units in the projected model. Conclusion: The results have been examined through a simulation framework encompassing four scenarios, wherein potential connectedness has enhanced both the proficiency rate and the structure while minimising the shifts. Subsequently, a comparison of the proposed method with the existing methodology, where total efficiency has been assessed, has revealed the proposed method to maximise the efficiency to 95%. In contrast, the existing strategy has yielded a reduced efficiency of 86%.
Author Correction: Ensemble machine learning framework for predicting maternal health risk during pregnancy
Correction to: Scientific Reports https://doi.org/10.1038/s41598-024-71934-x, published online 14 September 2024. The Funding section in the original version of this Article was omitted. The Funding section now reads: “This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University (KAU), Jeddah, Saudi Arabia has funded this project, under grant no. (GPIP: 1834-611-2024). The authors, therefore, acknowledge with thanks DSR for technical and financial support.” The original Article has been corrected.
Maternal health risks can cause a range of complications for women during pregnancy. High blood pressure, abnormal glucose levels, depression, anxiety, and other maternal health conditions can all lead to pregnancy complications. Proper identification and monitoring of risk factors can help reduce pregnancy complications. The primary goal of this research is to use real-world datasets to identify and predict Maternal Health Risk (MHR) factors. To this end, we developed and implemented the Quad-Ensemble Machine Learning framework for Maternal Health Risk Classification (QEML-MHRC). The methodology used a variety of Machine Learning (ML) models, which were then integrated with four ensemble ML techniques to improve prediction. The dataset, collected from various maternity hospitals and clinics, was subjected to nineteen training and testing experiments. According to the exploratory data analysis, the most significant risk factors for pregnant women include high blood pressure, low blood pressure, and high blood sugar levels. The study proposed a novel approach to dealing with high-risk factors linked to maternal health, and class-specific performance was elaborated further to properly understand the distinction between high, low, and medium risks. All tests yielded outstanding results when predicting the degree of risk during pregnancy. In terms of class performance, the "HR" class outperformed the others, with 90% predicted correctly. GBT with ensemble stacking outperformed the rest and demonstrated remarkable performance across all evaluation measures (0.86) for all classes in the dataset. A key strength of the models used in this work is the ability to measure model performance using a class-wise distribution. The proposed approach can help medical experts assess maternal health risks, saving lives and preventing complications throughout pregnancy.
The prediction approach presented in this study can detect high-risk pregnancies early on, allowing for timely intervention and treatment. This study’s development and findings have the potential to raise public awareness of maternal health issues.
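The class-wise evaluation emphasised above reduces to computing precision and recall per risk class from the raw predictions. The labels below are toy values, not the study's data.

```python
def per_class_metrics(y_true, y_pred):
    """Return {class: (precision, recall)} computed from raw predictions --
    the class-wise view (high / medium / low risk) used in the study."""
    classes = sorted(set(y_true) | set(y_pred))
    metrics = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        metrics[c] = (precision, recall)
    return metrics

# Toy risk labels (illustrative only)
y_true = ["high", "high", "low", "mid", "low", "high"]
y_pred = ["high", "mid",  "low", "mid", "low", "high"]
m = per_class_metrics(y_true, y_pred)
```

Reporting per-class numbers like these, rather than a single overall accuracy, is what exposes whether the high-risk class is being predicted reliably.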
Brain tumor (BT) is a dreadful disease and one of the foremost causes of death in human beings. BT develops mainly in two stages and varies in volume, form, and structure; it can be treated with special clinical procedures such as chemotherapy, radiotherapy, and surgical mediation. With revolutionary advancements in radiomics and medical imaging research in the past few years, computer-aided diagnostic (CAD) systems, especially deep learning, have played a key role in the automatic detection and diagnosis of various diseases and have provided accurate decision support systems for medical clinicians. The convolutional neural network (CNN) is a commonly utilized methodology for detecting various diseases from medical images because it is capable of extracting distinct features from an image under investigation. In this study, a deep learning approach is utilized to extract distinct features from brain images in order to detect BT. Hence, a CNN from scratch and transfer learning models (VGG-16, VGG-19, and LeNet-5) are developed and tested on brain images to build an intelligent decision support system for detecting BT. Since deep learning models require large volumes of data, data augmentation is used to populate the existing dataset synthetically in order to obtain the best-fitting models. Hyperparameter tuning was conducted to set the optimum parameters for training the models. The achieved results show that the VGG models outperformed the others with an accuracy rate of 99.24%, average precision of 99%, average recall of 99%, average specificity of 99%, and average F1-score of 99% each. Compared with other state-of-the-art models in the literature, the proposed models perform better in terms of accuracy, sensitivity, specificity, and F1-score.
Moreover, the comparative analysis shows that the proposed models are reliable enough both to detect BT and to help medical practitioners diagnose it.
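The feature extraction at the heart of the CNN approach described above can be illustrated with a minimal 2D convolution in plain Python. This is only a sketch of the core operation, not the study's actual VGG/LeNet pipeline; the image and edge-detection kernel below are invented toy values.

```python
# Minimal 2D convolution (valid padding, stride 1): the core operation a
# CNN uses to extract features from an image. Real models such as VGG-16
# stack many such layers with learned kernels.

def conv2d(image, kernel):
    """Apply a 2D kernel to a 2D image (lists of lists), 'valid' mode."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge kernel responds strongly where intensity changes
# left-to-right -- one kind of "distinct feature" a CNN can learn.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge_kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
features = conv2d(image, edge_kernel)
print(features)  # high responses near the vertical edge
```

In a trained network, the kernel weights are learned from data rather than hand-crafted, and the resulting feature maps feed further convolutional and fully connected layers that produce the tumor/no-tumor decision.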
Improving energy efficiency is a major concern in residential buildings for economic prosperity and environmental stability. Despite growing interest in this area, limited research has systematically identified the primary factors that influence residential energy efficiency at scale, leaving a significant research gap. This paper addresses that gap by exploring the key determinants of energy efficiency in residential properties using a large-scale energy performance certificate dataset. Dimensionality reduction and feature selection techniques were used to pinpoint the key predictors of energy efficiency. The consistent results emphasise the importance of CO2 emissions per floor area, current energy consumption, current heating cost, and current CO2 emissions as primary determinants, alongside factors such as total floor area, lighting cost, and heated rooms. Further, machine learning experiments showed that Random Forest, Gradient Boosting, XGBoost, and LightGBM delivered the lowest mean square error scores, ranging from 5.477 to 7.733, demonstrating the effectiveness of advanced algorithms in forecasting energy performance. These findings provide valuable data-driven insights for stakeholders seeking to enhance energy efficiency in residential buildings. Additionally, a customised machine learning interface was developed to visualise the multifaceted data analyses and model evaluations, promoting informed decision-making.
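One simple form of the feature selection used in studies like this is ranking features by their correlation with the target. The sketch below illustrates the idea in plain Python; the feature names and numbers are invented for the example and are not the actual EPC dataset columns or results.

```python
# Toy illustration of correlation-based feature ranking, one simple form
# of feature selection: score each candidate feature by the absolute
# Pearson correlation with the target and sort.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical records: each feature paired with an energy-efficiency score.
features = {
    "co2_per_floor_area": [10, 8, 6, 4, 2],
    "total_floor_area":   [50, 80, 60, 90, 70],
    "heated_rooms":       [2, 3, 3, 4, 5],
}
efficiency = [40, 55, 65, 75, 90]

# Rank features by absolute correlation with the target.
ranked = sorted(features, key=lambda f: -abs(pearson(features[f], efficiency)))
print(ranked)
```

In practice a study of this kind would combine such filter-style rankings with dimensionality reduction (e.g. PCA) and model-based importance scores before fitting the regression models.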
Advancements in digital technologies have transformed the world by providing more opportunities and possibilities. However, elderly people face several challenges in using modern technology, leading to digital exclusion, which can negatively impact sustainable development. This research addresses current digital exclusion by investigating the challenges older people face amid evolving digital technologies, focusing on economic, social, and environmental sustainability. Three distinct goals are pursued in this study: to perform a detailed literature review to identify gaps in the current understanding of digital exclusion among the elderly, to identify the primary factors affecting digital exclusion in the elderly, and to analyze the patterns and trends in different countries, with a focus on differentiating between High-Income Countries (HICs) and Lower Middle-Income Countries (LMICs). The research strategy combines a literature review with a quantitative analysis of digital exclusion data from five cohorts. This study uses statistical analyses, such as PCA, the chi-square test, one-way ANOVA, and two-way ANOVA, to present a complete assessment of the digital issues that older people experience. The expected results include the identification of factors influencing the digital divide and an enhanced awareness of how digital exclusion varies among different socio-economic and cultural settings. The data used in this study were obtained from five separate cohorts over a five-year period from 2019 to 2023: ELSA (UK), SHARE (Austria, Germany, France, Estonia, Bulgaria, and Romania), LASI (India), MHAS (Mexico), and ELSI (Brazil). It was discovered that the digital exclusion rate differs significantly across HICs and LMICs, with the UK having the fewest (11%) and India the most (91%) digitally excluded people.
Three primary factors were found to cause digital exclusion among the elderly, irrespective of the income level of the country: socio-economic status, health-related issues, and age-related limitations. Further analysis showed that the country type has a significant influence on digital exclusion rates among the elderly, and that age group also plays an important role. Additionally, significant variations were observed in the life satisfaction of digitally excluded people within HICs and LMICs. The interaction between country type and digital exclusion also showed a major influence on health ratings. This study has a broad impact, since it not only contributes to academic knowledge of digital exclusion but also has practical applications for communities. By investigating the barriers that prevent older people from adopting digital technologies, this study will assist in developing better policies and community activities that help them make use of the benefits of the digital era, making societies more equitable and connected. This paper provides detailed insight into intergenerational equity, which is vital for embedding the principles of sustainable development. Furthermore, it makes a strong case for digital inclusion to be part of broader efforts and policies for creating sustainable societies.
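A chi-square test of independence, one of the statistical tools named above, can be sketched in a few lines of Python. The 2x2 contingency table below (country type versus digital exclusion) uses invented counts chosen to echo the reported 11% vs 91% exclusion rates, not the study's actual data.

```python
# Pearson chi-square test of independence on a contingency table: compare
# observed cell counts with the counts expected if rows and columns were
# independent.

def chi_square(table):
    """Chi-square statistic for a contingency table given as a list of rows."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    grand = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / grand  # expected under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# Rows: HIC, LMIC; columns: digitally included, digitally excluded.
observed = [[89, 11],
            [9, 91]]
stat = chi_square(observed)
print(round(stat, 2))
```

A statistic this large, far above the 3.84 critical value for one degree of freedom at the 5% level, would indicate a significant association between country type and digital exclusion. In practice a library routine such as `scipy.stats.chi2_contingency` would also return the p-value directly.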
The energy sector plays a vital role in driving environmental and social advancements. Accurately predicting energy demand across various time frames offers numerous benefits, such as facilitating a sustainable transition and the planning of energy resources. This research focuses on predicting energy consumption using three individual models: Prophet, eXtreme Gradient Boosting (XGBoost), and long short-term memory (LSTM). Additionally, it proposes an ensemble model that combines the predictions from all three to enhance overall accuracy. This approach aims to leverage the strengths of each model for better prediction performance. We evaluate the accuracy of the ensemble model using Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Root Mean Square Error (RMSE). The research investigates the use of real smart-meter data gathered from 5567 London residences as part of the UK Power Networks-led Low Carbon London project, obtained from the London Datastore. The performance of each individual model was recorded as follows: 62.96% for the Prophet model, 70.37% for LSTM, and 66.66% for XGBoost. In contrast, the proposed ensemble model, which combines LSTM, Prophet, and XGBoost, achieved an impressive accuracy of 81.48%, surpassing the individual models. The findings of this study indicate that the proposed model enhances energy efficiency and supports the transition towards a sustainable energy future. Consequently, it can accurately forecast the maximum loads of distribution networks for London households. In addition, this work contributes to the improvement of load forecasting for distribution networks, which can guide higher authorities in developing sustainable energy consumption plans.
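The ensemble idea described above can be sketched with a simple equal-weight average of several models' forecasts. The "model" outputs and actual loads below are made-up numbers for illustration only; the study's real ensemble combines trained Prophet, LSTM, and XGBoost forecasts.

```python
# Sketch of ensemble forecasting: average the predictions of several
# models element-wise and compare error against the individual models.

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

actual = [100.0, 120.0, 90.0, 110.0]  # hypothetical household loads

# Stand-ins for Prophet, LSTM, and XGBoost forecasts of the same series.
forecasts = {
    "prophet": [90.0, 130.0, 85.0, 120.0],
    "lstm":    [105.0, 115.0, 95.0, 105.0],
    "xgboost": [110.0, 112.0, 80.0, 118.0],
}

# Equal-weight ensemble: element-wise mean of the three forecasts.
ensemble = [sum(vals) / len(vals) for vals in zip(*forecasts.values())]

errors = {name: mape(actual, pred) for name, pred in forecasts.items()}
errors["ensemble"] = mape(actual, ensemble)
print(errors)  # the ensemble error is lowest in this toy example
```

Averaging helps when the individual models make partially uncorrelated errors, so their mistakes cancel; weighted or stacked ensembles generalise the same idea by learning the combination weights.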
Current teaching
- BSc (Hons) Data Science
- BSc (Hons) Computer Science
- MSc Advanced Computer Science