European Journal of Computer Science and Information Technology (EJCSIT)

machine learning

AI-Powered Robotics and Automation: Innovations, Challenges, and Pathways to the Future (Published)

Artificial Intelligence (AI) has profoundly transformed robotics and automation by enabling unprecedented levels of intelligence, adaptability, and efficiency. This study explores the integration of AI into robotics, focusing on its applications, innovations, and implications for industries ranging from healthcare to manufacturing. From enhancing operational workflows to enabling autonomous decision-making, AI is reshaping how robots interact with humans and their environments. We propose a framework for seamless AI-driven robotics integration, emphasizing advancements in learning algorithms, sensor technologies, and human-robot collaboration. The study also identifies key challenges, including ethical concerns, scalability issues, and resource constraints, while offering actionable insights and future directions. Results indicate significant enhancements in precision, operational efficiency, and decision-making capabilities, positioning AI-powered robotics as a cornerstone of modern automation. Furthermore, the discussion extends to exploring the role of AI in emerging domains, such as swarm robotics, predictive analytics, and soft robotics, offering a forward-looking perspective on this transformative field.

Keywords: artificial intelligence, robotics, automation, machine learning, human-robot collaboration, IoT, ethical AI, industrial applications

Robust detection of LLM-generated text through transfer learning with pre-trained Distilled BERT model (Published)

Detecting text generated by large language models (LLMs) is a growing challenge as these models produce outputs nearly indistinguishable from human writing. This study explores multiple detection approaches, including a Multi-Layer Perceptron (MLP), Long Short-Term Memory (LSTM) networks, a Transformer block, and a fine-tuned distilled BERT model. Leveraging BERT’s contextual understanding, we train the model on diverse datasets containing authentic and synthetic texts, focusing on features like sentence structure, token distribution, and semantic coherence. The fine-tuned BERT outperforms baseline models, achieving high accuracy and robustness across domains, with superior AUC scores and efficient computation times. By incorporating domain-specific training and adversarial techniques, the model adapts to sophisticated LLM outputs, improving detection precision. These findings underscore the efficacy of pretrained transformer models for ensuring authenticity in digital communication, with potential applications in mitigating misinformation, safeguarding academic integrity, and promoting ethical AI usage.

Keywords: Classifier, GenAI, detection, fine-tuning, large language models, machine learning, natural language processing, pretraining
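To make the fine-tuning approach concrete, here is a minimal sketch of training DistilBERT as a binary human-vs-LLM text classifier with the Hugging Face transformers library. The toy dataset and all hyperparameters are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch: fine-tuning DistilBERT to classify text as human- or LLM-written.
# The two-sample dataset and hyperparameters are placeholders, not the paper's setup.
import torch
from torch.utils.data import Dataset
from transformers import (DistilBertTokenizerFast,
                          DistilBertForSequenceClassification,
                          Trainer, TrainingArguments)

texts = ["An example human-written sentence.", "An example LLM-generated sentence."]
labels = [0, 1]  # 0 = human, 1 = LLM-generated

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
encodings = tokenizer(texts, truncation=True, padding=True, max_length=256)

class TextDataset(Dataset):
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

args = TrainingArguments(output_dir="detector", num_train_epochs=3,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=TextDataset(encodings, labels)).train()
```

In practice the training corpus would mix authentic and synthetic texts from multiple domains, as the abstract describes.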

AI vs. AI: The Digital Duel Reshaping Fraud Detection (Published)

In the evolving landscape of financial security, a new battlefront has emerged: synthetic identity fraud powered by Generative Artificial Intelligence (GAI). This paper examines the high-stakes digital duel between fraudsters wielding GAI and the adaptive defense mechanisms of financial institutions. The paper explores how GAI-created synthetic identities challenge traditional fraud detection paradigms with convincing backstories, digital footprints, and AI-generated images. These artificial personas’ unprecedented scale and sophistication threaten to overwhelm existing security infrastructures, potentially compromising the integrity of financial systems and identity verification frameworks. Our analysis reveals large-scale synthetic identity campaigns’ far-reaching economic implications and disruptive potential across multiple sectors. It also investigates cutting-edge countermeasures, including adversarial machine learning, real-time anomaly detection, and multi-modal data analysis techniques. As this technological arms race intensifies, the paper concludes by proposing future research directions and emphasizing the critical need for collaborative initiatives to stay ahead in this ever-evolving digital battlefield.

Keywords: Cybersecurity, Fraud Detection, generative AI, machine learning, synthetic identities
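The countermeasures surveyed here are described at a conceptual level; as one hypothetical illustration of real-time anomaly detection, the sketch below scores incoming identity profiles with scikit-learn's IsolationForest. The feature set and contamination rate are assumptions for illustration only.

```python
# Hypothetical sketch: flagging anomalous identity profiles with an Isolation Forest.
# Feature choices are illustrative; real systems would use far richer signals.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Toy features per applicant: account age (days), credit inquiries, address changes.
legit = rng.normal(loc=[2000, 2, 1], scale=[500, 1, 0.5], size=(500, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(legit)

incoming = np.array([[30, 15, 6]])          # a suspiciously "new" identity
print(model.predict(incoming))              # -1 means flagged as anomalous
print(model.score_samples(incoming))        # lower scores are more anomalous
```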

Leveraging ML for Anomaly Detection in Healthcare Data Warehouses (Published)

The rapid spread of digitalisation has led to unprecedented growth in healthcare data generation, particularly from electronic health records (EHRs) and medical equipment. This growth creates challenges for data integrity and anomaly detection, including fraudulent transactions, medication errors, and system failures. Modern healthcare data challenges traditional anomaly detection methods because of its high dimensionality and complexity. Machine learning offers a strong solution, using algorithms such as Gaussian Mixture Models and One-Class SVM, as well as deep learning methods such as Autoencoders and Recurrent Neural Networks, to detect anomalies in healthcare data warehouse settings [1]. This study reports how ML can advance patient care, ensure data validity, and reduce costs through real-time monitoring, fraud detection, and early detection of diseases. Applying ML-based anomaly detection is likely to improve operational performance, patient safety, and decision-making in healthcare organizations, provided that issues of poor data quality, limited model interpretability, and real-time detection are addressed [2].

Keywords: Operational Efficiency, Patient Safety, anomaly detection, fraud detection, healthcare data warehouses, machine learning
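A minimal sketch of one technique the abstract names, a Gaussian Mixture Model used as an anomaly scorer over warehouse records, follows; the features and the 1% flagging threshold are illustrative placeholders, not details from the study.

```python
# Minimal sketch: GMM-based anomaly scoring, one approach the abstract names.
# Features and threshold are illustrative placeholders, not from the study.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Toy warehouse records: billed amount, length of stay (days), number of procedures.
normal = rng.normal(loc=[1200.0, 3.0, 2.0], scale=[300.0, 1.0, 0.8], size=(1000, 3))

gmm = GaussianMixture(n_components=3, random_state=1).fit(normal)
threshold = np.percentile(gmm.score_samples(normal), 1)  # flag the lowest 1%

record = np.array([[25000.0, 1.0, 14.0]])   # e.g. a possible billing anomaly
if gmm.score_samples(record)[0] < threshold:
    print("record flagged for review")
```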

Advancements in Robotics Process Automation: A Novel Model with Enhanced Empirical Validation and Theoretical Insights (Published)

Robotics Process Automation (RPA) is revolutionizing business operations by significantly enhancing efficiency, productivity, and operational excellence across various industries. This manuscript delivers a comprehensive review of recent advancements in RPA technologies and proposes a novel model designed to elevate RPA capabilities. Incorporating cutting-edge artificial intelligence (AI) techniques, advanced machine learning algorithms, and strategic integration frameworks, the proposed model aims to push RPA’s boundaries. The paper includes a detailed analysis of functionalities, implementation strategies, and expanded empirical validation through rigorous testing across multiple industries. Theoretical insights underpin the model’s design, offering a robust framework for its application. Limitations of current models are critically discussed, and future research directions are outlined to guide the next wave of RPA innovation. This study offers valuable guidance for practitioners and researchers aiming to advance RPA technology and its applications.

Keywords: Artificial Intelligence, RPA, data integration, machine learning

Real Time Credit Card Fraud Detection and Reporting System Using Machine Learning (Published)

This study addresses the critical issue of real-time credit card fraud detection using machine learning. The primary goal is to develop a model that promptly identifies fraudulent transactions and alerts users. Two algorithms, Random Forest and Decision Tree Classifier, were used alongside various sampling techniques to balance the dataset and enhance performance. Six models were created, each with different accuracy levels in fraud detection. Key findings include a higher incidence of fraud among individuals over 75 years old, likely due to less familiarity with modern transaction methods. Additionally, a majority of transactions involved females, indicating a potentially higher fraud risk in these transactions. The hyperparameter-tuned Random Forest with SMOTE model was the most effective, achieving 97% accuracy, a 95% F1 score, and 98% precision. For practical application, this model was integrated with Twilio for real-time fraud alerts, proving successful in sending timely, accurate notifications. The study offers valuable insights and a robust solution for real-time fraud detection and response. Regular performance evaluations of the model are recommended to maintain its effectiveness against evolving fraud patterns.

Keywords: Algorithm, credit card fraud, machine learning, real-time detection, Twilio integration
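As a hedged sketch of the best-performing configuration (Random Forest trained on SMOTE-balanced data), the code below uses scikit-learn and imbalanced-learn on a synthetic imbalanced dataset; the hyperparameters are assumptions, and the Twilio alerting step is omitted.

```python
# Sketch of a Random Forest + SMOTE pipeline; hyperparameters are illustrative.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a highly imbalanced credit card dataset (~1% fraud).
X, y = make_classification(n_samples=20000, n_features=20, weights=[0.99],
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=42)

# Oversample only the training split so the test set stays realistic.
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_res, y_res)
print(classification_report(y_test, clf.predict(X_test)))
```

Oversampling only the training split avoids leaking synthetic minority samples into evaluation, which would otherwise inflate the reported scores.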

Data-Driven Framework for Crop Categorization using Random Forest-Based Approach for Precision Farming Optimization (Published)

Making incorrect choices when selecting crops can result in substantial financial losses for farmers, primarily because of a limited understanding of the unique needs of each crop. Each farm possesses unique characteristics, influencing the effectiveness of modern agricultural solutions. Challenges persist in optimizing farming methods to maximize yield. This study aims to mitigate these issues by developing a data-driven crop classification and cultivation advisory system, leveraging machine learning algorithms and agricultural data. By analysing variables such as soil nutrient levels, temperature, humidity, pH, and rainfall, the system offers tailored recommendations for crop selection and cultivation practices. This approach optimizes resource utilization, enhances crop productivity, and promotes sustainable agriculture. The study emphasizes the importance of pre-processing data, such as handling missing values and normalizing features, to ensure reliable model training. Various machine learning models, including Random Forests, Bagging Classifier, and AdaBoost Classifier, were employed, demonstrating high accuracy rates in crop classification tasks. The integration of real-time weather data, market prices, and profitability analysis further refines decision-making, while a mobile application facilitates convenient access for farmers. By incorporating user feedback and continuous data collection, the system’s performance can be continuously improved, offering precise and economically viable agricultural advice.

Keywords: Random Forest, crop classification, cultivation advisory, machine learning, precision farming
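The advisory system itself is not reproduced here; the following sketch shows the core classification step over the variables the abstract lists (soil nutrients, temperature, humidity, pH, rainfall), using a Random Forest as in the study. The data and column names are placeholders.

```python
# Illustrative crop-recommendation classifier over the features the abstract lists.
# The DataFrame is a tiny placeholder for the study's agricultural dataset.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.DataFrame({
    "N": [90, 40, 20], "P": [42, 60, 30], "K": [43, 55, 25],
    "temperature": [21.5, 27.0, 18.2], "humidity": [82.0, 65.0, 70.5],
    "ph": [6.5, 7.1, 5.9], "rainfall": [200.0, 80.0, 150.0],
    "crop": ["rice", "maize", "wheat"],
})

X, y = df.drop(columns="crop"), df["crop"]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

sample = pd.DataFrame([{"N": 85, "P": 45, "K": 40, "temperature": 22.0,
                        "humidity": 80.0, "ph": 6.4, "rainfall": 190.0}])
print(clf.predict(sample))  # recommended crop for these field conditions
```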

A Comparison of Two Machine Learning Techniques for the Prediction of Initial Oil in Place in the Niger Delta Region (Published)

Conventionally, experts' knowledge of the drilling features of a potential oil well is used to predict the volume of initial oil in place, applying knowledge-based models such as volumetric, material balance, and analogy methods. In this study, 816 data records were collected from Shell Petroleum Development Company (SPDC), where the volumetric method is used for prediction. These data were preprocessed and applied to two machine learning techniques, random forest and support vector regression, to predict the initial oil in place, and the results were compared with those obtained from SPDC. Computations using 4 principal features selected from the 9 available features were closer to the SPDC results than computations using all 9 features. The results of the random forest were also compared with those of the support vector regressor: the random forest predictions correlate more strongly with the field results (0.970) than those of the support vector regressor (0.832). The uniqueness of this study lies in the use of 4 predicting features (independent variables) to obtain predictions very close to the field values obtained with 9 features. Since this was achieved with random forest, it can be recommended as a reliable machine learning technique for predicting initial oil in place in the Niger Delta region.

Keywords: Analogy, initial oil in place, Niger Delta, machine learning, material balance, random forest, support vector regressor, volumetric
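A minimal sketch of the comparison methodology, assuming synthetic stand-in data: train a random forest and a support vector regressor on the same reservoir features, then compare each model's correlation with the field values. The feature names and data are placeholders, not the study's 9 (or selected 4) features.

```python
# Illustrative RF vs SVR comparison; data and features are synthetic placeholders.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(7)
X = rng.normal(size=(816, 4))                      # 4 selected reservoir features
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=816)  # stand-in IOIP

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)

for name, model in [("random forest", RandomForestRegressor(random_state=7)),
                    ("SVR", make_pipeline(StandardScaler(), SVR()))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    r, _ = pearsonr(y_te, pred)                    # correlation with "field" values
    print(name, round(r, 3))
```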

Assessing the Predictive Capability of a Machine Learning Model (Published)

The purpose of this study was to evaluate the effectiveness of an integrated machine learning system implemented to help professionals predict how patients will respond to steroid treatment for glaucoma. The research employed a quantitative methodology utilizing descriptive statistics, and the Taro Yamane formula was applied to determine a suitable sample size. Linear regression analysis was used to establish the correlation between the predictor, i.e., the novel prediction system, and the dependent variable, the effectiveness of forecasting a patient's reaction to steroid treatment. The analysis showed that implementing the novel prediction technique would be notably effective and efficient in determining a person's status in pre-trabeculectomy evaluation. The p-value (0.000) is less than the predefined significance level (alpha) of 0.05, indicating a significant finding, and the calculated t-value (33.196) exceeds the critical t-value (1.960). The correlation coefficient (R) of 0.920 demonstrates a very strong positive relationship.

Keywords: glaucoma treatment, machine learning, trabeculectomy
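The reported statistics (R, t, p) follow from a simple linear regression; the sketch below shows how such values can be computed with SciPy on placeholder data rather than the study's survey responses.

```python
# Illustrative computation of R, t, and p for a simple linear regression.
# The data here are placeholders, not the study's survey responses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
predictor = rng.normal(size=200)                   # system's predicted response
outcome = 0.9 * predictor + rng.normal(scale=0.4, size=200)  # observed effectiveness

res = stats.linregress(predictor, outcome)
t_value = res.slope / res.stderr                   # t statistic for the slope
print(f"R = {res.rvalue:.3f}, t = {t_value:.3f}, p = {res.pvalue:.4f}")
```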

An Investigation of Translation of Text Language to Sign Language Using Machine Learning (Published)

Sign language translation is among the fastest-growing areas of study today; sign language is the most natural form of communication for those with hearing loss. A hand gesture recognition device may enable deaf individuals to communicate with hearing people directly, without an intermediary. The method was developed to facilitate the automatic translation of American Sign Language into text and sound. The suggested system uses a large data collection to interpret individual words and phrases in traditional American Sign Language, alleviating any concerns the user may have about using a virtual camera. Deaf and mute persons must rely on sign language as their primary method of communication, yet a large portion of the general public cannot understand it. Signers therefore have difficulty communicating with non-signers without the help of an interpreter. The proposed technique employs data collected by Fifth Dimension Technologies (5DT) gloves to interpret hand movements as language. The data were classified into text words using a variety of machine learning techniques, including neural networks, decision tree classifiers, and k-nearest neighbors.

Citation: Alghamdi D.N. (2022) An Investigation of Translation of Text Language to Sign Language Using Machine Learning, European Journal of Computer Science and Information Technology, Vol.10, No.5, pp.41-52

Keywords: Sign Language, Translator, classify, machine learning, text language, translation
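As a hedged illustration of the classification stage, the code below trains a k-nearest-neighbors classifier on flex-sensor vectors such as a 5DT glove produces; the sensor layout, sign labels, and data are assumptions for illustration only.

```python
# Illustrative KNN classifier over glove flex-sensor vectors; data are synthetic.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(5)
# Toy training set: 5 flex-sensor readings per sample (one per finger), 3 signs.
X = np.vstack([rng.normal(loc=c, scale=0.05, size=(40, 5))
               for c in ([0.1, 0.9, 0.9, 0.9, 0.9],   # "A"-like fist shape
                         [0.1, 0.1, 0.9, 0.9, 0.9],   # "L"-like shape
                         [0.1, 0.1, 0.1, 0.1, 0.1])]) # open hand
y = ["A"] * 40 + ["L"] * 40 + ["open"] * 40

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
reading = np.array([[0.12, 0.88, 0.91, 0.87, 0.9]])   # one live glove sample
print(knn.predict(reading))                           # predicted sign label
```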
