European Journal of Computer Science and Information Technology (EJCSIT)


machine learning

Data-Driven Framework for Crop Categorization using Random Forest-Based Approach for Precision Farming Optimization (Published)

Making incorrect choices when selecting crops can result in substantial financial losses for farmers, primarily because of a limited understanding of the unique needs of each crop. Each farm possesses unique characteristics, influencing the effectiveness of modern agricultural solutions. Challenges persist in optimizing farming methods to maximize yield. This study aims to mitigate these issues by developing a data-driven crop classification and cultivation advisory system, leveraging machine learning algorithms and agricultural data. By analysing variables such as soil nutrient levels, temperature, humidity, pH, and rainfall, the system offers tailored recommendations for crop selection and cultivation practices. This approach optimizes resource utilization, enhances crop productivity, and promotes sustainable agriculture. The study emphasizes the importance of pre-processing data, such as handling missing values and normalizing features, to ensure reliable model training. Various machine learning models, including Random Forests, Bagging Classifier, and AdaBoost Classifier, were employed, demonstrating high accuracy rates in crop classification tasks. The integration of real-time weather data, market prices, and profitability analysis further refines decision-making, while a mobile application facilitates convenient access for farmers. By incorporating user feedback and continuous data collection, the system’s performance can be continuously improved, offering precise and economically viable agricultural advice.
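A minimal sketch of the kind of Random Forest crop classifier the abstract describes, trained on the listed feature types (soil nutrients, temperature, humidity, pH, rainfall). The data, feature ranges, and toy label rule below are illustrative assumptions, not the study's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 600
X = np.column_stack([
    rng.uniform(0, 140, n),    # N (soil nitrogen)
    rng.uniform(5, 145, n),    # P (phosphorus)
    rng.uniform(5, 205, n),    # K (potassium)
    rng.uniform(8, 44, n),     # temperature (deg C)
    rng.uniform(14, 100, n),   # humidity (%)
    rng.uniform(3.5, 10, n),   # pH
    rng.uniform(20, 300, n),   # rainfall (mm)
])
# Toy labels: three crop classes driven by rainfall and temperature thresholds.
y = (X[:, 6] > 160).astype(int) + (X[:, 3] > 30).astype(int)

# Normalise features (as the abstract recommends), then fit the forest.
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

In a real advisory system the predicted class would map to a crop label, and the same pipeline would be retrained as new field data and user feedback arrive.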


Keywords: Random Forest, crop classification, cultivation advisory, machine learning, precision farming.

A Comparison of Two Machine Learning Techniques for the Prediction of Initial Oil in Place in the Niger Delta Region (Published)

Conventionally, expert knowledge of the drilling features of a potential oil well is used to predict the volume of initial oil in place. Experts use knowledge-based models such as volumetric, material balance, and analogy methods for this prediction. In this study, 816 data records were collected from the Shell Petroleum Development Company (SPDC), where the volumetric method is used for prediction. These data were preprocessed and applied to two machine learning techniques, random forest and support vector regression, to predict the initial oil in place, and the results were compared with those obtained from SPDC. Computations using 4 principal features out of the 9 available were closer to the SPDC results than computations using all 9 features. The random forest results were also compared with those of the support vector regressor: the random forest predictions covary strongly (0.970) with the field results, more so than those of the support vector regressor (0.832). The uniqueness of this study lies in the use of 4 predicting features (independent variables) to obtain predictions very close to the field values obtained with 9 features. Because this was achieved with random forest, it can be recommended as a reliable machine learning technique for predicting initial oil in place in the Niger Delta region.
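A sketch of the study's comparison: random forest versus support vector regression, using only the top 4 of 9 candidate features. The synthetic features and target below are assumptions for illustration, not SPDC well data, and univariate F-scores stand in for whatever feature-selection method the authors used.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.normal(size=(816, 9))          # 9 candidate drilling features
y = 2 * X[:, 0] + X[:, 1] - X[:, 2] + 0.5 * X[:, 3] + rng.normal(0, 0.1, 816)

# Keep the 4 features most correlated with the target.
X4 = SelectKBest(f_regression, k=4).fit_transform(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(X4, y, test_size=0.2, random_state=1)

rf = RandomForestRegressor(n_estimators=100, random_state=1).fit(X_tr, y_tr)
svr = make_pipeline(StandardScaler(), SVR()).fit(X_tr, y_tr)
for name, model in [("random forest", rf), ("SVR", svr)]:
    # Correlation of predictions with held-out "field" values,
    # analogous to the covariance comparison reported in the abstract.
    r = np.corrcoef(y_te, model.predict(X_te))[0, 1]
    print(f"{name}: correlation {r:.3f}")
```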

Keywords: Analogy, initial oil in place, Niger Delta, machine learning, material balance, random forest, support vector regressor, volumetric

Assessing the Predictive Capability of a Machine Learning Model (Published)

The purpose of this study was to evaluate the effectiveness of an integrated machine learning system implemented to help professionals predict how patients will respond to steroid treatment for glaucoma. The research employed a quantitative methodology, utilizing descriptive statistics, and the Taro Yamane formula was applied to determine a suitable sample size. Linear regression analysis was used to establish the correlation between the predictor, i.e., the novel predicting system, and the dependent variable, the effectiveness of forecasting a patient's reaction to steroid treatment. The analysis showed that implementing the novel prediction technique would have a notable impact on, and efficiency in, determining a person's status in pre-trabeculectomy evaluation. The p-value (0.000) is less than the predefined significance level (alpha) of 0.05 (0.000 < 0.05), indicating a significant finding, and the calculated t-value (33.196) exceeds the critical t-value (1.960). Consequently, the correlation coefficient (R) of 0.920 demonstrates a highly robust positive effect.
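The abstract's analysis (simple linear regression reporting R, t, and p against a 0.05 significance level) can be sketched as follows. The data here are synthetic stand-ins, not the study's survey responses, and the effect size is an arbitrary assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
predictor = rng.normal(size=200)                      # novel system's score
response = 0.9 * predictor + rng.normal(0, 0.4, 200)  # measured effectiveness

res = stats.linregress(predictor, response)
t = res.slope / res.stderr  # t-statistic for the slope
print(f"R = {res.rvalue:.3f}, t = {t:.2f}, p = {res.pvalue:.4f}")
# A p below 0.05 and |t| above the critical value (1.960 for large n)
# indicate a significant relationship, as reported in the abstract.
```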


Keywords: glaucoma treatment, machine learning, trabeculectomy

An Investigation of Translation of Text Language to Sign Language Using Machine Learning (Published)

Among the fastest-growing areas of study today is the translation of sign language, which is the most natural form of communication for those with hearing loss. Deaf individuals may be able to communicate with hearing people directly, without an intermediary, with the use of a hand gesture recognition device. The method was developed to facilitate the automatic translation of American Sign Language into text and sound. The suggested system uses a large data collection to interpret individual words and phrases in traditional American Sign Language, alleviating any fears the user may have about using a virtual camera. Deaf and mute persons must rely on sign language as their only method of communication, yet a large portion of the general public is illiterate in sign language. Therefore, those who sign have a more difficult time communicating with those who do not without the help of an interpreter. The proposed technique employs data collected by Fifth Dimension Technologies (5DT) gloves to interpret hand movements as spoken language. The data were classified into text words using a variety of machine learning techniques, including neural networks, decision tree classifiers, and k-nearest neighbours.
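A minimal sketch of classifying glove sensor readings into word labels with k-nearest neighbours, one of the techniques the abstract names. The five-sensor flex patterns and word set below are synthetic assumptions, not real 5DT glove recordings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
words = ["hello", "yes", "no"]
# Each word gets a characteristic 5-finger flex pattern plus sensor noise.
patterns = rng.uniform(0, 1, size=(3, 5))
X = np.vstack([patterns[i] + rng.normal(0, 0.05, (100, 5)) for i in range(3)])
y = np.repeat(words, 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=3)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"accuracy: {knn.score(X_te, y_te):.2f}")
```

The predicted word string could then be fed to a text-to-speech engine to produce the sound output the system describes.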

Citation: Alghamdi D.N.  (2022) An Investigation of Translation of Text Language to Sign Language Using Machine Learning, European Journal of Computer Science and Information Technology, Vol.10, No.5, pp.41-52

Keywords: Sign Language, Translator, classify, machine learning, text language, translation

A model for Real Estate Price Prediction using Multi-Level Stacking Ensemble Technique (Published)

Recent research and economic publications have shown the impact of real estate investment on the overall economy of Nigeria. It is therefore crucial to employ machine learning techniques to predict prices for real estate properties. Real estate price analysis and prediction can assist in establishing real estate policies and can help property stakeholders make informed decisions without bias or prejudice. Thus, it is imperative to develop a model that improves the accuracy of real estate price prediction. The goal of this research is to develop a model using a multi-level stacking ensemble technique to predict the price of real estate property. The dataset utilized for the study was collected from transactions by real estate firms in Port Harcourt and consists of 1053 rows with twelve features. The base models used include Random Forest (RF), Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), Decision Tree regression, and ElasticNet regression. Various combinations of the base models were stacked using StackingCVRegressor. The final model was developed by combining the best-performing stacked models and was evaluated using R-squared, Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Mean Square Error (MSE), and training time. The proposed model outperformed the individual base models with an R-squared of 0.985203, MSE of 0.013438, RMSE of 0.115923, MAE of 0.063411, and training time of 0.599398 seconds. The results show that multi-level stacking significantly improves the accuracy of a model, although it was observed that stacking improves accuracy at the cost of computational time. Stacking with a blending function significantly reduced the training time of the proposed model to 0.599398 seconds, compared to 107.054931 seconds with StackingCVRegressor.
Therefore, the multi-level stacking ensemble technique can be employed to improve the predictive accuracy of a prediction model. Future work can increase the size of the dataset and the number of features.

Citation: Nnadozie L, Matthias D., and  Bennett E.O. (2022)  A model for Real Estate Price Prediction using Multi-Level Stacking Ensemble Technique, European Journal of Computer Science and Information Technology, Vol.10, No.3, pp.33-46


Keywords: Extreme Gradient Boosting Algorithm(XGBoost), Multi-level Stacking Ensemble Technique, Random Forest, Real Estate, machine learning

The Importance of Machine Learning Techniques in Malware Detection: A Survey (Published)

In the current age, keeping pace with the evolution of malware is becoming immensely challenging. To keep up with the unconventional trends in malware development, it is imperative to develop intelligent malware detection methods that accurately identify malicious files in real-world data samples. The sheer complexity and volume of day-to-day malware attacks have given rise to the need for machine learning techniques for dynamic analysis of files and data. In this paper, types of malware are described to establish the scope of the problem, along with the traditional techniques used for malware detection. Dynamic and behaviour-based detection methods coupled with machine learning techniques are considered to be at the core of future research and progress. Unfortunately, there is still a plethora of problems and challenges to overcome, such as polymorphic malware, black-box machine learning models, reverse engineering, and theoretical and practical research gaps that limit progress and success. Finding solutions is crucial, as malware experts are also exploring and exploiting the concepts of machine learning for advanced malware development and better evasion techniques. Additionally, the gap between malware and machine learning experts must be bridged, as their combined expertise can secure better results. In conclusion, future research directions in the field of malware detection are presented.

Keywords: Behaviour-based Detection, Dynamic Malware Analysis, Pattern Recognition, Signature-based Detection, Static Malware Analysis, machine learning

Analysis and forecasting the outbreak of Covid-19 in Ethiopia using machine learning (Published)

Coronavirus outbreaks affect human beings as a whole and can cause serious illness and death. Machine learning (ML) models play a significant role in disease prediction, such as for the Covid-19 pandemic, offering high-performance forecasting that helps decision-makers understand future situations. ML algorithms have long been used in many application areas, including recognition and prioritization for certain treatments, and many ML forecasting models are available for such problems. In this study, we predict the pandemic outbreak using ML forecasting models. The models are designed to predict Covid-19 from the numbers of confirmed, recovered, and death cases in the available dataset. Support Vector Machine (SVM) and Polynomial Regression (PR) models were used to predict Covid-19's aggressive risk. For all three case types (confirmed, recovered, and death), the models forecast the situation in Ethiopia over the next 30 days. The experimental results showed that SVM performs better than PR in predicting the Covid-19 pandemic. According to this report, the pandemic in Ethiopia is projected to increase by half by mid-July 2020, after which Ethiopia will face shortages of hospitals and quarantine places.
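The two forecasting models the study compares can be sketched as below, fitted to a toy cumulative-case curve and extrapolated 30 days ahead. The growth curve, polynomial degree, and SVR hyperparameters are illustrative assumptions, not Ethiopian surveillance data or the study's tuned settings.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.svm import SVR

days = np.arange(100).reshape(-1, 1)
cases = 5 * np.exp(0.05 * days.ravel())   # toy cumulative epidemic curve

# Polynomial Regression: polynomial features feeding a linear model.
poly = make_pipeline(PolynomialFeatures(degree=3), LinearRegression()).fit(days, cases)
# Support Vector Machine regression on the same series.
svm = SVR(kernel="rbf", C=1000).fit(days, cases)

future = np.arange(100, 130).reshape(-1, 1)  # next 30 days
print("PR  forecast, day 129:", round(poly.predict(future)[-1]))
print("SVM forecast, day 129:", round(svm.predict(future)[-1]))
```

Comparing such forecasts against later observed counts is how the study judges which model tracks the pandemic better.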

Keywords: COVID-19, Forecasting, coronavirus, machine learning, polynomial regression, support vector machine

Machine Learning Based Approach to Simulate Drone Dynamics Related to Figure of Eight Maneuvering Pattern (Published)

Drones will become a commonly used technology across a major portion of society, and simulating a given drone's dynamics will be an important requirement. Drone dynamic simulation models exist for popular commercial drones, and many drone dynamic models have been proposed on the basis of Newtonian and fluid dynamics. However, these models involve many model parameters, and it is impracticable to evaluate all the parameters required to simulate a given drone. A simple mechanism for building a machine-learning-based drone dynamic model to simulate any given drone would address most of these issues. The figure-of-eight maneuvering pattern, or a derivative of it, is used in many activities in the aviation, maritime, and ground-vehicle domains. Hence, the proposed approach and the conducted experiments present the process of developing a machine-learning-based drone dynamics simulation for a figure-of-eight maneuvering pattern.
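One way to read the abstract's idea is as learning a one-step dynamics model from trajectory data: a regressor maps the recent state to the next position. The sketch below uses a toy Lissajous figure-of-eight path and a random forest; the path, state encoding, and model choice are assumptions, not the paper's recorded flight data or method.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

t = np.linspace(0, 4 * np.pi, 800)
x, y = np.sin(t), np.sin(t) * np.cos(t)   # figure-of-eight trajectory

# State = current and previous position (two points disambiguate the
# crossing at the centre of the figure-of-eight); target = next position.
state = np.column_stack([x[1:-1], y[1:-1], x[:-2], y[:-2]])
nxt = np.column_stack([x[2:], y[2:]])

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(state, nxt)
err = np.abs(model.predict(state) - nxt).max()
print(f"max one-step error on the training trajectory: {err:.4f}")
```

Rolling the learned one-step map forward from an initial state would then simulate the maneuver without evaluating any physical model parameters.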

Keywords: Drone, Simulation, machine learning, maneuvering
