Publications by Type: Journal Article

2020
Zoubeidi, Merouane, et al. 2020. “A new approach agent-based for distributing association rules by business to improve decision process in ERP systems”. International Journal of Information and Decision Sciences 12 (1).

Nowadays, distributed computing plays an important role in the data mining process. To make systems scalable, it is important to develop mechanisms that distribute the workload among several sites in a flexible way. Moreover, the acronym ERP refers to the systems and software packages used by organisations to manage day-to-day business activities. ERP systems are designed around a defined schema that usually has a common database. In this paper, we present a collaborative multi-agent based system for association rules mining from distributed databases. In our proposed approach, we combine a multi-agent system with association rules as a data mining technique to build a model that can execute association rules mining in a parallel and distributed way from the centralised ERP database. The autonomous agents are used to provide a generic and scalable platform. This helps business decision-makers make the right decisions with fast response times using the multi-agent system. The platform has been compared with the classic association rules algorithms and has proved to be more efficient and more scalable.
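As a toy illustration of the rule-mining side of such a platform (the distributed agent layer is the paper's contribution and is not sketched here), a level-wise Apriori pass over a few hypothetical transactions could look like this; the transaction data and support threshold are invented for the example:

```python
def apriori(transactions, min_support):
    # Level-wise frequent-itemset search (Apriori): an itemset can only be
    # frequent if all of its subsets are frequent, so candidates of size k
    # are built only from frequent itemsets of size k-1.
    def support(s):
        return sum(s <= t for t in transactions)

    items = {frozenset([i]) for t in transactions for i in t}
    level = {s for s in items if support(s) >= min_support}
    freq, k = {}, 1
    while level:
        for s in level:
            freq[s] = support(s)
        k += 1
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        level = {c for c in candidates if support(c) >= min_support}
    return freq
```

Frequent itemsets found this way are the raw material from which association rules are then scored (e.g. by confidence); in an ERP setting, such rules would be the kind of knowledge the business agents exchange.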

Abdelhadi, Adel, Leila-Hayet Mouss, and Ouahab Kadri. 2020. “Hybrid multi-agent and immune algorithm approach to hybrid flow shops scheduling with SDST”. Academic Journal of Manufacturing Engineering 18 (3). https://www.ajme.ro/PDF_AJME_2020_3/L15.pdf

The existing literature on process scheduling has either ignored setup times or assumed that setup times on all machines are independent of the task sequence. This work addresses hybrid flow shop scheduling problems in which setup times are sequence-dependent, referred to as HFS with SDST. This family of production systems is common in industries such as printed circuit board manufacturing, metallurgy, and vehicle and automobile making. Due to the increasing complexity of industrial sectors, simple planning systems have failed to produce realistic industrial schedules. Therefore, a hybrid multi-agent and immune algorithm can be used as an alternative approach to solve complex problems and produce an efficient industrial schedule in a timely manner. In this paper we propose a hybrid multi-agent and immune algorithm for scheduling HFS with SDST. The findings of this paper suggest that the proposed algorithm outperforms some existing ones, including PSO (particle swarm optimization), GA (genetic algorithm), LSA (local search algorithm) and NEHH (Nawaz, Enscore and Ham).

This work presents the prediction of the rate of penetration (ROP) in oil drilling based on the random forest algorithm, which belongs to the family of ensemble machine learning methods. The ROP parameter plays a very important role in oil drilling and has a great impact on drilling costs; its prediction allows drilling engineers to choose the best combination of input parameters for better progress in drilling operations. To address this problem, several works have been carried out with different machine learning modeling techniques: artificial neural networks, Bayesian networks, SVM, etc. The random forest algorithm chosen for our model outperforms the other machine learning techniques in speed and precision, according to the literature and to tests done with an open-source machine learning tool on historical oil drilling logs from the Hassi Terfa field in southern Algeria.
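For intuition about the ensemble idea that random forests build on (bootstrap resampling plus prediction averaging), here is a minimal bagged ensemble of depth-one regression trees in plain Python. The single-feature data stands in for drilling parameters and is entirely made up, and a real random forest additionally subsamples features and grows deeper trees:

```python
import random

def stump_fit(X, y):
    # Best single-feature threshold split minimising squared error.
    best = None
    for j in range(len(X[0])):
        for t in sorted(set(row[j] for row in X)):
            left = [y[i] for i in range(len(y)) if X[i][j] <= t]
            right = [y[i] for i in range(len(y)) if X[i][j] > t]
            if not left or not right:
                continue
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            err = (sum((v - ml) ** 2 for v in left)
                   + sum((v - mr) ** 2 for v in right))
            if best is None or err < best[0]:
                best = (err, j, t, ml, mr)
    if best is None:  # degenerate bootstrap sample: constant features
        m = sum(y) / len(y)
        return (0, float("inf"), m, m)
    return best[1:]

def forest_fit(X, y, n_trees=25, seed=0):
    # Bagging: each stump is trained on a bootstrap resample of the data.
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        forest.append(stump_fit([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def forest_predict(forest, x):
    # Average the individual stump predictions (ensemble average).
    preds = [ml if x[j] <= t else mr for j, t, ml, mr in forest]
    return sum(preds) / len(preds)
```

Averaging many weak, resampled learners is what gives the ensemble its robustness to noise in the drilling logs.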

Bellal, Salah-Eddine, et al. 2020. “User behaviour-based approach to define mobility devices needs of disabled person in Algeria: a questionnaire study”. Disability and Rehabilitation: Assistive Technology 17 (4): 453-461.

This article examines the suitability of existing mobility devices for the Algerian disabled population. It aims to develop a behaviour model of disabled Algerian persons through (1) development of a theoretical model based on a literature review and (2) improvement of this model using locally collected data from our questionnaire.

Bencherif, Fateh, and Leila-Hayet Mouss. 2020. “Complex network to enhance characterization analysis in modelling product development process”. African Journal of Science, Technology, Innovation and Development 21 (7): 797-811.

Nowadays, successful and innovative product development is highly correlated with a company's success and reason for existence. The development process is a major factor influencing the cost, timing and quality of product development. It requires additional attention to decisions made about programme, budget, technical and market risks. In this paper a product development process model is proposed in an innovation context and a strategy framework of design process and project management. The process modelling is based on complex network theory, to improve characterization analysis for product development process modelling. The concepts required for complex processes are established to build the product development mathematical model, and an overview of key definitions and advanced complex-network tools is provided. Finally, a case study of an Algerian electric generator company is carried out to prove the practicality of the proposed model.

Benaggoune, Khaled, et al. 2020. “Holonic agent-based approach for system-level remaining useful life estimation with stochastic dependence”. International Journal of Computer Integrated Manufacturing 33 (10).

The emergent behavior of complex systems is more complicated than the sum of the behaviors of their constituent parts. This behavior involves the propagation of faults between the parts and requires information about how the parts are related. Therefore, prognostics at the system level becomes a very difficult task. Conventional approaches focus on identifying faults and their probabilities of occurrence. In complex systems, this creates statistical limitations for the prognostic function, where a component's fault depends on the components connected to it in the system and on their states of degradation. In this paper, a new Holonic agent-based approach is proposed for system-level remaining useful life (S-RUL) estimation with different dependencies. As the proposed approach can capture fault/failure-mode propagation and interactions that occur in the system all the way up through the component level and eventually the system level, it can work as an automatic testing tool in reliability tasks. Through a numerical example, the implementation is done in the Java Agent Development Environment with and without consideration of stochastic dependence. Results show that the indirect effect of influencing components has a massive impact on the S-RUL, and that stochastic dependencies should not be ignored, especially in the early stages of system design.

Bouzenita, Mohammed, et al. 2020. “New fusion and selection approaches for estimating the remaining useful life using Gaussian process regression and induced ordered weighted averaging operators”. Quality and Reliability Engineering International 36 (6): 2146-2169.

In this paper, we propose new fusion and selection approaches to accurately predict the remaining useful life. The fusion scheme is built upon the combination of outcomes delivered by an ensemble of Gaussian process regression models. Each regressor is characterized by its own covariance function and initial hyperparameters. In this context, we adopt induced ordered weighted averaging as a fusion tool to achieve this combination. Two additional fusion techniques, based on the simple averaging and ordered weighted averaging operators, as well as a selection approach, are implemented. The differences between adjacent elements of the raw data are used for training instead of the original values. Experimental results conducted on lithium-ion battery data report a significant improvement in the obtained results. This work may provide some insights for the development of efficient intelligent fusion alternatives for further prognostic advances.
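For a sense of how an induced OWA operator combines ensemble outputs, here is a minimal sketch: the predictions are reordered by a separate inducing variable (for instance, each regressor's estimated confidence) before the positional weights are applied. All numbers below are hypothetical:

```python
def iowa(values, inducers, weights):
    # Induced OWA: reorder the arguments by decreasing inducing variable,
    # then apply the positional weights to the reordered values.
    order = sorted(range(len(values)), key=lambda i: inducers[i], reverse=True)
    return sum(w * values[i] for w, i in zip(weights, order))
```

Setting the inducing variable equal to the values themselves recovers the ordinary OWA operator, and uniform weights recover simple averaging — the two baseline fusion techniques the abstract mentions.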

Efficient data investigation for fast and accurate remaining useful life prediction of aircraft engines is a very important task for maintenance operations. In this context, the key issue is how an appropriate investigation can be conducted to extract important information from data-driven sequences in high-dimensional space in order to guarantee a reliable conclusion. In this paper, a new data-driven learning scheme based on an online sequential extreme learning machine algorithm is proposed for remaining useful life prediction. Firstly, a new feature mapping technique based on stacked autoencoders is proposed to enhance feature representations through accurate reconstruction. In addition, to address dynamic adaptation based on environmental feedback, a new dynamic forgetting function based on the temporal difference of recursive learning is introduced to enhance the dynamic tracking ability for newly arriving data. Moreover, a new updating selection strategy is developed to discard unwanted data sequences and to ensure the convergence of the training model parameters to their appropriate values. The proposed approach is validated on the C-MAPSS dataset, where experimental results confirm that it yields satisfactory accuracy and efficiency of the prediction model compared to other existing methods.

Zermane, Hanane, and Rached Kasmi. 2020. “Intelligent Industrial Process Control Based on Fuzzy Logic and Machine Learning”. International Journal of Fuzzy System Applications (IJFSA) 9 (1).

Manufacturing automation is a double-edged sword: on one hand, it increases the productivity of the production system, reduces costs, and improves reliability; on the other hand, it increases the complexity of the system. This has led to the need for efficient solutions such as artificial intelligence techniques. Data and experience are extracted from experts, who usually rely on common sense when they solve problems and use vague and ambiguous terms. A knowledge engineer, however, would have difficulty providing a computer with the same level of understanding. To resolve this situation, this article proposes fuzzy logic to represent expert knowledge expressed in fuzzy terms for supervising complex industrial processes as a first step. As a second step, adopting one of the powerful techniques of machine learning, the Support Vector Machine (SVM), the authors classify data to determine the state of the supervision system and learn how to supervise the process while preserving the habitual linguistic terms used by operators.

Rezki, Djamil, et al. 2020. “Rate of Penetration (ROP) Prediction in Oil Drilling Based on Ensemble Machine Learning”. Lecture Notes in Information Systems and Organisation.
2019
Haoues, Mohammed, Mohammed Dahane, and Nadia-Kinza Mouss. 2019. “Optimization of single outsourcer–single subcontractor outsourcing relationship under reliability and maintenance constraints”. Journal of Industrial Engineering International 15: 395–409.

In this paper, we focus on an outsourcing activities optimization problem in a single-period setting. In some situations, capacity planning or outsourcing is a one-time event and can be modeled as a single-period problem. The aim of this research is to balance the trade-off between two echelons of a supply chain consisting of a single outsourcer and a single subcontractor. Each party operates a failure-prone single machine that produces one product type to satisfy market requirements. The outsourcer’s manufacturing system is not able to satisfy the demand; in this case, outsourcing is allowed to recover the lack of capacity. We consider that the subcontractor can satisfy the demands of strategic clients and rent his machine to the outsourcer under a win–win partnership contract. We assume that the hazard failure rate depends on time and on the adopted production rate. When unforeseen failures occur, minimal repairs are implemented. Overhauls can be performed to reduce the degradation effects. Hence, we develop a mathematical model to define a profitability interval so that both parties of the supply chain can be considered winners. We seek to determine the contract parameters that suit both parties (duration, start and end dates, and the production and outsourcing rates). Then, we develop an exact algorithm to solve the single-period optimization problem, which offers a better execution time across a series of test problems. Finally, we conduct a sensitivity analysis based on the outsourcing parameters (cost, periodicities, etc.) to analyze their effects on the partial costs and individual profit of each party, as well as on the total profit generated by the system.

Haoues, Mohammed, Mohammed Dahane, and Nadia-Kenza Mouss. 2019. “Outsourcing optimization in two-echelon supply chain network under integrated production-maintenance constraints”. Journal of Intelligent Manufacturing 30: 701–725.

In this paper, we study a two-echelon supply chain network consisting of multiple outsourcers and multiple subcontractors. Each one is composed of a failure-prone production unit that produces a single product to fulfil market demands at variable production rates. Sometimes the manufacturing systems are not able to satisfy demand; in this case, the outsourcing option is adopted to improve the limited in-house production capacity. Outsourcing is justified not only by the lack of production capacity, but is also considered for cost-minimization reasons. In the considered problem, we assume that the failure rate depends on time and on the production rate. Preventive maintenance activities can be conducted to mitigate the deterioration effects, and minimal repairs are performed when unplanned failures occur. We consider that the production cost depends on the machine utilization rate. The aim of this research is to propose a joint policy based on a mixed integer programming formulation to balance the trade-off between the two echelons of the supply chain. We seek to assist the outsourcers in determining the integrated in-house/outsourcing and maintenance plans, and the subcontractors in determining the integrated production-maintenance plans, so that the benefit of the supply chain is maximized over a finite planning horizon. We develop an improved optimization procedure based on genetic algorithms, and we conduct computational experiments and discuss managerial insights for the developed framework.

Ourlis, Lazhar, and Djamel Bellala. 2019. “SIMD Implementation of the Aho-Corasick Algorithm using Intel AVX2”. Scalable Computing: Practice and Experience 20 (3).

The Aho-Corasick (AC) algorithm is a multiple-pattern exact string-matching algorithm proposed by Alfred V. Aho and Margaret J. Corasick. It locates all occurrences of a finite set of patterns within an input text simultaneously. The AC algorithm is at the heart of many applications, including digital forensics (digital signatures demonstrating the authenticity of a digital message or document), full-text search (utility programs such as grep, awk and sed on Unix systems), information retrieval (biological sequence analysis and gene identification), intrusion detection systems (IDS) in computer networks such as SNORT, web filtering, spam filters, and antimalware solutions (virus scanners). In this paper we present a vectorized version of the AC algorithm designed with packed instructions based on Intel's streaming SIMD (Single Instruction Multiple Data) extensions, AVX2 (Advanced Vector Extensions 2.0). This paper shows that the vectorized AC algorithm significantly reduces the matching time compared to the implementation of the original AC algorithm.
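For reference, the scalar automaton that the paper vectorizes can be sketched in pure Python: build the goto/failure/output structures once, then scan the text left to right in a single pass:

```python
from collections import deque

def build_automaton(patterns):
    # State 0 is the root; goto maps state -> {char: next state},
    # fail holds failure links, out holds the patterns ending at each state.
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    # Breadth-first computation of failure links; root children fail to root.
    queue = deque(goto[0].values())
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]   # inherit matches from the fail state
    return goto, fail, out

def search(text, automaton):
    # Single left-to-right pass; reports (start_index, pattern) pairs.
    goto, fail, out = automaton
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for pat in out[s]:
            hits.append((i - len(pat) + 1, pat))
    return hits
```

The paper's contribution is accelerating this matching phase with AVX2 packed instructions rather than changing the automaton itself.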

Zerrouki, Hamza, and Hacene Smadi. 2019. “Reliability and safety analysis using fault tree and Bayesian networks”. International Journal of Computer Aided Engineering and Technology 11 (1).

Fault tree analysis (FTA) is one of the most prominent techniques used in risk analysis; the method aims to identify how component failures lead to system failure using logical gates (i.e., AND and OR gates). However, FTA has some limitations due to its static structure. Bayesian networks (BNs) have become a popular technique in reliability analysis; a BN represents a set of random variables and their conditional dependencies. This paper discusses the advantages of Bayesian networks over fault trees in reliability and safety analysis. It also shows the ability of BNs to update probabilities and to represent multi-state variables, dependent failures, and common cause failures. An example taken from the literature is used to illustrate the application and compare the results of the fault tree and Bayesian network techniques.
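The contrast can be made concrete with a two-component example (the failure probabilities are hypothetical): the fault tree gives the forward probability of system failure through its gates, while the Bayesian reading also supports diagnostic updating, i.e. revising a component's failure probability once system failure is observed:

```python
# Two-component system with independent failures (probabilities hypothetical).
p1, p2 = 0.10, 0.20              # component failure probabilities

# Fault-tree view: forward probability of the top event.
p_or = 1 - (1 - p1) * (1 - p2)   # OR gate: either component failing fails the system
p_and = p1 * p2                  # AND gate: both components must fail

# Bayesian-network view: diagnostic updating, which a static fault tree
# cannot express. For the OR gate, C1 failing guarantees system failure,
# so P(C1 fails, system fails) = P(C1 fails).
post_c1 = p1 / p_or              # P(C1 failed | system failed)
```

The prior 0.10 for C1 rises to roughly 0.36 once the system is known to have failed; this kind of evidence propagation is one of the BN advantages the paper discusses.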

Soltani, Mohyiddine, Hichem Aouag, and Mohamed-Djamel Mouss. 2019. “An integrated framework using VSM, AHP and TOPSIS for simplifying the sustainability improvement process in a complex manufacturing process”. Journal of Engineering, Design and Technology 18 (1).

Purpose

The purpose of this paper is to propose an integrated approach for assessing the sustainability of production and simplifying the improvement tasks in complex manufacturing processes.

Design/methodology/approach

The proposed approach investigates the integration of value stream mapping (VSM), the analytic hierarchy process (AHP) and the technique for order preference by similarity to ideal solution (TOPSIS). VSM is used as a basic structure for assessing and improving the sustainability of the manufacturing process. AHP is used for weighting the sustainability indicators and TOPSIS for prioritizing the operations of the manufacturing process on the improvement side.

Findings

The results of this study help managerial staff organize the improvement phase in complex manufacturing processes by computing the importance degree of each indicator and determining the operations that most influence production.

Research limitations/implications

The major limitations of this paper are that only one case study was considered and that only a moderate set of sustainability indicators was treated.

Originality/value

The novelty of this research lies in the development of an extended VSM for complex manufacturing processes. In addition, the proposed approach contributes a new improvement strategy by integrating multi-criteria decision approaches with the VSM method to resolve the complexity of the improvement process from a sustainability viewpoint.
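To illustrate the TOPSIS step on its own, assuming AHP has already produced the criterion weights, a minimal closeness-coefficient ranking of alternatives could look like this; the decision matrix, weights and benefit/cost flags are all invented for the example:

```python
import math

def topsis(matrix, weights, benefit):
    # matrix: one row per alternative (e.g. a process operation),
    # one column per criterion (e.g. a sustainability indicator).
    ncol = len(weights)
    # Vector-normalise each column, then apply the (e.g. AHP-derived) weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncol)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncol)] for row in matrix]
    # Ideal and anti-ideal points per criterion (benefit vs. cost criteria).
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    # Closeness coefficient: nearer to ideal, farther from anti-ideal is better.
    return [math.dist(row, anti) / (math.dist(row, ideal) + math.dist(row, anti))
            for row in v]
```

Operations with higher closeness coefficients would then be prioritized in the improvement phase.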

Zerari, Naima, et al. 2019. “Bidirectional deep architecture for Arabic speech recognition”. Open Computer Science 9 (1): 92-102.

Nowadays, real-life constraints necessitate controlling modern machines through human intervention by means of sensorial organs. The voice is one of the human senses that can control/monitor modern interfaces. In this context, Automatic Speech Recognition is principally used to convert natural voice into computer text as well as to perform an action based on the instructions given by the human. In this paper, we propose a general framework for Arabic speech recognition that uses Long Short-Term Memory (LSTM) and a Neural Network (Multi-Layer Perceptron: MLP) classifier to cope with the non-uniform sequence length of the speech utterances issued from both feature extraction techniques: (1) Mel Frequency Cepstral Coefficients (MFCC, static and dynamic features) and (2) Filter Bank (FB) coefficients. The neural architecture can recognize isolated Arabic speech via a classification technique. The proposed system involves, first, extracting pertinent features from the natural speech signal using MFCC (static and dynamic features) and FB. Next, the extracted features are padded in order to deal with the non-uniformity of the sequence lengths. Then, a deep architecture represented by a recurrent LSTM or GRU (Gated Recurrent Unit) architecture is used to encode the sequences of MFCC/FB features as a fixed-size vector that is introduced to a Multi-Layer Perceptron network (MLP) to perform the classification (recognition). The proposed system is assessed using two different databases: the first one concerns spoken digit recognition, where a comparison with other related works in the literature is performed, whereas the second one contains spoken TV commands. The obtained results show the superiority of the proposed approach.
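The padding step described above — equalizing the lengths of the variable-length MFCC/FB feature sequences before the recurrent encoder — amounts to something like the following (post-padding with zeros is one common choice; the paper's exact scheme may differ):

```python
def pad_sequences(seqs, value=0.0):
    # Post-pad every feature sequence to the length of the longest one so
    # the batch can be fed to the recurrent encoder as a fixed-size array.
    maxlen = max(len(s) for s in seqs)
    return [list(s) + [value] * (maxlen - len(s)) for s in seqs]
```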

Bouzgou, Hassen, and Christian A. Gueymard. 2019. “Fast short-term global solar irradiance forecasting with wrapper mutual information”. Renewable Energy 133: 1055-1065.

Accurate solar irradiance forecasts are now key to successfully integrating the (variable) production from large solar energy systems into the electricity grid. This paper describes a wrapper forecasting methodology for irradiance time series that combines mutual information and an Extreme Learning Machine (ELM), with application to short forecast horizons between 5-min and 3-h ahead. The method is referred to as the Wrapper Mutual Information Methodology (WMIM). To evaluate the proposed approach, its performance is compared to that of three dimensionality reduction scenarios: full space (latest 50 variables), partial space (latest 5 variables), and the usual Principal Component Analysis (PCA). Based on measured irradiance data from two arid sites (Madina and Tamanrasset), the present results reveal that the reduction of the historical input space increases the forecasting performance of global solar radiation. In the case of Madina and forecast horizons from 5-min to 30-min ahead, the WMIM forecasts have a better coefficient of determination (R2 between 0.927 and 0.967) than those using the next best performing strategy, PCA (R2 between 0.921 and 0.959). The Mean Absolute Percentage Error (MAPE) is also better for WMIM [7.4–10.77] than for PCA [8.4–11.55]. In the case of Tamanrasset and forecast horizons from 1-h to 3-h ahead, the WMIM forecasts have an R2 between 0.883 and 0.957, slightly better than the next best performing strategy (PCA) (R2 between 0.873 and 0.910). The Normalized Mean Squared Error (NMSE) is similarly better for WMIM [0.048–0.128] than for PCA [0.105–0.130]. It is also found that the ELM technique is considerably more computationally efficient than the more conventional Multi-Layer Perceptron (MLP). It is concluded that the proposed mutual information-based variable selection method has the potential to outperform various other proposed techniques in terms of prediction performance.

Zemouri, Nahed, Hassen Bouzgou, and Christian A. Gueymard. 2019. “Multimodel ensemble approach for hourly global solar irradiation forecasting”. The European Physical Journal Plus 134.

This contribution proposes a novel solar time series forecasting approach based on multimodel statistical ensembles to predict global horizontal irradiance (GHI) over short-term horizons (up to 1 hour ahead). The goal of the proposed methodology is to exploit the diversity of a set of dissimilar predictors with the purpose of increasing the accuracy of the forecasting process. The performance of a specific multimodel ensemble forecast showing an improved forecast skill is demonstrated and compared to a variety of individual single models. The proposed system can be applied in two distinct ways. The first one incorporates the forecasts acquired from the different forecasting models constituting the ensemble via a linear combination (combination-based approach). The other one is a novel methodology that delivers as output the forecast provided by the specific model (involved in the ensemble) that delivers the maximum precision in the zone of the variable space connected with the considered GHI time series (selection-based approach). This forecasting model is issued from an appropriate division of the variable space. The efficiency of the proposed methodology has been evaluated using high-quality measurements carried out at 1-min intervals at four radiometric sites representing widely different radiative climates (arid, temperate, tropical, and high albedo). The obtained results emphasize that, at all sites, the proposed multimodel ensemble is able to increase the accuracy of the forecasting process using the different combination approaches, with a significant performance improvement when using the classification strategy.

Traditional detection methods have the disadvantages of radiation exposure, high cost, and a shortage of medical resources, which restrict the popularity of early screening for breast cancer. An inexpensive, accessible, and friendly way to detect is urgently needed. Infrared thermography, an emerging means of breast cancer detection, is extremely sensitive to tissue abnormalities caused by inflammation and vascular proliferation. In this work, combining temperature and texture features, we designed a breast cancer detection system based on a smartphone with an infrared camera, achieving an accuracy of 99.21% with the k-Nearest Neighbor classifier. We compared the diagnostic results at the low resolution of the phone camera with those at the high resolution of a conventional infrared camera. The accuracy and sensitivity decreased slightly, but both remained above 98%. The proposed breast cancer detection system not only has excellent performance but also dramatically reduces the detection cost, and its prospects are promising.
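The k-Nearest Neighbor decision used by the system can be sketched as follows; the feature vectors are hypothetical stand-ins for the temperature and texture features extracted from the thermograms:

```python
import math
from collections import Counter

def knn_predict(train, labels, x, k=3):
    # Majority vote among the k training points nearest to x (Euclidean).
    nearest = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]
```

Because kNN stores the training set and defers all computation to query time, it suits a small on-device dataset of labeled thermogram features.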
