Publications by Type: Journal Article

2023
Benbouzid, Mohamed, and Tarek Berghout. 2023. “Quo Vadis Machine Learning-Based Systems Condition Prognosis?—A Perspective”. Electronics 12 (3): 527. Publisher's Version Abstract

Data-driven prognostics and health management (PHM) is key to increasing the productivity of industrial processes through accurate maintenance planning. The increasing complexity of the systems themselves, in addition to cyber-physical connectivity, has brought too many challenges for the discipline. As a result, data complexity challenges have been pushed back to include more decentralized learning challenges. In this context, this perspective paper describes these challenges and provides future directions based on a relevant state-of-the-art review.

Lithium-ion (Li-ion) batteries play an important role in providing necessary energy when acting as a main or backup source of electricity. Indeed, the unavailability of battery aging discharge data in most real-world applications makes the State of Health (SoH) assessment very challenging. Accelerated aging is therefore adopted to emulate the degradation process and obtain an SoH estimate. However, accelerated aging generates limited deterioration patterns suffering from a higher level of complexity due to the non-linearity and non-stationarity imposed by harsh conditions. In this context, this paper aims to provide a predictive model capable of handling incomplete data, with one solution for each of the two problems of pattern complexity and missing patterns. First, to overcome the lack of patterns, a robust collaborative feature extractor (RCFE) is designed through collaboration between a set of improved restricted Boltzmann machines (I-RBMs), allowing learning knowledge to be shared among different locally trained I-RBMs to create a more generalized global extraction model. Second, a set of RCFEs is then evolved through a neural network with an augmented hidden layer (NAHL) to enhance predictive ability by further exploring representation learning and overcoming pattern complexity issues. The designed RCFE-NAHL is trained to predict SoH under constant current (CC) discharge by employing multiple characteristics recorded during the constant voltage (CV) charging process as health indicators. The performance of the proposed SoH prediction approach is evaluated on a set of battery life cycles from the well-known NASA database. The achieved results clearly highlight the accuracy and robustness of the proposed learning model.
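The SoH quantity at the center of this abstract has a simple definition that can be sketched in code. The snippet below is an illustrative sketch, not the paper's model: it computes SoH as a capacity ratio and derives one hypothetical CV-phase health indicator (the time for the charge current to decay to a cutoff); all function names and numbers are invented.

```python
# Illustrative sketch (not the paper's model): SoH as a capacity ratio, plus
# one hypothetical health indicator taken from the constant-voltage phase.

def soh(capacity_now_ah, rated_capacity_ah):
    """State of Health in percent: remaining capacity over rated capacity."""
    return 100.0 * capacity_now_ah / rated_capacity_ah

def cv_charge_time(current_trace, cutoff_a=0.05, dt_s=1.0):
    """Length of the CV phase: time until the charge current decays to cutoff."""
    for i, amps in enumerate(current_trace):
        if amps <= cutoff_a:
            return i * dt_s
    return len(current_trace) * dt_s

# An aged cell holding 1.6 Ah against a 2.0 Ah rating is at 80% SoH.
print(soh(1.6, 2.0))                                  # 80.0
# Exponentially decaying CV current sampled once per second.
print(cv_charge_time([1.0 * 0.7 ** t for t in range(30)]))
```

A learned model such as the paper's RCFE-NAHL would map indicators like the second one to the SoH value defined by the first.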

Advanced technologies, such as the Internet of Things (IoT) and Artificial Intelligence (AI), underpin many of the innovations in Industry 4.0. However, the interconnectivity and open nature of such systems in smart industrial facilities can also be targeted and abused by malicious actors, which reinforces the importance of cyber security. In this paper, we present a secure, decentralized, and Differentially Private (DP) Federated Learning (FL)-based IDS (2DF-IDS) for securing smart industrial facilities. The proposed 2DF-IDS comprises three building blocks, namely: a key exchange protocol (for securing the weights communicated among all peers in the system), a differentially private gradient exchange scheme (to achieve improved privacy for the FL approach), and a decentralized FL approach (which mitigates the single point of failure/attack risk associated with the aggregation server in the conventional FL approach). We evaluate the proposed system through detailed experiments using a real-world IoT/IIoT dataset, and the results show that 2DF-IDS can identify different types of cyber attacks in an Industrial IoT system with high performance. For instance, the proposed system achieves performance comparable to the centralized learning approach (94.37% for both) and outperforms the FL-based approach (93.91%) in terms of accuracy. The proposed system is also shown to improve overall performance by 12%, 13%, and 9% in terms of F1-score, recall, and precision, respectively, under strict privacy settings when compared to other competing FL-based IDS solutions.
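Two of the numerical building blocks named above, the differentially private gradient exchange and serverless aggregation, can be sketched roughly as follows. This is a minimal illustration, not the paper's 2DF-IDS implementation: the key exchange protocol is omitted and all parameter values are invented.

```python
import math
import random

# Minimal illustration (not the 2DF-IDS implementation): differentially
# private gradient release via clipping plus Gaussian noise, and serverless
# averaging of the weight vectors a peer receives from its neighbours.

def dp_release(grad, clip_norm=1.0, sigma=0.5, rng=None):
    rng = rng or random.Random(0)
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    # Clip the gradient to bound sensitivity, then add calibrated noise.
    return [g * scale + rng.gauss(0.0, sigma * clip_norm) for g in grad]

def peer_average(weight_vectors):
    """Each peer averages what it received -- no central aggregation server."""
    n = len(weight_vectors)
    return [sum(ws) / n for ws in zip(*weight_vectors)]

noisy = [dp_release([3.0, 4.0]), dp_release([1.0, 0.0])]
print(peer_average(noisy))
```

In a fully decentralized topology, every peer runs `peer_average` on its neighbours' updates, which is what removes the single point of failure the abstract mentions.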

Benrabah, Mohamed-Elamine, Ouahab Kadri, and Nadia-Kenza Mouss. 2023. “Faulty Detection System Based on SPC and Machine Learning Techniques”. Revue de l’Intelligence Artificielle: 969-977. Publisher's Version Abstract

Starting from the worrying observation that companies have difficulty controlling anomalies in their manufacturing processes, and in order to gain better control over them, we carried out a case study on practical data from the Fertial Complex to analyze the main parameters of the ammonia neutralization by nitric acid process. This article proposes a precise diagnosis of this process to detect dysfunction problems affecting the final product. We start with a general diagnosis of the process using the SPC method; this approach is considered an excellent way to monitor and improve product quality, and it provided very useful observations that allowed us to identify the parameters suffering from quality-affecting problems. After discovering the parameters incapable of producing the quality required by the standards, we apply two machine learning techniques suited to the type of data of these parameters in order to detect anomalies. The first technique, the kernel Connectivity-based Outlier Factor (COF) algorithm, records for each object its degree of being an outlier; the second, Isolation Forest, builds a forest of trees to isolate anomalies efficiently. The results obtained were compared in order to determine the best algorithm for monitoring and detecting problems in these parameters. We find that the COF method is more efficient than Isolation Forest, which leads us to rely on this technique for this kind of process in order to avoid passing poor quality on to the customer in the future.
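The two detectors compared above can be approximated with off-the-shelf tools. The sketch below assumes scikit-learn is available; since scikit-learn has no COF implementation, the related Local Outlier Factor stands in for it (COF itself is available in the PyOD library). The sensor readings are invented.

```python
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

# Sketch of the paper's comparison on invented readings. scikit-learn ships
# Isolation Forest directly; it has no COF, so the related Local Outlier
# Factor stands in (COF itself is available in the PyOD library).
readings = [[10.1], [10.3], [9.9], [10.2], [10.0], [10.1], [25.0]]  # one spike

iso = IsolationForest(contamination=0.15, random_state=0).fit(readings)
print(iso.predict(readings))       # -1 flags outliers, 1 flags inliers

lof = LocalOutlierFactor(n_neighbors=3, contamination=0.15)
print(lof.fit_predict(readings))
```

Both methods should flag the 25.0 spike; comparing how each scores borderline points is the kind of evaluation the study performs on real process parameters.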

Mehannaoui, Raouf, Kinza-Nadia Mouss, and Karima Aksa. 2023. “IoT-based food traceability system: Architecture, technologies, applications, and future trends”. Food Control 145. Publisher's Version Abstract

An effective Food Traceability System (FTS) in a Food Supply Chain (FSC) should adequately provide all necessary information to the consumer(s), meet the requirements of the relevant agencies, and improve food safety as well as consumer confidence. New information and communication technologies are rapidly advancing, especially after the emergence of the Internet of Things (IoT). Consequently, new food traceability systems have become mainly based on IoT. Many studies have been conducted on food traceability. They mainly focused on practical implementation and theoretical concepts. Accordingly, various definitions, technologies, and principles have been proposed. The “traceability” concept has been defined in several ways, and each new definition has tried to generalize its predecessors. Nevertheless, no standard definition has been reached. Furthermore, the architecture of IoT-based food traceability systems has not yet been standardized. Similarly, the technologies used in this field have not yet been well classified. This article presents an analysis of the existing definitions of food traceability, and thus proposes a new one that aims to be simpler, more general, and more encompassing than the previous ones. We also propose, through this article, a new architecture for IoT-based food traceability systems as well as a new classification of the technologies used in this context. We also discuss the applications of the different technologies and future trends in the field of IoT-based food traceability systems. Mainly, an FTS can make use of three types of technologies: Identification and Monitoring Technologies (IMT), Communication Technologies (CT), and Data Management Technologies (DMT). Improving a food traceability system requires the use of the best new technologies. There is a variety of promising technologies today to enhance FTS, such as fifth-generation (5G) mobile communication systems and distributed ledger technology (DLT).

The growth of manufacturing industries and the highly competitive environment have forced manufacturing organizations to develop advanced improvement strategies and enhance their sustainability performance. The integration of sustainable manufacturing in industrial operations leads to enhanced process performance through the reduction of waste, cost, and environmental impacts, while satisfying ergonomic conditions. For this reason, various firms have adopted sustainable manufacturing concepts to enhance their performance and hold a prestigious competitive position. The purpose of this research is to develop an integrated Pythagorean Fuzzy MCDM model to enhance the application process of the conventional Lean Manufacturing (LM) approach. Firstly, an extended Value Stream Mapping is proposed to assess the sustainability of the manufacturing process and identify the causes of waste from a sustainability viewpoint. Secondly, Pythagorean Fuzzy Decision-Making Trial And Evaluation Laboratory (PF-DEMATEL) is employed to analyze the interrelationships among the identified causes. Thirdly, Pythagorean Fuzzy Technique for Order Preference by Similarity to Ideal Solution (PF-TOPSIS) is introduced to prioritize a set of solutions in order to overcome the investigated causes and improve the durability of the manufacturing operations. Finally, a sensitivity analysis is conducted to assess the effectiveness of the obtained results. The proposed method has several attractive features. It can address the drawbacks of the conventional LM approach and enhance its analysis and improvement tasks. Moreover, the proposed approach offers an advanced application process for Lean Manufacturing in a sustainability context. Additionally, the suggested strategy helps leaders assess the current state of the manufacturing processes and select the appropriate solutions for successful sustainability implementation. The validity of the proposed approach was investigated in a real case study. The results confirm its effectiveness and indicate that using MCDM approaches in the LM application process offers a consistent and flexible approach for sustainable manufacturing implementation.

Aouag, Hichem, and Mohyiddine Soltani. 2023. “Improvement of Lean Manufacturing approach based on MCDM techniques for sustainable manufacturing”. International Journal of Manufacturing Research 18 (1). Publisher's Version Abstract

Over the past few decades, Lean Manufacturing (LM) has been the pinnacle of strategies applied for cost and waste reduction. However, as the search for competitive advantage and production growth continues, there is a growing consciousness towards environmental preservation. With this consideration in mind, this research investigates and applies Value Stream Mapping (VSM) techniques to aid in reducing the environmental impacts of manufacturing companies. The research is based on empirical observation within the Chassis weld plant of Company X. The observation focuses on the weld operations and utilizes the cross member line of Auxiliary Cross as a point of study. Data is collected using various measuring instruments to capture the emissions emitted by the weld and service equipment. The data is thereafter visualised via an Environmental Value Stream Map (EVSM) using a 7-step method. It was found that the total lead-time to build an Auxiliary Cross equates to 16.70 minutes, and that emissions are generated during this process. It was additionally found that the UPR x LWR stage of the process exhibited both the highest cycle time and the highest carbon emissions, providing a starting point for investigating emission reduction activity. The EVSM aids in the development of a method that allows quick and comprehensive analysis of energy and material flows. The results of this research are important to practitioners and academics as they provide an extension and further capability of Lean Manufacturing tools. Additionally, the EVSM provides a gateway to realising environmental benefits and sustainable manufacturing through Lean Manufacturing.

Condition monitoring (CM) of industrial processes is essential for reducing downtime and increasing productivity through accurate Condition-Based Maintenance (CBM) scheduling. Indeed, advanced intelligent learning systems for Fault Diagnosis (FD) make it possible to effectively isolate and identify the origins of faults. Proven smart industrial infrastructure technology enables FD to be a fully decentralized distributed computing task. However, such distribution among different regions/institutions, often subject to so-called data islanding, is constrained by privacy concerns, security risks, and industry competition arising from legal regulations or conflicts of interest. Therefore, Federated Learning (FL) is considered an efficient way to keep the data of multiple participants separate while collaboratively training an intelligent and reliable FD model. As no comprehensive study has been introduced on this subject to date, as far as we know, such a review-based study is urgently needed. Within this scope, our work is devoted to reviewing recent advances in FL applications for process diagnostics, with special attention given to FD methods, challenges, and future prospects.

Machine learning prognosis for condition monitoring of safety-critical systems, such as aircraft engines, continually faces challenges of data unavailability, complexity, and drift. Consequently, this paper overcomes these challenges by introducing adaptive deep transfer learning methodologies, strengthened with robust feature engineering. Initially, data engineering encompassing: (i) principal component analysis (PCA) dimensionality reduction; (ii) feature selection using correlation analysis; (iii) denoising with empirical Bayesian Cauchy prior wavelets; and (iv) feature scaling is used to obtain the required learning representations. Next, an adaptive deep learning model, namely ProgNet, is trained on a source domain with sufficient degradation trajectories generated from PrognosEase, a run-to-fail data generator for health deterioration analysis. Then, ProgNet is transferred to the target domain of obtained degradation features for fine-tuning. The primary goal is to achieve a higher-level generalization while reducing algorithmic complexity, making experiments reproducible on available commercial computers with quad-core microprocessors. ProgNet is tested on the popular New Commercial Modular Aero-Propulsion System Simulation (N-CMAPSS) dataset describing real flight scenarios. To the best of our knowledge, this is the first time that all N-CMAPSS subsets have been fully screened in such an experiment. ProgNet evaluations with numerous metrics, including the well-known CMAPSS scoring function, demonstrate promising performance levels, reaching 234.61 for the entire test set. This is approximately four times better than the results obtained with the compared conventional deep learning models.
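Steps (i), (ii) and (iv) of the feature-engineering stage can be sketched with scikit-learn; the wavelet denoising step (iii) and ProgNet itself are beyond a short snippet. Random data stands in for the N-CMAPSS degradation features, and the correlation-based selection shown is one plausible reading of step (ii), not the paper's exact procedure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Sketch of steps (i), (ii) and (iv) above on random stand-in data; the
# Bayesian wavelet denoising (iii) and ProgNet itself are omitted.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                        # 200 cycles, 12 sensors
y = X[:, 0] * 0.9 + rng.normal(scale=0.1, size=200)   # mock health target

# Step (ii): keep the six features most correlated with the target.
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
X_sel = X[:, np.argsort(corr)[-6:]]

# Steps (iv) then (i): scale, then reduce dimensionality with PCA.
features = make_pipeline(StandardScaler(), PCA(n_components=3)).fit_transform(X_sel)
print(features.shape)                                  # (200, 3)
```

The resulting low-dimensional representation is the kind of input a transferred prognosis network would be fine-tuned on.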

2022
Aksa, Karima, and Mohieddine Harrag. 2022. “Surveillance Des Zones Critiques Et Des Accès Non Autorisés En Utilisant La Technologie Rfid”. khazzartech الاقتصاد الصناعي 12 (1): 702-717. Publisher's Version Abstract

Surveillance is the function of observing human or environmental activities in order to supervise, control, or even react to a particular situation; this is known as supervision or monitoring. Radio-frequency identification (RFID) technology is one of the technologies used to retrieve data remotely, store it, and even process it. It is a current technology and one of the Industry 4.0 technologies, integrated into many areas of daily life, notably surveillance and access control. The objective of this article is to show how to protect and monitor, in real time, critical industrial zones against any unauthorized access by any person (employees, visitors, etc.) using RFID technology, through simulation examples carried out with a simulator dedicated to sensor networks.

In this study, we investigate a production planning problem in a hybrid manufacturing-remanufacturing production system. The objective is to determine the best mix between the manufacturing of new products and the remanufacturing of recovered products, based on economic and environmental considerations. This consists of determining the best manufacturing and remanufacturing plans that minimise the total economic cost (start-up and production costs of new and remanufactured products, storage costs of new and returned products, and disposal costs) and the carbon emissions (from new, remanufactured, and disposed products). The hybrid system consists of a set of machines used to produce new products and remanufactured products of different grades (qualities). We assume that remanufacturing is more environmentally efficient because it reduces the disposal of used products. A multi-objective mathematical model is developed, and an approach based on the non-dominated sorting genetic algorithm (NSGA-II) is proposed. Numerical experiments are presented to study the impact of the carbon emissions generated by new, remanufactured, and disposed products over a production horizon of several periods.
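The core mechanism of NSGA-II mentioned above, non-dominated sorting over the two objectives (cost, emissions), can be sketched as follows. This illustrates Pareto dominance only; plan encodings, crossover and mutation are omitted, and the plan values are invented.

```python
# Sketch of the heart of NSGA-II: extracting the first non-dominated front
# of (total cost, carbon emissions) pairs, both to be minimised.

def dominates(a, b):
    """a dominates b if it is no worse on every objective and better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (cost, emissions) of four candidate manufacturing/remanufacturing plans.
plans = [(100, 9.0), (120, 5.0), (110, 7.0), (130, 8.0)]
print(first_front(plans))   # (130, 8.0) is dominated by (110, 7.0)
```

NSGA-II repeatedly peels off such fronts to rank a population, so the trade-off curve between economic cost and emissions emerges directly from the sorting.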

Aouag, Hichem, and Mohyeddine Soltani. 2022. “Benchmarking framework for sustainable manufacturing based on MCDM techniques”. Benchmarking: An International Journal 29 (1). Publisher's Version Abstract

Purpose

The purpose of this paper is to develop a model for sustainable manufacturing by adopting a combined approach using AHP, fuzzy TOPSIS and fuzzy EDAS methods. The proposed model aims to identify and prioritize the sustainable factors and technical requirements that help in improving the sustainability of manufacturing processes.

Design/methodology/approach

The proposed approach integrates AHP, Fuzzy EDAS and Fuzzy TOPSIS. The AHP method is used to generate the weights of the sustainable factors. Fuzzy EDAS and Fuzzy TOPSIS are applied to rank and determine the application priority of a set of improvement approaches. The rankings obtained from each MCDM approach are assessed by computing Spearman's correlation coefficient.

Findings

The results reveal that the proposed model is efficient in prioritizing sustainable factors and technical requirements. In addition, the results of this study indicate the high efficiency of AHP, Fuzzy EDAS and Fuzzy TOPSIS in decision making. Besides, the results indicate that the model provides a usable methodology for managerial staff to select the desirable sustainable factors and technical requirements for sustainable manufacturing.

Research limitations/implications

The main limitation of this paper is that the proposed approach investigates only a moderate number of factors and technical requirements.

Originality/value

This paper investigates an integrated MCDM approach for prioritizing sustainable factors and technical requirements. In addition, the presented work points out that the AHP, Fuzzy EDAS and Fuzzy TOPSIS approach can handle several conflicting attributes in a sustainable manufacturing context.
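Two calculations this methodology leans on, deriving AHP weights from a pairwise-comparison matrix and checking rank agreement with Spearman's coefficient, can be sketched as follows. This uses the common geometric-mean approximation of AHP weights rather than the paper's exact procedure, and the comparison matrix is invented.

```python
import math

# Sketch: AHP weights via the geometric-mean approximation, and Spearman's
# rank correlation for comparing the rankings two MCDM methods produce.

def ahp_weights(pairwise):
    gm = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

def spearman(rank_a, rank_b):
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Three sustainability factors; factor 1 judged 3x factor 2 and 5x factor 3.
pc = [[1, 3, 5], [1 / 3, 1, 2], [1 / 5, 1 / 2, 1]]
print([round(w, 3) for w in ahp_weights(pc)])
# Identical rankings from two methods give rho = 1.0.
print(spearman([1, 2, 3, 4], [1, 2, 3, 4]))   # 1.0
```

A Spearman coefficient near 1 between the Fuzzy EDAS and Fuzzy TOPSIS rankings is what validates the model's consistency.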

Soltani, Mohyiddine, Hichem Aouag, and Mohammed-Djamel Mouss. 2022. “A multiple criteria decision-making improvement strategy in complex manufacturing processes”. International Journal of Operational Research 45 (2). Publisher's Version Abstract

The purpose of this paper is to propose an improvement strategy based on multi-criteria decision-making approaches, including the fuzzy analytic hierarchy process (AHP), the preference ranking organisation method for enrichment evaluation II (PROMETHEE II) and višekriterijumsko kompromisno rangiranje (VIKOR), with the objective of simplifying and organising the improvement process in complex manufacturing processes. Firstly, the proposed strategy starts with the selection of decision makers, such as company leaders, to determine performance indicators. Then fuzzy AHP is used to quantify the weight of each defined indicator. Finally, the weights obtained from the fuzzy AHP approach are used as input to VIKOR and PROMETHEE II to rank the operations according to their improvement priority. The results obtained from each outranking method are compared and the best method is determined.

Sahraoui, Khaoula, Samia Aitouche, and Karima Aksa. 2022. “Deep learning in Logistics: systematic review”. International Journal of Logistics Systems and Management. Publisher's Version Abstract

Logistics is one of the main strategies that countries and businesses are improving in order to increase profits. Another prominent theme in today's logistics is emerging technologies. A key question for today's logistics and industry is how to profit from collected and accessible data in various processes such as decision making, production planning, and delivery scheduling, more specifically through deep learning methods. The aim of this paper is to identify the various applications of deep learning in logistics through a systematic literature review. A set of research questions has been identified and is answered in this article.

Zermane, Hanane, and Abbes Drardja. 2022. “Development of an efficient cement production monitoring system based on the improved random forest algorithm”. The International Journal of Advanced Manufacturing Technology 120 : 1853. Publisher's Version Abstract

Strengthening production plants and process control functions contributes to a global improvement of manufacturing systems because of their cross-functional characteristics in the industry. Companies have established various innovative and operational strategies to increase their competitiveness and value. Machine learning (ML) techniques have become an enticing option for addressing industrial issues in the current manufacturing sector since the emergence of Industry 4.0 and the extensive integration of paradigms such as big data and high computational power. Implementing a system able to identify faults early, so as to avoid critical situations in the production line and its environment, is crucial. Therefore, powerful machine learning algorithms are applied for fault diagnosis, real-time data classification, and predicting the operating state of the production line. Random forests proved to be the best classifier, with an accuracy of 97%, compared to 94.18% for the SVM model, 93.83% for the K-NN model, 80.25% for logistic regression, and 83.73% for the decision tree. The excellent experimental results obtained with the random forest model demonstrate the merits of this implementation for production performance, ensuring predictive maintenance and avoiding energy waste.
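The winning model above can be reproduced in outline with scikit-learn. The cement-plant data is not public, so synthetic classification data stands in; the accuracy printed here is illustrative, not the paper's 97%.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Sketch of the fault classifier on synthetic stand-in data: each row plays
# the role of a snapshot of process sensors, each label a functioning state.
X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(round(accuracy_score(y_te, clf.predict(X_te)), 3))
```

Swapping `RandomForestClassifier` for `SVC`, `KNeighborsClassifier`, `LogisticRegression` or `DecisionTreeClassifier` reproduces the kind of model comparison the study reports.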

The smart grid is an emerging system providing many benefits in digitizing traditional power distribution systems. However, the added benefits of digitization and the use of Internet of Things (IoT) technologies in smart grids also pose threats to their reliable continuous operation due to cyberattacks. Cyber-physical smart grid systems must be secured against increasing security threats and attacks. The most widely studied attacks in smart grids are false data injection attacks (FDIA), denial of service, distributed denial of service (DDoS), and spoofing attacks. These cyberattacks can jeopardize the smooth operation of a smart grid and result in considerable economic losses, equipment damage, and malicious control. This paper focuses on providing an extensive survey of defense mechanisms that can be used to detect these types of cyberattacks and mitigate the associated risks. Future research directions are also provided in the paper for efficient detection and prevention of such cyberattacks.

Kadri, Ouahab, Abderrezak Benyahia, and Adel Abdelhadi. 2022. “Tifinagh Handwriting Character Recognition Using a CNN Provided as a Web Service”. International Journal of Cloud Applications and Computing (IJCAC) 12 (1). Publisher's Version Abstract

Many cloud providers offer high-precision Optical Character Recognition (OCR) services. However, no provider offers Tifinagh OCR as a web service. Several works have been proposed to build powerful Tifinagh OCR systems; unfortunately, none has been developed as a web service. In this paper, we present a new architecture for Tifinagh handwriting recognition as a web service based on a deep learning model via Google Colab. For the implementation of our proposal, we used the new version of the TensorFlow library and a very large database of Tifinagh characters composed of 60,000 images from the Mohammed Vth University in Rabat. Experimental results show that the TensorFlow library running on a Tensor Processing Unit constitutes a very promising framework for developing fast and very precise Tifinagh OCR web services. The results show that our method based on a convolutional neural network outperforms existing methods based on support vector machines and extreme learning machines.

Fuel cell technology has been rapidly developed in the last decade owing to its clean characteristics and high efficiency. Proton exchange membrane fuel cells (PEMFCs) are increasingly used in transportation applications and small stationary applications; however, the cost and the unsatisfying durability of the PEMFC stack have limited their successful commercialization and market penetration. In recent years, thanks to the availability and quality of emerging PEMFC data, digitization offers possibilities to increase productivity and flexibility in fuel cell applications. Therefore, it is crucial to clarify the potential of digitization measures, how and where they can be applied, and their benefits. This paper focuses on the degradation performance of PEMFC stacks and develops a data-driven intelligent method to predict both short-term and long-term degradation. A dilated convolutional neural network is applied for the first time to predict time-dependent fuel cell performance and is shown to be more efficient than other recurrent networks. To deal with long-term performance uncertainty, a conditional neural network is proposed. Results show that the proposed method can predict not only the degradation tendency but also capture the dynamics of the degradation behaviour.
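The dilated convolution underlying the network described above can be sketched in a few lines: with dilation d, each output mixes inputs d steps apart, widening the receptive field over the degradation history without adding parameters. This is the generic operation, not the paper's architecture.

```python
# Sketch of a causal 1-D dilated convolution: with dilation d, each output
# combines inputs spaced d steps apart, enlarging the receptive field.

def dilated_conv1d(x, kernel, dilation):
    span = (len(kernel) - 1) * dilation
    out = []
    for t in range(span, len(x)):
        out.append(sum(k * x[t - (len(kernel) - 1 - i) * dilation]
                       for i, k in enumerate(kernel)))
    return out

signal = [0, 1, 2, 3, 4, 5, 6, 7]
print(dilated_conv1d(signal, [1, 1], dilation=1))  # sums of adjacent samples
print(dilated_conv1d(signal, [1, 1], dilation=3))  # sums of samples 3 apart
```

Stacking such layers with increasing dilation lets a network see long stretches of the voltage history, which is what makes it competitive with recurrent networks for degradation forecasting.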

Many machine learning-based methods have been widely applied to Coronary Artery Disease (CAD) and achieve high accuracy. However, they are black-box methods that are unable to explain the reasons behind a diagnosis. The trade-off between accuracy and interpretability of diagnosis models is important, especially for human disease. This work proposes an approach for generating rule-based models for CAD diagnosis. Classification rule generation is modeled as a combinatorial optimization problem that can be solved by means of metaheuristic algorithms. Swarm intelligence algorithms like the Equilibrium Optimizer Algorithm (EOA) have demonstrated great performance in solving different optimization problems. Our present study comes up with a Novel Discrete Equilibrium Optimizer Algorithm (NDEOA) for classification rule generation from a training CAD dataset. The proposed NDEOA is a discrete version of EOA, which uses a discrete encoding of a particle to represent a classification rule; new discrete operators are also defined for the particle's position update equation to adapt real-valued operators to discrete space. To evaluate the proposed approach, the real-world Z-Alizadeh Sani dataset has been employed. The proposed approach generates a diagnosis model composed of 17 rules: five rules for the class “Normal” and 12 rules for the class “CAD”. In comparison to nine black-box and eight white-box state-of-the-art approaches, the results show that the diagnosis model generated by the proposed approach is more accurate and more interpretable than all white-box models and competitive with the black-box models. It achieved an overall accuracy, sensitivity and specificity of 93.54%, 80% and 100%, respectively, which shows that the proposed approach can be successfully utilized to generate efficient rule-based CAD diagnosis models.
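The rule representation that such metaheuristic rule generators evolve can be sketched as follows: a rule is a conjunction of attribute-interval tests plus a predicted class, scored by its accuracy on the records it covers. Attribute names and thresholds here are invented, not taken from the Z-Alizadeh Sani dataset.

```python
# Sketch of a classification rule as a conjunction of interval tests plus a
# predicted class; fitness is the rule's accuracy on the records it covers.

def matches(rule, record):
    return all(lo <= record[attr] <= hi
               for attr, (lo, hi) in rule["tests"].items())

def rule_accuracy(rule, records):
    covered = [r for r in records if matches(rule, r)]
    if not covered:
        return 0.0
    return sum(r["label"] == rule["label"] for r in covered) / len(covered)

rule = {"tests": {"age": (55, 120), "bp": (140, 300)}, "label": "CAD"}
patients = [
    {"age": 60, "bp": 150, "label": "CAD"},
    {"age": 58, "bp": 145, "label": "CAD"},
    {"age": 62, "bp": 160, "label": "Normal"},
    {"age": 40, "bp": 120, "label": "Normal"},
]
print(rule_accuracy(rule, patients))   # 2 of the 3 covered records agree
```

An optimizer like NDEOA searches over the interval bounds and attribute subsets of such rules, which is what makes the resulting model readable, unlike a black-box classifier.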

Haouassi, Hichem, et al. 2022. “A new binary grasshopper optimization algorithm for feature selection problem”. Journal of King Saud University - Computer and Information Sciences 34 (2). Publisher's Version Abstract

The grasshopper optimization algorithm is one of the recent population-based optimization techniques inspired by the behaviour of grasshoppers in nature. It is an efficient optimization algorithm that has demonstrated excellent performance in solving continuous problems, but it cannot directly solve binary optimization problems. Many optimization problems are modelled as binary problems because their decision variables vary in binary space, such as feature selection in data classification. The main goal of feature selection is to find a small subset of features from a sizeable original set that optimizes the classification accuracy. In this paper, a new binary variant of the grasshopper optimization algorithm is proposed and applied to the feature subset selection problem. The proposed binary grasshopper optimization algorithm is tested and compared against five well-known swarm-based algorithms used in feature selection. All these algorithms are implemented and experimentally assessed on twenty datasets of various sizes. The results demonstrate that the proposed approach outperforms the other tested methods.
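Binary variants of continuous swarm algorithms typically map each continuous position component to a bit through a transfer function; a sigmoid-based sketch is shown below. This is the generic mechanism commonly used for binary feature-selection swarms, not necessarily the paper's exact update rule.

```python
import math
import random

# Sketch of the usual continuous-to-binary mapping in swarm feature selection:
# a sigmoid turns each position component into a selection probability.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def binarize(position, rng):
    """One bit per feature: 1 = keep the feature, 0 = drop it."""
    return [1 if rng.random() < sigmoid(x) else 0 for x in position]

rng = random.Random(42)
position = [4.0, -4.0, 0.5, -0.5]   # one continuous value per feature
print(binarize(position, rng))
```

A strongly positive component is almost always kept and a strongly negative one almost always dropped, so the swarm's continuous dynamics translate into a search over feature subsets.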
