Equipe 2

2023

Condition monitoring (CM) of industrial processes is essential for reducing downtime and increasing productivity through accurate Condition-Based Maintenance (CBM) scheduling. Indeed, advanced intelligent learning systems for Fault Diagnosis (FD) make it possible to effectively isolate and identify the origins of faults, and proven smart industrial infrastructure technology allows FD to be run as a fully decentralized, distributed computing task. However, such distribution across different regions and institutions is often limited by so-called data islanding: privacy requirements, security risks, and industry competition stemming from legal regulations or conflicts of interest. Therefore, Federated Learning (FL) is considered an efficient approach that keeps the data of multiple participants separate while collaboratively training an intelligent and reliable FD model. As, to the best of our knowledge, no comprehensive study has been devoted to this subject to date, such a review-based study is urgently needed. Within this scope, our work reviews recent advances in FL applications for process diagnosis, with special attention given to FD methods, challenges, and future prospects.

Machine learning prognosis for condition monitoring of safety-critical systems, such as aircraft engines, continually faces challenges of data unavailability, complexity, and drift. Consequently, this paper overcomes these challenges by introducing adaptive deep transfer learning methodologies, strengthened with robust feature engineering. Initially, data engineering encompassing: (i) principal component analysis (PCA) dimensionality reduction; (ii) feature selection using correlation analysis; (iii) denoising with empirical Bayesian Cauchy prior wavelets; and (iv) feature scaling is used to obtain the required learning representations. Next, an adaptive deep learning model, namely ProgNet, is trained on a source domain with sufficient degradation trajectories generated from PrognosEase, a run-to-fail data generator for health deterioration analysis. Then, ProgNet is transferred to the target domain of obtained degradation features for fine-tuning. The primary goal is to achieve a higher-level generalization while reducing algorithmic complexity, making experiments reproducible on available commercial computers with quad-core microprocessors. ProgNet is tested on the popular New Commercial Modular Aero-Propulsion System Simulation (N-CMAPSS) dataset describing real flight scenarios. To the extent we can report, this is the first time that all N-CMAPSS subsets have been fully screened in such an experiment. ProgNet evaluations with numerous metrics, including the well-known CMAPSS scoring function, demonstrate promising performance levels, reaching 234.61 for the entire test set. This is approximately four times better than the results obtained with the compared conventional deep learning models.
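
As a loose illustration of the kind of feature-engineering chain described above (scaling, correlation-based selection, and PCA; the wavelet denoising step is omitted for brevity), the following sketch uses synthetic data and NumPy only; it is not the paper's pipeline, and all names and thresholds are illustrative assumptions:

```python
import numpy as np

def engineer_features(X, n_components=2, corr_threshold=0.95):
    """Minimal sketch of a pre-processing chain: scaling,
    correlation-based feature selection, and PCA projection."""
    # (iv) feature scaling: zero mean, unit variance per column
    Xs = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    # (ii) drop one feature of each highly correlated pair
    C = np.corrcoef(Xs, rowvar=False)
    keep = []
    for j in range(C.shape[0]):
        if all(abs(C[j, k]) < corr_threshold for k in keep):
            keep.append(j)
    Xs = Xs[:, keep]
    # (i) PCA via SVD: project onto the leading components
    U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
    return Xs @ Vt[:n_components].T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
X[:, 5] = X[:, 0]                 # redundant feature, removed by selection
Z = engineer_features(X, n_components=2)
print(Z.shape)  # (100, 2)
```

The selection loop keeps a feature only if it is weakly correlated with every feature already kept, which is one common way to implement step (ii).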

2022
Aksa, Karima, and Mohieddine Harrag. 2022. “Surveillance Des Zones Critiques Et Des Accès Non Autorisés En Utilisant La Technologie Rfid”. khazzartech الاقتصاد الصناعي 12 (1): 702-717.

Surveillance is the act of observing human or environmental activities in order to supervise, control, or even react to a particular situation; this is referred to as supervision or monitoring. Radio Frequency IDentification (RFID) is one of the technologies used to retrieve data remotely, store it, and even process it. It is a current technology, and one of the Industry 4.0 technologies, integrated into many areas of daily life, notably surveillance and access control. The objective of this article is to show how to protect and monitor, in real time, critical industrial zones against any kind of unauthorized access by any person (employees, visitors, etc.) using RFID technology, through simulation examples built with a simulator dedicated to sensor networks.

Sahraoui, Khaoula, Samia Aitouche, and Karima Aksa. 2022. “Deep learning in Logistics: systematic review”. International Journal of Logistics Systems and Management.

Logistics is one of the main levers that countries and businesses improve in order to increase profits, and emerging technologies are a prominent theme in today's logistics. A central question for current developments in logistics and industry is how to exploit collected and accessible data in various processes such as decision making, production planning, and delivery scheduling, more specifically with deep learning methods. The aim of this paper is to identify the various applications of deep learning in logistics through a systematic literature review. A set of research questions has been identified to be answered by this article.

Zermane, Hanane, and Abbes Drardja. 2022. “Development of an efficient cement production monitoring system based on the improved random forest algorithm”. The International Journal of Advanced Manufacturing Technology 120: 1853.

Strengthening production plants and process control functions contributes to a global improvement of manufacturing systems because of their cross-functional characteristics in the industry. Companies have established various innovative and operational strategies, increasing both their competitiveness and their value. Machine learning (ML) techniques have become an enticing option for addressing industrial issues in the current manufacturing sector since the emergence of Industry 4.0 and the extensive integration of paradigms such as big data and high computational power. Implementing a system able to identify faults early, to avoid critical situations in the production line and its environment, is crucial. Therefore, powerful machine learning algorithms are applied for fault diagnosis, real-time data classification, and prediction of the production line's operating state. Random forest proved to be the best classifier, with an accuracy of 97%, compared to 94.18% for the SVM model, 93.83% for the K-NN model, 83.73% for the decision tree model, and 80.25% for logistic regression. The excellent experimental results obtained with the random forest model demonstrate the merits of this implementation for production performance, ensuring predictive maintenance and avoiding wasted energy.
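
A minimal sketch of such a classifier comparison, on synthetic data rather than the cement plant data, might look as follows (the scikit-learn model choices and parameters here are illustrative assumptions, not the authors' exact configurations):

```python
# Hypothetical re-creation of a multi-model accuracy comparison
# on synthetic classification data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=12,
                           n_informative=8, random_state=0)
models = {
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "svm": SVC(),
    "knn": KNeighborsClassifier(),
}
# Mean 5-fold cross-validated accuracy per model
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
for name, acc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {acc:.3f}")
```

Cross-validation, as used here, gives a fairer accuracy comparison than a single train/test split when ranking classifiers.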

Smart grid is an emerging system providing many benefits in digitizing traditional power distribution systems. However, the added benefits of digitization and the use of Internet of Things (IoT) technologies in smart grids also pose threats to reliable continuous operation due to cyberattacks. Cyber-physical smart grid systems must be secured against increasing security threats and attacks. The most widely studied attacks in smart grids are false data injection attacks (FDIA), denial of service (DoS), distributed denial of service (DDoS), and spoofing attacks. These cyberattacks can jeopardize the smooth operation of a smart grid and result in considerable economic losses, equipment damage, and malicious control. This paper focuses on providing an extensive survey of defense mechanisms that can be used to detect these types of cyberattacks and mitigate the associated risks. Future research directions for efficient detection and prevention of such cyberattacks are also provided.

Kadri, Ouahab, Abderrezak Benyahia, and Adel Abdelhadi. 2022. “Tifinagh Handwriting Character Recognition Using a CNN Provided as a Web Service”. International Journal of Cloud Applications and Computing (IJCAC) 12 (1).

Many cloud providers offer very high precision Optical Character Recognition (OCR) services. However, no provider offers Tifinagh OCR as a web service. Several works have been proposed to build powerful Tifinagh OCR systems; unfortunately, none has been deployed as a web service. In this paper, we present a new architecture for Tifinagh handwriting recognition as a web service based on a deep learning model via Google Colab. For the implementation of our proposal, we used the new version of the TensorFlow library and a very large database of Tifinagh characters composed of 60,000 images from Mohammed V University in Rabat. Experimental results show that the TensorFlow library, running on a Tensor Processing Unit, constitutes a very promising framework for developing fast and very precise Tifinagh OCR web services. The results show that our convolutional neural network-based method outperforms existing methods based on support vector machines and extreme learning machines.
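
The convolution, activation, and pooling operations at the core of such a CNN-based recognizer can be sketched in plain NumPy on a stand-in 28×28 image (a toy forward pass with one hand-picked kernel, not the paper's TensorFlow model):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D cross-correlation, the core CNN operation."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Non-overlapping max pooling, halving each spatial dimension."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

img = np.random.default_rng(1).random((28, 28))  # stand-in character image
edge = np.array([[1.0, 0.0, -1.0]] * 3)          # simple edge-detector kernel
features = max_pool(relu(conv2d(img, edge)))
print(features.shape)  # (13, 13)
```

In a trained CNN the kernels are learned rather than hand-picked, and many such feature maps are stacked before the final classification layers.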

Federated learning (FL) is a data-privacy-preserving, decentralized process that allows local edge devices of smart infrastructures to train a collaborative model independently while keeping data localized. FL algorithms, built around a well-structured average of the training parameters (e.g., the weights and biases resulting from stochastic gradient descent training variants), face many challenges, namely expensive communication, systems heterogeneity, statistical heterogeneity, and privacy concerns. In this context, our paper targets the four aforementioned challenges while focusing on reducing communication and computational costs by involving recursive least squares (RLS) training rules. Accordingly, to the best of our knowledge, this is the first time the RLS algorithm has been modified to fully accommodate non-independent and identically distributed (non-IID) data for federated transfer learning (FTL). Furthermore, this paper also introduces a newly generated dataset capable of emulating such real conditions and of making data investigation possible on ordinary commercial computers with quad-core microprocessors, with less need for high-end computing hardware. Applications of FTL-RLS to the generated data under different levels of complexity, closely related to different levels of cardinality, lead to a variety of conclusions supporting its performance for future uses.
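
The federated averaging baseline that such FL work builds on, local training on each client followed by a size-weighted parameter average on the server, can be sketched as follows (a toy linear-regression setup with synthetic client data; the paper's RLS-based training rules are not reproduced here):

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=20):
    """Plain gradient descent on a linear model; stands in for each
    client's private training step."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fed_avg(weights, sizes):
    """Server step: size-weighted average of client parameters."""
    return np.average(weights, axis=0, weights=np.asarray(sizes, float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
w = np.zeros(2)                         # shared global model
for _ in range(10):                     # communication rounds
    client_ws, sizes = [], []
    for _ in range(3):                  # three clients with private data
        X = rng.normal(size=(40, 2))
        y = X @ true_w + 0.01 * rng.normal(size=40)
        client_ws.append(local_update(w.copy(), X, y))
        sizes.append(len(y))
    w = fed_avg(client_ws, sizes)
print(np.round(w, 2))
```

Note that only the parameters travel between clients and server; the raw data `X, y` never leaves each client, which is the privacy property FL relies on.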

The green conversion of proton exchange membrane fuel cells (PEMFCs) has received particular attention in both stationary and transportation applications. However, the poor durability of PEMFCs represents a major problem that hampers commercial application, since dynamic operating conditions, including physical deterioration, have a serious impact on cell performance. Under these circumstances, prognosis and health management (PHM) plays an important role in prolonging durability and preventing damage propagation via the accurate planning of a condition-based maintenance (CBM) schedule. In this specific topic, health deterioration modeling with deep learning (DL) is the most widely studied representation learning tool due to its ability to adapt to rapid changes in data complexity and drift. In this context, the present paper investigates deeper representations by exposing DL models themselves to recurrent expansion with multiple repeats. Such recurrent expansion of DL (REDL) allows new, more meaningful representations to be explored by repeatedly using generated feature maps and responses to create new robust models. The proposed REDL, designed as an adaptive learning algorithm, is tested on a PEMFC deterioration dataset and compared to its deep learning baseline version under time series analysis. Using multiple numeric and visual metrics, the results support the REDL learning scheme by showing promising performance.
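
The recurrent expansion idea, repeatedly reusing feature maps and model responses as new inputs, can be loosely sketched with a random-feature regressor standing in for the DL model (all data and dimensions here are synthetic and illustrative, not the paper's REDL configuration):

```python
import numpy as np

def fit_ridge(H, y, lam=1e-2):
    """Closed-form ridge-regression readout."""
    return np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=200)

mses = []
Z = X                                       # current model inputs
for r in range(3):                          # three expansion "repeats"
    W = rng.normal(size=(Z.shape[1], 30))   # random nonlinear feature map
    H = np.tanh(Z @ W)
    beta = fit_ridge(H, y)
    y_hat = H @ beta
    mses.append(np.mean((y - y_hat) ** 2))
    # Expand: raw inputs + previous feature maps + previous response
    Z = np.column_stack([X, H, y_hat])
print([round(m, 3) for m in mses])
```

Each repeat builds a new model whose inputs include the previous model's internal representations and predictions, which is the core of the recurrent expansion scheme described above.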

Berghout, Tarek, Mohamed Benbouzid, and S-M Muyeen. 2022. “Machine learning for cybersecurity in smart grids: A comprehensive review-based study on methods, solutions, and prospects”. International Journal of Critical Infrastructure Protection 38.

In modern Smart Grids (SGs) ruled by advanced computing and networking technologies, condition monitoring relies on secure cyber-physical connectivity. Due to this connection, a portion of the transported data, containing confidential information, is vulnerable to several cyber threats and must be protected. SG cyberspace adversaries attempt to gain access through networking platforms to commit several criminal activities, such as disruption or malicious manipulation of the whole electricity delivery process, including generation, distribution, and even customer services such as billing, leading to serious damage, including financial and reputational losses. Therefore, human awareness training and software technologies are necessary precautions to ensure the reliability of data traffic and power transmission. The available literature makes it undeniable that Machine Learning (ML) has become one of the leading artificial intelligence technologies capable of detecting, identifying, and responding to adversary attacks in SGs by mitigating them. In this context, the main objective of this paper is to review the different ML tools used in recent years for cyberattack analysis in SGs. It also provides important guidelines on ML model selection as a global solution when building an attack prediction model. A detailed classification is therefore developed with respect to the data security triad, i.e., Confidentiality, Integrity, and Availability (CIA), across different types of cyber threats, systems, and datasets. Furthermore, this review highlights the various encountered challenges, drawbacks, and possible solutions as future prospects for ML cybersecurity applications in SGs.

Reliability and security of power distribution and data traffic in the smart grid (SG) are very important for industrial control systems (ICSs). Indeed, SG cyber-physical connectivity is subject to several vulnerabilities through which cyberthreats can damage or disrupt its processes. Today's ICSs experience highly complex data change and dynamism, increasing the difficulty of detecting and mitigating cyberattacks. Accordingly, and since Machine Learning (ML) is widely studied in cybersecurity, the objectives of this paper are twofold. First, for algorithmic simplicity, a small-scale ML algorithm that reduces computational costs is proposed. The algorithm adopts a neural network with an augmented hidden layer (NAHL) to easily and efficiently accomplish the learning procedures. Second, to address the data complexity problem of rapid change and dynamism, a label autoencoding approach is introduced for Embedding Labels in the NAHL (EL-NAHL) architecture, taking advantage of label propagation when separating data scatters. Furthermore, to provide a more realistic analysis addressing real-world threat scenarios, a dataset from an electric traction substation used in the high-speed rail industry is adopted in this work. Compared to some existing algorithms and other previous works, the achieved results show that the proposed EL-NAHL architecture is effective even under massive, dynamically changing, and imbalanced data.

Berghout, Tarek, and Mohamed Benbouzid. 2022. “A Systematic Guide for Predicting Remaining Useful Life with Machine Learning”. Electronics 11 (7).

Prognosis and health management (PHM) are mandatory tasks for real-time monitoring of damage propagation and aging of operating systems under working conditions. More specifically, PHM simplifies condition-based maintenance planning by assessing the actual state of health (SoH) through the level of aging indicators. In fact, an accurate estimate of SoH helps determine the remaining useful life (RUL), which is the period between the present and the end of a system's useful life. Traditional residual-based modeling approaches, which rely on the interpretation of appropriate physical laws to simulate operating behaviors, fail as the complexity of systems increases. Therefore, machine learning (ML) becomes an unquestionable alternative that employs the behavior of historical data to mimic a large number of SoHs under varying working conditions. In this context, the objective of this paper is twofold: first, to provide an overview of recent developments in RUL prediction while reviewing recent ML tools used for RUL prediction in different critical systems; second, and more importantly, to make the RUL prediction process, from data acquisition to model building and evaluation, straightforward. This paper also provides step-by-step guidelines to help determine the appropriate solution for any specific type of driven data. This guide is followed by a classification of different types of ML tools covering all the discussed cases. Ultimately, this review-based study uses these guidelines to determine learning model limitations, reconstruction challenges, and future prospects.
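
A minimal example of the final PHM step, extrapolating a health index to a failure threshold to obtain an RUL estimate, might look like this (synthetic degradation data; the linear trend and the threshold value are illustrative assumptions):

```python
import numpy as np

def estimate_rul(health_index, failure_threshold):
    """Fit a linear degradation trend to the observed health index and
    extrapolate to the failure threshold; RUL is time-to-threshold."""
    t = np.arange(len(health_index))
    slope, intercept = np.polyfit(t, health_index, 1)
    t_fail = (failure_threshold - intercept) / slope
    return max(t_fail - t[-1], 0.0)

rng = np.random.default_rng(0)
t = np.arange(60)
hi = 1.0 - 0.01 * t + 0.005 * rng.normal(size=60)  # degrading health index
rul = estimate_rul(hi, failure_threshold=0.2)
print(round(rul, 1))
```

Data-driven RUL predictors replace this hand-fitted trend with a learned mapping from sensor histories to SoH, but the extrapolate-to-threshold logic stays the same.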


2021
Zermane, Hanane, Leila-Hayet Mouss, and Sonia Benaicha. 2021. “Automation and fuzzy control of a manufacturing system”. International Journal of Fuzzy Systems and Advanced Applications.

The automation of manufacturing systems has become a major obligation due to the exponential growth of industrial equipment and programming tools, as well as growing needs and customer requirements. In our work, this automation is achieved through application programming tools from Siemens: PCS 7 (Process Control System) for industrial process control and FuzzyControl++ for fuzzy control. An industrial application is designed, developed, and implemented in the Ain-Touta cement factory (S.CIM.AT), located in the province of Batna, eastern Algeria, particularly in the cement mill, which produces the final product, cement.

Aksa, Karima. 2021. “Principles of Biology in Service of Technology: DNA Computing”. Algerian Journal of Environmental Science and Technology (ALJEST) 7 (20).

As living beings cannot survive without the natural resources available on Earth, technology is no exception; it cannot develop without the inspiration provided by that same nature.

The field of biology has extensively contributed to computing through the "code of life", DNA (DeoxyriboNucleic Acid), since Adleman's pioneering work in the past century. This combination gave birth to DNA computing, a very interesting new branch of biochemistry that works in a massively parallel fashion with high energy efficiency while requiring almost no space.

The field of molecular computing is still young, and as it progresses from concepts to engineering, researchers will have to address important open issues.

By encoding data into DNA strands, many NP-complete problems have been solved and many new efficient techniques have been proposed in the field of cryptography.

The aim of this paper is to give an overview of bio-inspired systems and to summarize the great role of the DNA molecule in serving the technology field.

Aksa, Karima, et al. 2021. “Vers une Nouvelle Révolution Industrielle : Industrie 4.0”. Revue Méditerranéenne des Télécommunications 11 (1).

The fourth industrial revolution (also called the Industrial Internet of Things) depends entirely on digitization through the Internet of Things and virtual networks. This revolution, which is evolving at an exponential rather than linear pace, will enable the creation of smarter factories, industries, and processes, which will in turn translate into improved flexibility and productivity and better use of material and human resources.

This article is devoted to introducing this new industrial revolution (Industry 4.0), the major technologies behind its emergence, their expected benefits, and the challenges that must be taken into consideration.

Aksa, Karima, et al. 2021. “Developing a Web Platform for the Management of the Predictive Maintenance in Smart Factories”. Wireless Personal Communications 119: 1469–1497.

Industry 4.0 is a tsunami that will invade the whole world. The real challenge of future factories requires a high degree of reliability in both machinery and equipment. Consequently, shifting course towards new trends is an inevitable obligation of this fourth industrial revolution, in which maintenance has radically changed into what is called predictive maintenance 4.0 (PdM 4.0). The latter is used to avoid predicted machine problems and increase machine lifespan, bearing in mind that machines with no predicted problems will never be checked. However, successful prediction of all kinds of problems, while minimizing energy and resource consumption and saving costs, requires many new emerging technologies, such as Internet of Things infrastructure, the collection and distribution of data from different smart sensors, and the analysis and interpretation of huge amounts of data using machine/deep learning. This paper presents Industry 4.0 and the specific technologies used to improve the existing predictive maintenance strategy. An example is given via a web platform to give a clear idea of how PdM 4.0 is applied in smart factories.

Benbouzid, Mohamed, et al. 2021. “Intelligent Condition Monitoring of Wind Power Systems: State of the Art Review”. Energies 14 (18).

Modern wind turbines operate in continuously transient conditions, with varying speed, torque, and power based on the stochastic nature of the wind resource. This variability affects not only the operational performance of the wind power system, but can also affect its integrity under service conditions. Condition monitoring continues to play an important role in achieving reliable and economic operation of wind turbines. This paper reviews the current advances in wind turbine condition monitoring, ranging from conventional condition monitoring and signal processing tools to machine-learning-based condition monitoring and usage of big data mining for predictive maintenance. A systematic review is presented of signal-based and data-driven modeling methodologies using intelligent and machine learning approaches, with the view to providing a critical evaluation of the recent developments in this area, and their applications in diagnosis, prognosis, health assessment, and predictive maintenance of wind turbines and farms.

Berghout, Tarek, et al. 2021. “A Semi-Supervised Deep Transfer Learning Approach for Rolling-Element Bearing Remaining Useful Life Prediction”. IEEE Transactions on Instrumentation and Measurement (2022) 37 (2).

Deep learning techniques have recently brought many improvements to neural network training, especially for prognosis and health management. The success of such an intelligent health assessment model depends not only on the availability of labeled historical data but also on careful sample selection. However, in real operating systems such as induction machines, which generally have a long reliable life, storing the entire operation history, including deterioration (e.g., of bearings), is very expensive, and the result is difficult to feed accurately into the training model. An alternative is to sequentially store samples whose degradation patterns resemble real damage behavior by imposing accelerated deterioration. The lack of labels and the differences in distributions caused by the imposed deterioration ultimately bias the training model and limit its knowledge capacity. In an attempt to overcome these drawbacks, a novel sequence-by-sequence deep learning algorithm is proposed, able to expand generalization capacity by transferring knowledge obtained from the life cycles of similar systems. The new algorithm determines health status using a long short-term memory neural network as the primary component of adaptive learning to extract both health-stage and health-index inferences. Experimental validation on the PRONOSTIA induction machine bearing degradation datasets clearly demonstrates the capacity and higher performance of the proposed deep learning knowledge transfer-based prognosis approach.
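
The single-step recurrence of the long short-term memory cell used as the primary learning component above can be sketched in NumPy (random untrained weights and a synthetic input sequence, purely illustrative):

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: input, forget, and output gates plus a
    candidate update, computed from input x and previous state (h, c)."""
    z = W @ x + U @ h + b                 # stacked gate pre-activations
    n = h.size
    i, f, o = (1 / (1 + np.exp(-z[k*n:(k+1)*n])) for k in range(3))
    g = np.tanh(z[3*n:])                  # candidate cell update
    c = f * c + i * g                     # cell state: forget + write
    h = o * np.tanh(c)                    # gated hidden state output
    return h, c

rng = np.random.default_rng(0)
nx, nh = 4, 8                             # input and hidden sizes
W = rng.normal(scale=0.1, size=(4 * nh, nx))
U = rng.normal(scale=0.1, size=(4 * nh, nh))
b = np.zeros(4 * nh)
h, c = np.zeros(nh), np.zeros(nh)
for x in rng.normal(size=(10, nx)):       # a 10-step sensor sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (8,)
```

The cell state `c` is what lets the network carry degradation information across long sequences; the final `h` would feed the health-stage and health-index readouts in a trained model.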

Since bearing deterioration patterns are difficult to collect from real, long-lifetime scenarios, data-driven research has been directed towards recovering them through accelerated life tests. Consequently, features insufficiently recovered due to rapid damage propagation are likely to lead to poorly generalized learning machines. Knowledge-driven learning offers a solution by providing prior assumptions through transfer learning. Likewise, the absence of true labels creates inconsistency-related problems between samples, and teacher-given label behaviors lead to ill-posed predictors. Therefore, in an attempt to overcome the drawbacks of incomplete, unlabeled data, a new autoencoder has been designed as an additional source that correlates inputs and labels by exploiting label information in a completely unsupervised learning scheme. Moreover, its stacked denoising version is able to recover them more robustly for new unseen data. Due to the non-stationary and sequentially driven nature of the samples, the recovered representations are fed into a transfer learning convolutional long short-term memory neural network for further meaningful learning representations. The assessment procedures were benchmarked against recent methods under different training datasets. The obtained results show higher efficiency, confirming the strength of the new learning path.
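
The denoising principle underlying the stacked denoising autoencoder mentioned above, corrupt the inputs and learn a map that reconstructs the clean signal, can be illustrated with a toy linear stand-in (synthetic low-rank data; a real stacked denoising autoencoder uses nonlinear layers trained by backpropagation, not this closed-form solve):

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(300, 3))                  # low-dimensional latent factors
A = rng.normal(size=(3, 10))
X = Z @ A                                      # clean signals on a 3-D subspace
X_noisy = X + 0.3 * rng.normal(size=X.shape)   # corruption step

# Optimal linear map from corrupted inputs back to clean targets:
# the same corrupt-then-reconstruct objective a denoising autoencoder
# optimizes, here solved in closed form instead of by gradient descent.
M, *_ = np.linalg.lstsq(X_noisy, X, rcond=None)
X_rec = X_noisy @ M

noisy_err = np.mean((X_noisy - X) ** 2)
rec_err = np.mean((X_rec - X) ** 2)
print(rec_err < noisy_err)  # True
```

Reconstruction beats the raw noisy input because the clean data lies on a low-dimensional subspace that the learned map can exploit, which is the same reason denoising autoencoders recover robust representations.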
