Publications by Type: Journal Article

2022

Federated learning (FL) is a data-privacy-preserving, decentralized process that allows the local edge devices of smart infrastructures to train a collaborative model independently while keeping data localized. FL algorithms, which rely on a well-structured averaging of training parameters (e.g., the weights and biases resulting from training based on stochastic gradient descent variants), are subject to many challenges, namely expensive communication, systems heterogeneity, statistical heterogeneity, and privacy concerns. In this context, our paper targets the four aforementioned challenges while focusing on reducing communication and computational costs by involving recursive least squares (RLS) training rules. Accordingly, to the best of our knowledge, this is the first time that the RLS algorithm has been modified to fully accommodate non-independent and identically distributed (non-IID) data for federated transfer learning (FTL). Furthermore, this paper also introduces a newly generated dataset capable of emulating such real conditions and of making data investigation feasible on ordinary commercial computers with quad-core microprocessors, reducing the need for high-end computing hardware. Applying FTL-RLS to the generated data under different levels of complexity, closely related to different levels of cardinality, leads to a variety of conclusions supporting its performance for future use.
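
As an illustration of the kind of RLS training rule the abstract refers to, here is a minimal sketch of a generic recursive least squares update for a linear readout; the class name and hyperparameters are hypothetical, and this is not the paper's exact FTL-RLS algorithm:

```python
import numpy as np

class RLSLayer:
    """Recursive least squares (RLS) trained linear readout; a generic
    sketch of the training rule, not the paper's exact FTL-RLS method."""

    def __init__(self, n_inputs, n_outputs, forgetting=0.99, delta=100.0):
        self.w = np.zeros((n_inputs, n_outputs))   # readout weights
        self.P = np.eye(n_inputs) * delta          # inverse correlation matrix
        self.lam = forgetting                      # forgetting factor

    def update(self, x, y):
        """One RLS step for input x (n_inputs,) and target y (n_outputs,)."""
        x = x.reshape(-1, 1)
        Px = self.P @ x
        g = Px / (self.lam + (x.T @ Px).item())    # gain vector
        e = y - (x.T @ self.w).ravel()             # a priori error
        self.w += g @ e.reshape(1, -1)             # weight correction
        self.P = (self.P - g @ Px.T) / self.lam    # covariance downdate
        return e
```

In a federated setting, each client would run such updates locally, and the server could then average the resulting `w` matrices across clients, FedAvg style.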

The green conversion of proton exchange membrane fuel cells (PEMFCs) has received particular attention in both stationary and transportation applications. However, the poor durability of PEMFCs represents a major problem hampering their commercial application, since dynamic operating conditions, including physical deterioration, have a serious impact on cell performance. Under these circumstances, prognosis and health management (PHM) plays an important role in prolonging durability and preventing damage propagation via accurate planning of a condition-based maintenance (CBM) schedule. In this specific topic, health deterioration modeling with deep learning (DL) is the most widely studied representation learning tool due to its ability to adapt to rapid changes in data complexity and drift. In this context, the present paper investigates further, deeper representations by exposing DL models themselves to recurrent expansion with multiple repeats. Such a recurrent expansion of DL (REDL) allows new, more meaningful representations to be explored by repeatedly using generated feature maps and responses to create new, robust models. The proposed REDL, which is designed to be an adaptive learning algorithm, is tested on a PEMFC deterioration dataset and compared to its deep learning baseline version under time series analysis. Using multiple numeric and visual metrics, the results support the REDL learning scheme by showing promising performance.
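
The recurrent-expansion loop lends itself to a compact sketch: each round trains a fresh model on the inputs augmented with the previous round's outputs. In this toy version (all names hypothetical), predictions stand in for the feature maps the paper reuses:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def recurrent_expansion(X, y, rounds=3):
    """Recurrent-expansion sketch: each round trains a new model on the
    inputs augmented with the previous round's responses; predictions are
    used here as a stand-in for reused feature maps."""
    X_aug, models = X, []
    for _ in range(rounds):
        model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500)
        model.fit(X_aug, y)
        y_hat = model.predict(X_aug).reshape(len(X_aug), -1)
        X_aug = np.hstack([X_aug, y_hat])   # carry responses to the next round
        models.append(model)
    return models
```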

Berghout, Tarek, Mohamed Benbouzid, and S. M. Muyeen. 2022. “Machine learning for cybersecurity in smart grids: A comprehensive review-based study on methods, solutions, and prospects”. International Journal of Critical Infrastructure Protection 38. Publisher's Version Abstract

In modern Smart Grids (SGs), ruled by advanced computing and networking technologies, condition monitoring relies on secure cyber-physical connectivity. Due to this connection, a portion of the transported data, containing confidential information, must be protected, as it is vulnerable and subject to several cyber threats. SG cyberspace adversaries attempt to gain access through networking platforms to commit several criminal activities, such as disrupting or maliciously manipulating the whole electricity delivery process, including generation, distribution, and even customer services such as billing, leading to serious damage, including financial losses and loss of reputation. Therefore, human awareness training and software technologies are necessary precautions to ensure the reliability of data traffic and power transmission. By exploring the available literature, it is undeniable that Machine Learning (ML) has become one of the latest and leading artificial intelligence technologies capable of detecting, identifying, and mitigating adversary attacks in SGs. In this context, the main objective of this paper is to review the different ML tools used in recent years for cyberattack analysis in SGs. It also provides important guidelines on ML model selection as a global solution when building an attack predictive model. A detailed classification is therefore developed with respect to the data security triad, i.e., Confidentiality, Integrity, and Availability (CIA), within different types of cyber threats, systems, and datasets. Furthermore, this review highlights the various encountered challenges, drawbacks, and possible solutions as future prospects for ML cybersecurity applications in SGs.

Reliability and security of power distribution and data traffic in the smart grid (SG) are very important for industrial control systems (ICSs). Indeed, SG cyber-physical connectivity is subject to several vulnerabilities that can damage or disrupt its process immunity via cyberthreats. Today's ICSs experience highly complex data change and dynamism, which increases the complexity of detecting and mitigating cyberattacks. Accordingly, and since Machine Learning (ML) is widely studied in cybersecurity, the objectives of this paper are twofold. First, for algorithmic simplicity, a small-scale ML algorithm that attempts to reduce computational costs is proposed. The algorithm adopts a neural network with an augmented hidden layer (NAHL) to easily and efficiently accomplish the learning procedures. Second, to address the data complexity problem of rapid change and dynamism, a label autoencoding approach is introduced for Embedding Labels in the NAHL (EL-NAHL) architecture, taking advantage of label propagation when separating data scatters. Furthermore, to provide a more realistic analysis that addresses real-world threat scenarios, a dataset from an electric traction substation used in the high-speed rail industry is adopted in this work. Compared to some existing algorithms and other previous works, the achieved results show that the proposed EL-NAHL architecture is effective even under massive, dynamically changing, and imbalanced data.
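
One plausible reading of the label-embedding idea is sketched below with scikit-learn; the exact EL-NAHL construction may differ. A small network first learns to reconstruct one-hot labels from the inputs, and its outputs then augment the features fed to a single wide ("augmented") hidden-layer classifier:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

def train_el_nahl(X, y, width=1024):
    """Sketch under assumptions: a small regressor learns a label embedding
    (one-hot reconstruction from inputs); its outputs then augment the
    features of a single wide-hidden-layer classifier."""
    classes = np.unique(y)
    Y = (y.reshape(-1, 1) == classes).astype(float)           # one-hot labels
    embedder = MLPRegressor(hidden_layer_sizes=(128,), max_iter=500).fit(X, Y)
    X_aug = np.hstack([X, embedder.predict(X)])               # embed labels
    clf = MLPClassifier(hidden_layer_sizes=(width,), max_iter=500).fit(X_aug, y)
    return embedder, clf

def predict_el_nahl(embedder, clf, X):
    # the embedder needs no true labels, so prediction works on unseen data
    return clf.predict(np.hstack([X, embedder.predict(X)]))
```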

Berghout, Tarek, and Mohamed Benbouzid. 2022. “A Systematic Guide for Predicting Remaining Useful Life with Machine Learning”. Electronics 11 (7). Publisher's Version Abstract

Prognosis and health management (PHM) are mandatory tasks for the real-time monitoring of damage propagation and aging in operating systems under working conditions. More specifically, PHM simplifies conditional maintenance planning by assessing the actual state of health (SoH) through the level of aging indicators. In fact, an accurate estimate of SoH helps determine the remaining useful life (RUL), which is the period between the present and the end of a system's useful life. Traditional residue-based modeling approaches that rely on the interpretation of appropriate physical laws to simulate operating behaviors fail as the complexity of systems increases. Therefore, machine learning (ML) becomes an unquestionable alternative that employs the behavior of historical data to mimic a large number of SoHs under varying working conditions. In this context, the objective of this paper is twofold. First, to provide an overview of recent developments in RUL prediction while reviewing recent ML tools used for RUL prediction in different critical systems. Second, and more importantly, to make the RUL prediction process, from data acquisition to model building and evaluation, straightforward. This paper also provides step-by-step guidelines to help determine the appropriate solution for any specific type of driven data. This guide is followed by a classification of different types of ML tools to cover all the discussed cases. Ultimately, this review-based study uses these guidelines to determine learning model limitations, reconstruction challenges, and future prospects.
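
For orientation, the central quantity of this review admits a compact threshold-based definition, where $\mathrm{SoH}_{\mathrm{fail}}$ denotes an assumed failure threshold on the health indicator:

$$\mathrm{RUL}(t) = t_{\mathrm{EoL}} - t, \qquad t_{\mathrm{EoL}} = \min\{\tau \ge t : \mathrm{SoH}(\tau) \le \mathrm{SoH}_{\mathrm{fail}}\}$$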

Benaggoune, Khaled, et al. 2022. “A deep learning pipeline for breast cancer ki-67 proliferation index scoring”. Image and Video Processing (eess.IV). Publisher's Version Abstract

The Ki-67 proliferation index is an essential biomarker that helps pathologists diagnose and select appropriate treatments. However, automatic evaluation of Ki-67 is difficult due to nuclei overlapping and complex variations in their properties. This paper proposes an integrated pipeline for accurate automatic counting of Ki-67, in which the impact of nuclei separation techniques is highlighted. First, semantic segmentation is performed by combining the Squeeze-and-Excitation ResNet and U-Net algorithms to extract nuclei from the background. The extracted nuclei are then divided into overlapped and non-overlapped regions based on eight geometric and statistical features. A marker-based watershed algorithm is subsequently proposed and applied only to the overlapped regions to separate nuclei. Finally, deep features are extracted from each nucleus patch using ResNet18 and classified as positive or negative by a random forest classifier. The proposed pipeline's performance is validated on a dataset from the Department of Pathology at the Hôpital Nord Franche-Comté.
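
The separation stage is a classical marker-based watershed; a minimal scikit-image sketch is shown below. Parameter values are assumptions, and the surrounding segmentation, feature-extraction, and random forest stages of the pipeline are omitted:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def separate_overlapped_nuclei(mask, min_distance=5):
    """Marker-based watershed over a binary mask of an overlapped region
    (a generic sketch of the separation stage; thresholds are assumptions)."""
    distance = ndi.distance_transform_edt(mask)          # distance to background
    coords = peak_local_max(distance, min_distance=min_distance, labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return watershed(-distance, markers, mask=mask)      # one label per nucleus
```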

This paper proposes a new approach for remaining useful life prediction that combines a bond graph, a Gaussian mixture model, and similarity techniques to exploit both physical knowledge and the available data. The proposed method is based on the identification of relevant variables that carry information on degradation. To this end, the causal properties of the bond graph (BG) are first used to identify the relevant sensors through fault observability. Then, a second stage of analysis based on statistical metrics is performed to reduce the number of sensors to only those carrying useful information for failure prognosis, thus optimizing the data to be used in the prognosis phase. To generate data for the different system states, a simulator based on the developed BG is used. A Gaussian mixture model is then applied to the generated data for fault diagnosis and clustering. The remaining useful life is estimated using a similarity technique. An application to a mechatronic system is considered to highlight the effectiveness of the proposed approach.
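
A rough sketch of the diagnosis-plus-similarity idea, assuming a plain Euclidean similarity between state sequences (the paper's exact metric may differ):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def rul_by_similarity(train_runs, train_ruls, test_run, n_states=4):
    """Sketch: a GMM clusters degradation states; RUL is then borrowed
    from the most similar training trajectory."""
    gmm = GaussianMixture(n_components=n_states, random_state=0)
    gmm.fit(np.vstack(train_runs))
    test_states = gmm.predict(test_run)
    best_rul, best_dist = None, np.inf
    for run, rul in zip(train_runs, train_ruls):
        n = min(len(run), len(test_run))
        dist = np.linalg.norm(gmm.predict(run[:n]) - test_states[:n])
        if dist < best_dist:                       # closest state sequence
            best_rul, best_dist = rul, dist
    return best_rul
```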

In recent years, a large community of researchers and industrial practitioners has been attracted to combining different prognostics models, as such a strategy yields boosted accuracy and more robust performance compared to the exploitation of single models. The present work is devoted to the investigation of three new fusion schemes for remaining useful life forecasting. These integrated frameworks aggregate a set of Gaussian process regression models by means of induced ordered weighted averaging (IOWA) operators. The combination procedure is built upon three proposed analytical weighting schemes based on exponential, logarithmic, and inverse functions. In addition, the uncertainty aspect is supported in this work, where the proposed functions are used to compute a weighted average of the variances released by the competing Gaussian process regression models. The training data are transformed into gradient values, which are adopted as new training data instead of the original observations. A lithium-ion battery dataset is used as a benchmark to prove the efficiency of the proposed weighting schemes. The obtained results are promising and may provide some guidelines for future advances in performing robust fusion options to accurately estimate the remaining useful life.
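
The fusion step can be sketched as follows; the exponential, logarithmic, and inverse weight forms below are plausible stand-ins for the paper's analytical schemes, with weights induced by each model's confidence rank:

```python
import numpy as np

def iowa_fuse(means, variances, scheme="exp"):
    """Order-induced weighted fusion of several GPR predictions: weights
    decay with each model's confidence rank (1 = lowest variance)."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    ranks = np.empty(len(means), dtype=float)
    ranks[np.argsort(variances)] = np.arange(1, len(means) + 1)
    if scheme == "exp":
        w = np.exp(-ranks)
    elif scheme == "log":
        w = 1.0 / np.log1p(ranks)
    else:                                    # "inverse"
        w = 1.0 / ranks
    w /= w.sum()
    # fused mean and a matching weighted average of the variances
    return float(w @ means), float(w @ variances)
```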

2021
Benaicha, Sonia, et al. 2021. “Development of an Industrial Application with Neuro-Fuzzy Systems”. International Journal of Fuzzy Systems and Advanced Applications 8. Publisher's Version Abstract

In this paper, our objective is the detection of deterioration within the estimated operating time, by triggering preventive action before a failure, and the classification of breakdowns after failure, by triggering the appropriate diagnosis and/or maintenance action. For this purpose, we propose a new neuro-fuzzy prognosis assistance system based on pattern recognition, called "NFPROG" (Neuro-Fuzzy Prognosis). NFPROG is interactive simulation software developed within the Laboratory of Automation and Production (LAP), University of Batna, Algeria. It is a four-layer fuzzy perceptron whose architecture is based on Elman neural networks. The system is applied to the cement manufacturing process (cooking process) of the cement manufacturing company of Ain-Touta, Batna, Algeria. Since this company has a Siemens S7-400 PLC installation and configuration, PCS7 was chosen as the programming platform for our system.

Aksa, Karima. 2021. “Principles of Biology in Service of Technology: DNA Computing”. Algerian Journal of Environmental Science and Technology (ALJEST) 7 (20). Publisher's Version Abstract

Just as living beings cannot survive without the natural resources available on earth, technology is no exception: it cannot develop without the inspiring help given by that same nature.

The field of biology has extensively contributed to the computing field through the "code of life", DNA (deoxyribonucleic acid), ever since Adleman's pioneering experiment in the past century. This combination gave birth to DNA computing, a very interesting new branch of biochemistry that works in a massively parallel fashion, with high energy efficiency, while requiring almost no space.

The field of molecular computing is still young, and as it progresses from concepts to engineering, researchers will have to address many important open issues.

By encoding data into DNA strands, many NP-complete problems have been solved, and many new efficient techniques have been proposed in the cryptography field.

The aim of this paper is to give an overview of bio-inspired systems and to summarize the great role of the DNA molecule in the service of the technology field.

Bensakhria, Mohamed, and Samir Abdelhamid. 2021. “A Hybrid Methodology Based on Heuristic Algorithms for a Production Distribution System with Routing Decisions”. BizInfo (Blace) Journal of Economics, Management and Informatics 12 (2): 1-22. Publisher's Version Abstract

In this paper, we address the integration of a two-level supply chain with multiple items. This two-level production-distribution system features a capacitated production facility supplying several retailers located in the same region. If production occurs, a fixed setup cost and unit production costs are incurred. In addition, deliveries are made from the plant to the retailers by a limited number of capacitated vehicles, incurring routing costs. This work aims to implement a minimization solution that reduces total costs at both the production facility and the retailers. The adopted methodology, based on a hybrid heuristic combining greedy and genetic algorithms, uses a strong formulation to provide a solution of guaranteed quality that is as good as or better than those provided by the MIP optimizer. The results demonstrate that the proposed heuristics are effective and perform impressively in terms of computational efficiency and solution quality.
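
A generic sketch of the genetic-algorithm component is given below, assuming a binary decision vector (e.g., setup on/off per period); the encoding is hypothetical, and in the hybrid scheme the greedy solution would typically seed the initial population:

```python
import random

def genetic_search(costf, n_genes, pop_size=30, generations=100, p_mut=0.1):
    """Generic GA sketch: elitist selection, one-point crossover,
    bit-flip mutation over a hypothetical binary encoding."""
    pop = [[random.randint(0, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=costf)                        # elitist selection
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n_genes)
            child = a[:cut] + b[cut:]              # one-point crossover
            if random.random() < p_mut:            # bit-flip mutation
                i = random.randrange(n_genes)
                child[i] ^= 1
            children.append(child)
        pop = elite + children
    return min(pop, key=costf)
```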

Benfriha, Abdennour-Ilyas, et al. 2021. “Dynamic planning design of three level distribution network with horizontal and vertical exchange”. Abstract

Inventory management in distribution networks remains a challenging task due to the nature of demand and limited storage capacity. In this work, we study a three-level, multi-product, multi-period distribution network consisting of a central warehouse, three distribution centres and six wholesalers, each facing random demand. To optimise inventory management in the distribution network, we first propose horizontal cooperation between actors of the same level in the form of product exchange; we then propose a second approach based on vertical-horizontal cooperation. Both approaches are modelled as MIP models and solved using the CPLEX solver. The objective of this study is to analyse performance in terms of costs, stock quantities and customer satisfaction.
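
A toy PuLP sketch of the horizontal-exchange idea is shown below with hypothetical data; the paper's full MIP spans three levels, multiple products and periods, and is solved with CPLEX (the open-source CBC solver is used here so the example is self-contained):

```python
import pulp

sites = ["w1", "w2", "w3"]                                 # wholesalers (toy data)
demand = {"w1": 8, "w2": 3, "w3": 5}
stock = {"w1": 4, "w2": 9, "w3": 6}
arcs = [(i, j) for i in sites for j in sites if i != j]

m = pulp.LpProblem("horizontal_exchange", pulp.LpMinimize)
x = pulp.LpVariable.dicts("ship", arcs, lowBound=0)        # exchanged quantities
short = pulp.LpVariable.dicts("short", sites, lowBound=0)  # unmet demand
m += pulp.lpSum(x.values()) + 10 * pulp.lpSum(short.values())
for j in sites:
    inflow = pulp.lpSum(x[(i, j)] for i in sites if i != j)
    outflow = pulp.lpSum(x[(j, k)] for k in sites if k != j)
    m += stock[j] + inflow - outflow + short[j] >= demand[j]  # meet demand
for i in sites:
    m += pulp.lpSum(x[(i, k)] for k in sites if k != i) <= stock[i]
m.solve(pulp.PULP_CBC_CMD(msg=False))
print({a: x[a].value() for a in arcs})
```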

Aksa, Karima, et al. 2021. “Vers une Nouvelle Révolution Industrielle : Industrie 4.0”. Revue Méditerranéenne des Télécommunications 11 (1). Publisher's Version Abstract

The fourth industrial revolution (also known as the Industrial Internet of Things) depends entirely on digitization through the Internet of Things and virtual networks. This revolution, which is evolving at an exponential rather than linear pace, will enable the creation of smarter factories, industries, and processes, which will in turn translate into improved flexibility and productivity and better use of material and human resources.

This article is devoted to introducing this new industrial revolution (Industry 4.0), the major technologies contributing to its emergence, their expected benefits, and the challenges to be taken into consideration.

Aksa, Karima, et al. 2021. “Developing a Web Platform for the Management of the Predictive Maintenance in Smart Factories”. Wireless Personal Communications 119: 1469–1497. Publisher's Version Abstract

Industry 4.0 is a tsunami that will invade the whole world. The real challenge of future factories requires a high degree of reliability in both machinery and equipment. Consequently, shifting the rudder towards new trends is an inevitable obligation in this fourth industrial revolution, where the maintenance system has radically changed into a new one called predictive maintenance 4.0 (PdM 4.0). The latter is used to avoid predicted machine problems and increase machine lifespan, bearing in mind that machines with no predicted problems will never be checked. However, successful prediction of any kind of problem, while minimizing energy and resource consumption and saving costs, requires many new emerging technologies, such as an Internet of Things infrastructure, collection and distribution of data from different smart sensors, and analysis/interpretation of huge amounts of data using machine/deep learning, etc. This paper is devoted to presenting Industry 4.0 and its specific technologies used to improve the existing predictive maintenance strategy. An example is given via a web platform to give a clear idea of how PdM 4.0 is applied in smart factories.

Benbouzid, Mohamed, et al. 2021. “Intelligent Condition Monitoring of Wind Power Systems: State of the Art Review”. Energies 14 (18). Abstract

Modern wind turbines operate in continuously transient conditions, with varying speed, torque, and power based on the stochastic nature of the wind resource. This variability affects not only the operational performance of the wind power system, but can also affect its integrity under service conditions. Condition monitoring continues to play an important role in achieving reliable and economic operation of wind turbines. This paper reviews the current advances in wind turbine condition monitoring, ranging from conventional condition monitoring and signal processing tools to machine-learning-based condition monitoring and the use of big data mining for predictive maintenance. A systematic review is presented of signal-based and data-driven modeling methodologies using intelligent and machine learning approaches, with a view to providing a critical evaluation of recent developments in this area and their applications in diagnosis, prognosis, health assessment, and predictive maintenance of wind turbines and farms.

Berghout, Tarek, et al. 2021. “A Semi-Supervised Deep Transfer Learning Approach for Rolling-Element Bearing Remaining Useful Life Prediction”. IEEE Transactions on Instrumentation and Measurement (2022) 37 (2). Publisher's Version Abstract

Deep learning techniques have recently brought many improvements in the field of neural network training, especially for prognosis and health management. The success of such an intelligent health assessment model depends not only on the availability of labeled historical data but also on the careful selection of samples. However, in real operating systems such as induction machines, which generally have a long reliable life, storing the entire operation history, including deterioration (i.e., bearings), would be very expensive and difficult to feed accurately into the training model. Alternatives instead sequentially store samples whose degradation patterns are similar in damage behavior to real ones by imposing accelerated deterioration. The lack of labels and the differences in distributions caused by the imposed deterioration will ultimately bias the training model and limit its knowledge capacity. In an attempt to overcome these drawbacks, a novel sequence-by-sequence deep learning algorithm able to expand the generalization capacity by transferring knowledge obtained from the life cycles of similar systems is proposed. The new algorithm aims to determine health status by involving a long short-term memory neural network as the primary component of adaptive learning to extract both health stage and health index inferences. Experimental validation performed using the PRONOSTIA induction machine bearing degradation datasets clearly proves the capacity and higher performance of the proposed deep learning knowledge transfer-based prognosis approach.
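
The LSTM-with-two-inferences design described above can be sketched in PyTorch as follows; layer sizes and head names are assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class HealthLSTM(nn.Module):
    """Sketch of an LSTM backbone with two inference heads, one for the
    discrete health stage and one for a continuous health index."""

    def __init__(self, n_features, hidden=64, n_stages=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.stage_head = nn.Linear(hidden, n_stages)   # health-stage logits
        self.index_head = nn.Linear(hidden, 1)          # health index in [0, 1]

    def forward(self, x):                               # x: (batch, time, features)
        out, _ = self.lstm(x)
        h = out[:, -1]                                  # last time step summary
        return self.stage_head(h), torch.sigmoid(self.index_head(h))
```

For the transfer step, the backbone would be pretrained on run-to-failure sequences from similar systems, then fine-tuned on the target machine's data.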

Meraghni, Safa, et al. 2021. “Towards Digital Twins Driven Breast Cancer Detection”. Lecture Notes in Networks and Systems 285 : 87–99. Publisher's Version Abstract

Digital twins (DTs) have transformed the industrial world by changing the development phase of a product and the way equipment is used. With a digital twin, an object's evolution data allow us to anticipate and optimize its performance. Healthcare is in the midst of a digital transition towards personalized, predictive, preventive, and participatory medicine, and the digital twin is one of the key tools of this change. In this work, a DT is proposed for the diagnosis of breast cancer based on breast skin temperature. Research has focused on thermography as a non-invasive scanning solution for breast cancer diagnosis. However, body temperature is influenced by many factors, such as breast anatomy, physiological functions, blood pressure, etc. The proposed DT updates the bio-heat model's temperature using the data collected by temperature sensors and complementary data from smart devices. Consequently, the proposed DT is personalized using the collected data to reflect the behavior of the person to whom it is connected.
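
Bio-heat models of this kind commonly build on the Pennes formulation, reproduced here for orientation (the abstract does not specify the paper's exact model):

$$\rho c \,\frac{\partial T}{\partial t} = \nabla \cdot (k \nabla T) + \rho_b c_b\, \omega_b\,(T_a - T) + q_m$$

where $\rho$, $c$, and $k$ are the tissue density, specific heat, and thermal conductivity, $\rho_b$, $c_b$, and $\omega_b$ the blood density, specific heat, and perfusion rate, $T_a$ the arterial temperature, and $q_m$ the metabolic heat source. The DT would assimilate sensor temperatures to personalize such parameters.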

Since bearing deterioration patterns are difficult to collect from real, long-lifetime scenarios, data-driven research has been directed towards recovering them by imposing accelerated life tests. Consequently, features insufficiently recovered due to rapid damage propagation are more likely to lead to poorly generalized learning machines. Knowledge-driven learning comes as a solution by providing prior assumptions through transfer learning. Likewise, the absence of true labels can create inconsistency-related problems between samples, and teacher-given label behaviors lead to more ill-posed predictors. Therefore, in an attempt to overcome the drawbacks of incomplete, unlabeled data, a new autoencoder has been designed as an additional source that can correlate inputs and labels by exploiting label information in a completely unsupervised learning scheme. Additionally, its stacked denoising version is able to recover them more robustly for new unseen data. Due to the non-stationary and sequentially driven nature of the samples, the recovered representations are fed into a transfer learning convolutional long short-term memory neural network for further meaningful representation learning. The assessment procedures were benchmarked against recent methods under different training datasets. The obtained results show greater efficiency, confirming the strength of the new learning path.
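
A minimal PyTorch sketch of a denoising autoencoder that exploits label information in its reconstruction target is given below; names and sizes are assumptions, and the paper's stacked variant would chain several such blocks:

```python
import torch
import torch.nn as nn

class DenoisingLabelAE(nn.Module):
    """Sketch: a denoising autoencoder that reconstructs inputs together
    with (pseudo-)labels from corrupted inputs, so the learned code
    correlates features and label information."""

    def __init__(self, n_features, n_labels, code=32, noise=0.1):
        super().__init__()
        self.noise = noise
        self.enc = nn.Sequential(nn.Linear(n_features, code), nn.ReLU())
        self.dec_x = nn.Linear(code, n_features)   # input reconstruction
        self.dec_y = nn.Linear(code, n_labels)     # label reconstruction

    def forward(self, x):
        z = self.enc(x + self.noise * torch.randn_like(x))  # corrupt, encode
        return self.dec_x(z), self.dec_y(z)
```
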
Seddik, Mohamed-Takieddine, et al. 2021. “Detection of Flooding Attack on OBS Network Using Ant Colony Optimization and Machine Learning”. Computación y Sistemas 25 (2). Publisher's Version Abstract

Optical burst switching (OBS) has become one of the best and most widely used optical networking techniques. It offers more efficient bandwidth usage than optical packet switching (OPS) and optical circuit switching (OCS). However, it undergoes more attacks than other techniques, and classical security approaches cannot solve its security problem. Therefore, a new security approach based on machine learning and cloud computing is proposed in this article. We used the Google Colab platform to apply Support Vector Machine (SVM) and Extreme Learning Machine (ELM) to the Burst Header Packet (BHP) flooding attack on the Optical Burst Switching (OBS) Network dataset.
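
The SVM branch of such an experiment can be sketched with scikit-learn; the file name and the target column are assumptions about the dataset layout, not the paper's exact setup:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("obs_network_dataset.csv")           # hypothetical file name
X = pd.get_dummies(df.drop(columns=["Node Status"]))  # encode categoricals
y = df["Node Status"]                                 # assumed target column
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```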
