Data-driven prognostics and health management (PHM) is key to increasing the productivity of industrial processes through accurate maintenance planning. The growing complexity of the systems themselves, combined with cyber-physical connectivity, has raised many new challenges for the discipline. As a result, the challenges of data complexity have expanded to include those of decentralized learning. In this context, this perspective paper describes these challenges and provides future directions based on a review of the relevant state of the art.
Lithium-ion (Li-ion) batteries play an important role in providing necessary energy when acting as a main or backup source of electricity. However, the unavailability of battery aging discharge data in most real-world applications makes State of Health (SoH) assessment very challenging. Accelerated aging is therefore adopted to emulate the degradation process and obtain an SoH estimate. However, accelerated aging generates only a limited set of deterioration patterns, which suffer from a high level of complexity due to the non-linearity and non-stationarity imposed by harsh conditions. In this context, this paper provides a predictive model capable of handling incomplete data, with one solution for each of the two problems of pattern scarcity and pattern complexity. First, to overcome the lack of patterns, a robust collaborative feature extractor (RCFE) is designed as a collaboration among a set of improved restricted Boltzmann machines (I-RBMs), so that learning knowledge can be shared among different locally trained I-RBMs to create a more generalized global extraction model. Second, a set of RCFEs is then evolved through a neural network with an augmented hidden layer (NAHL) to enhance predictive ability by further exploring representation learning, thereby addressing pattern complexity. The designed RCFE-NAHL is trained to predict SoH from constant current (CC) discharge characteristics, employing multiple characteristics recorded during the constant voltage (CV) charging process as health indicators. The performance of the proposed SoH prediction approach is evaluated on a set of battery life cycles from the well-known NASA database, and the achieved results clearly highlight the high accuracy and robustness of the proposed learning model.
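The collaborative idea behind the RCFE can be illustrated with a minimal sketch: several small Bernoulli RBMs are trained locally with one-step contrastive divergence (CD-1) on disjoint data partitions, and their parameters are then averaged into a global extractor. The abstract does not specify the collaboration mechanism; simple parameter averaging, the `TinyRBM` class, and all hyperparameters below are illustrative assumptions, not the paper's actual I-RBM design.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyRBM:
    """Minimal Bernoulli RBM trained with one-step contrastive divergence (CD-1)."""
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.bv = np.zeros(n_visible)
        self.bh = np.zeros(n_hidden)
        self.lr = lr

    def fit(self, X, epochs=20):
        for _ in range(epochs):
            ph = sigmoid(X @ self.W + self.bh)            # positive phase
            h = (rng.random(ph.shape) < ph).astype(float)  # sample hidden units
            pv = sigmoid(h @ self.W.T + self.bv)           # reconstruction
            ph2 = sigmoid(pv @ self.W + self.bh)
            # CD-1 parameter updates
            self.W += self.lr * (X.T @ ph - pv.T @ ph2) / len(X)
            self.bv += self.lr * (X - pv).mean(axis=0)
            self.bh += self.lr * (ph - ph2).mean(axis=0)

    def transform(self, X):
        return sigmoid(X @ self.W + self.bh)

# two "local" RBMs trained on disjoint binary data partitions
X1 = (rng.random((100, 8)) > 0.5).astype(float)
X2 = (rng.random((100, 8)) > 0.5).astype(float)
local = [TinyRBM(8, 4), TinyRBM(8, 4)]
local[0].fit(X1)
local[1].fit(X2)

# collaboration step (assumed): average local parameters into a global extractor
global_rbm = TinyRBM(8, 4)
global_rbm.W = (local[0].W + local[1].W) / 2
global_rbm.bv = (local[0].bv + local[1].bv) / 2
global_rbm.bh = (local[0].bh + local[1].bh) / 2

features = global_rbm.transform(X1)  # shared learned representation
```

The averaged extractor can then feed downstream predictors (the NAHL in the paper) without any site exchanging raw data.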
Advanced technologies, such as the Internet of Things (IoT) and Artificial Intelligence (AI), underpin many of the innovations in Industry 4.0. However, the interconnectivity and open nature of such systems in smart industrial facilities can also be targeted and abused by malicious actors, which reinforces the importance of cyber security. In this paper, we present a secure, decentralized, and Differentially Private (DP) Federated Learning (FL)-based IDS (2DF-IDS) for securing smart industrial facilities. The proposed 2DF-IDS comprises three building blocks, namely: a key exchange protocol (which secures the weights communicated among all peers in the system), a differentially private gradient exchange scheme (which achieves improved privacy for the FL approach), and a decentralized FL approach (which mitigates the single point of failure/attack risk associated with the aggregation server in conventional FL). We evaluate our proposed system through detailed experiments using a real-world IoT/IIoT dataset, and the results show that the proposed 2DF-IDS system can identify different types of cyber attacks in an Industrial IoT system with high performance. For instance, the proposed system achieves accuracy comparable to the centralized learning approach (94.37% for both) and outperforms the FL-based approach (93.91%). The proposed system is also shown to improve overall performance by 12%, 13%, and 9% in terms of F1-score, recall, and precision, respectively, under strict privacy settings when compared to other competing FL-based IDS solutions.
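The core of a differentially private gradient exchange can be sketched as the standard Gaussian mechanism used in DP-SGD-style training: each peer clips its local gradient to a fixed L2 norm and adds calibrated Gaussian noise before sharing it. The function name, clip norm, and noise multiplier below are illustrative assumptions, not the 2DF-IDS parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize_gradient(grad, clip_norm=1.0, noise_mult=1.1):
    """Clip a gradient to L2 norm <= clip_norm, then add Gaussian noise
    scaled to that sensitivity (the Gaussian-mechanism recipe)."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_mult * clip_norm, size=grad.shape)
    return clipped + noise

# each peer privatizes its local gradient before exchanging it with the others;
# with no aggregation server, every peer averages the noisy gradients it receives
peer_grads = [rng.normal(0, 5, size=10) for _ in range(4)]
noisy = [privatize_gradient(g) for g in peer_grads]
update = np.mean(noisy, axis=0)
```

Because clipping bounds each peer's contribution, the added noise yields a quantifiable (epsilon, delta) privacy guarantee per exchanged gradient; averaging over peers then partially cancels the noise.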
Starting from a worrying observation, namely that companies have difficulty controlling the anomalies of their manufacturing processes, we carried out a case study on practical data from the Fertial Complex to analyze the main parameters of the ammonia neutralization by nitric acid process and gain better control over them. This article proposes a precise diagnosis of this process to detect dysfunction problems affecting the final product. We start with a general diagnosis of the process using the Statistical Process Control (SPC) method; this approach is an excellent way to monitor and improve product quality, and it provides very useful observations that allowed us to identify the parameters suffering from quality-affecting problems. After discovering the parameters incapable of producing the quality required by the standards, we apply two machine learning techniques suited to the type of data of these parameters in order to detect anomalies. The first, the kernel connectivity-based outlier factor (COF) algorithm, records for each object its degree of being an outlier; the second, Isolation Forest, builds a forest of random trees to isolate anomalies efficiently. The results obtained were compared to determine which algorithm is best suited to monitoring and detecting the problems of these parameters. We find that the COF method is more efficient than Isolation Forest, which leads us to rely on this technique for this kind of process in order to avoid passing poor quality on to the customer in the future.
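For intuition on this kind of comparison, the sketch below runs scikit-learn's `IsolationForest` on synthetic two-dimensional process data with a few injected outliers; a COF implementation (available, for instance, in the PyOD library) could be swapped in and scored on the same data. The synthetic data, contamination rate, and outlier placement are illustrative assumptions, not the Fertial process parameters.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# synthetic process readings: 200 in-control samples plus 5 injected anomalies
normal = rng.normal(0, 1, size=(200, 2))
outliers = rng.uniform(6, 8, size=(5, 2))
X = np.vstack([normal, outliers])

# Isolation Forest isolates points with short average path lengths in random trees
clf = IsolationForest(contamination=0.03, random_state=0).fit(X)
labels = clf.predict(X)  # +1 = inlier, -1 = flagged anomaly
```

Comparing the flagged indices against the known anomalies (as the paper does on real parameters) gives the detection quality of each candidate algorithm.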
An effective Food Traceability System (FTS) in a Food Supply Chain (FSC) should adequately provide all necessary information to the consumer(s), meet the requirements of the relevant agencies, and improve food safety as well as consumer confidence. New information and communication technologies are rapidly advancing, especially after the emergence of the Internet of Things (IoT). Consequently, new food traceability systems have become mainly based on IoT. Many studies have been conducted on food traceability, focusing mainly on practical implementation and theoretical concepts. Accordingly, various definitions, technologies, and principles have been proposed. The "traceability" concept has been defined in several ways, with each new definition attempting to generalize its predecessors; nevertheless, no standard definition has been reached. Furthermore, the architecture of IoT-based food traceability systems has not yet been standardized. Similarly, the technologies used in this field have not yet been well classified. This article presents an analysis of the existing definitions of food traceability and proposes a new one that aims to be simpler, more general, and more encompassing than the previous ones. We also propose a new architecture for IoT-based food traceability systems as well as a new classification of the technologies used in this context, and we discuss the applications of these technologies and future trends in the field. Mainly, an FTS can make use of three types of technologies: Identification and Monitoring Technologies (IMT), Communication Technologies (CT), and Data Management Technologies (DMT). Improving a food traceability system requires the use of the best new technologies. A variety of promising technologies exists today to enhance FTS, such as fifth-generation (5G) mobile communication systems and distributed ledger technology (DLT).
The growth of manufacturing industries and the highly competitive environment have forced manufacturing organizations to develop advanced improvement strategies and enhance their sustainability performance. The integration of sustainable manufacturing in industrial operations leads to enhanced process performance through the reduction of waste, cost, and environmental impacts while satisfying ergonomic conditions. For this reason, various firms have adopted sustainable manufacturing concepts to enhance their performance and hold a prestigious competitive position. The purpose of this research is to develop an integrated Pythagorean Fuzzy MCDM model to enhance the application process of the conventional Lean Manufacturing (LM) approach. Firstly, an extended Value Stream Mapping is proposed to assess the sustainability of the manufacturing process and identify the causes of waste from a sustainability viewpoint. Secondly, Pythagorean Fuzzy Decision-Making Trial and Evaluation Laboratory (PF-DEMATEL) is employed to analyze the interrelationships among the identified causes. Thirdly, the Pythagorean Fuzzy Technique for Order Preference by Similarity to Ideal Solution (PF-TOPSIS) is introduced to prioritize a set of solutions in order to overcome the investigated causes and improve the sustainability of the manufacturing operations. Finally, a sensitivity analysis is conducted to assess the robustness of the obtained results. The proposed method has several attractive features. It addresses the drawbacks of conventional LM and enhances its analysis and improvement tasks. Moreover, the proposed approach offers an advanced application process for Lean Manufacturing in a sustainability context. Additionally, the suggested strategy helps leaders assess the current state of the manufacturing processes and select the appropriate solutions for successful sustainability implementation. The validity of the proposed approach was investigated in a real case study.
The results confirm its effectiveness and indicate that using MCDM approaches in the LM application process offers a consistent and flexible path toward sustainable manufacturing implementation.
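For intuition on the prioritization step, the sketch below implements classical crisp TOPSIS, the backbone that PF-TOPSIS extends with Pythagorean fuzzy membership/non-membership degrees. The decision matrix, criteria, and weights are invented for illustration and do not come from the case study.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Classical TOPSIS: rank alternatives by relative closeness to the ideal.
    matrix: alternatives x criteria; benefit[j] is True if higher is better."""
    M = np.asarray(matrix, float)
    # vector-normalize each criterion column, then apply the criterion weights
    V = M / np.linalg.norm(M, axis=0) * weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to ideal solution
    d_neg = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal solution
    return d_neg / (d_pos + d_neg)             # closeness coefficient, higher = better

# three hypothetical waste-reduction solutions scored on cost (lower is better),
# environmental impact (lower is better), and feasibility (higher is better)
scores = topsis([[200, 3, 8],
                 [150, 5, 6],
                 [300, 2, 9]],
                weights=np.array([0.4, 0.3, 0.3]),
                benefit=np.array([False, False, True]))
ranking = np.argsort(scores)[::-1]  # best alternative first
```

PF-TOPSIS follows the same ideal/anti-ideal logic but replaces crisp scores with Pythagorean fuzzy numbers, allowing experts to express hesitation in their judgments.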
Over the past few decades, Lean Manufacturing (LM) has been the pinnacle of strategies applied for cost and waste reduction. However, as the search for competitive advantage and production growth continues, there is a growing consciousness of environmental preservation. With this consideration in mind, this research investigates and applies Value Stream Mapping (VSM) techniques to help reduce the environmental impacts of manufacturing companies. The research is based on empirical observation within the Chassis weld plant of Company X. The observation focuses on the weld operations and uses the cross-member line of the Auxiliary Cross as the point of study. Data are collected using various measuring instruments to capture the emissions emitted by the weld and service equipment, and are thereafter visualised via an Environmental Value Stream Map (EVSM) using a 7-step method. It was found that the total lead time to build an Auxiliary Cross equates to 16.70 minutes, during which carbon emissions are emitted by the weld and service equipment. It was additionally found that the UPR x LWR stage of the process showed both the highest cycle time and the highest carbon emissions, and it therefore provides a starting point for emission-reduction activity. The EVSM aids in the development of a method that allows quick and comprehensive analysis of energy and material flows. The results of this research are important to practitioners and academics, as they provide an extension and further capability of Lean Manufacturing tools. Additionally, the EVSM provides a gateway to realising environmental benefits and sustainable manufacturing through Lean Manufacturing.
Condition monitoring (CM) of industrial processes is essential for reducing downtime and increasing productivity through accurate Condition-Based Maintenance (CBM) scheduling. Indeed, advanced intelligent learning systems for Fault Diagnosis (FD) make it possible to effectively isolate and identify the origins of faults, and proven smart industrial infrastructure technologies enable FD to be performed as a fully decentralized distributed computing task. However, such distribution among different regions/institutions, often subject to so-called data islanding, is limited by privacy concerns, security risks, and industry competition arising from legal regulations or conflicts of interest. Therefore, Federated Learning (FL) is considered an efficient way to keep the data of multiple participants separate while collaboratively training an intelligent and reliable FD model. As, to the best of our knowledge, no comprehensive study has yet been introduced on this subject, such a review-based study is urgently needed. Within this scope, our work is devoted to reviewing recent advances in FL applications for process diagnostics, with special attention given to FD methods, challenges, and future prospects.
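The collaborative training loop that FL-based fault diagnosis builds on can be sketched with federated averaging (FedAvg): each client takes a local gradient step on its private data, and only the model parameters are pooled via a size-weighted average. The linear least-squares task, client counts, and learning rate below are toy assumptions standing in for a real diagnostic model.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_step(w, X, y, lr=0.1):
    """One local gradient step on a linear least-squares loss:
    grad = 2 X^T (Xw - y) / n."""
    grad = 2 * X.T @ (X @ w - y) / len(X)
    return w - lr * grad

def fed_avg(weights, sizes):
    """Federated averaging: size-weighted mean of the client models."""
    return np.average(weights, axis=0, weights=np.asarray(sizes, float))

# three "clients" (e.g. plants) hold private condition-monitoring data
true_w = np.array([1.5, -2.0])
clients = []
for n in (50, 80, 30):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.05, size=n)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(200):  # communication rounds: local step, then aggregation
    local = [local_step(w, X, y) for X, y in clients]
    w = fed_avg(local, [len(X) for X, _ in clients])
```

Only the parameter vectors cross institutional boundaries, which is precisely what makes FL attractive under the data-islanding constraints the review discusses.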
Machine learning prognosis for condition monitoring of safety-critical systems, such as aircraft engines, continually faces challenges of data unavailability, complexity, and drift. This paper addresses these challenges by introducing adaptive deep transfer learning methodologies strengthened with robust feature engineering. Initially, data engineering encompassing (i) principal component analysis (PCA) dimensionality reduction, (ii) feature selection using correlation analysis, (iii) denoising with empirical Bayesian Cauchy prior wavelets, and (iv) feature scaling is used to obtain the required learning representations. Next, an adaptive deep learning model, namely ProgNet, is trained on a source domain with sufficient degradation trajectories generated from PrognosEase, a run-to-fail data generator for health deterioration analysis. Then, ProgNet is transferred to the target domain of the obtained degradation features for fine-tuning. The primary goal is to achieve higher-level generalization while reducing algorithmic complexity, making the experiments reproducible on commercially available computers with quad-core microprocessors. ProgNet is tested on the popular New Commercial Modular Aero-Propulsion System Simulation (N-CMAPSS) dataset describing real flight scenarios. To the best of our knowledge, this is the first time that all N-CMAPSS subsets have been fully screened in such an experiment. ProgNet evaluations with numerous metrics, including the well-known CMAPSS scoring function, demonstrate promising performance levels, reaching 234.61 for the entire test set. This is approximately four times better than the results obtained with the compared conventional deep learning models.
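Steps (i), (ii), and (iv) of the data-engineering stage can be sketched with scikit-learn on synthetic degradation data; step (iii), wavelet denoising with an empirical Bayesian Cauchy prior, is omitted here because it needs a dedicated wavelet library (e.g. PyWavelets) and prior-specific thresholding. The synthetic sensors, the 0.3 correlation threshold, and the component count are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(7)

# synthetic stand-in for degradation data: 500 cycles x 12 sensor channels,
# with a remaining-useful-life (RUL) style target decreasing over the run
X = rng.normal(size=(500, 12))
rul = np.linspace(100, 0, 500) + rng.normal(scale=2, size=500)
X[:, 0] = -rul + rng.normal(scale=1, size=500)      # informative sensor
X[:, 1] = 0.5 * rul + rng.normal(scale=1, size=500)  # informative sensor

# (ii) correlation-based selection: keep sensors correlated with the target
corr = np.array([abs(np.corrcoef(X[:, j], rul)[0, 1]) for j in range(X.shape[1])])
selected = X[:, corr > 0.3]

# (i) PCA dimensionality reduction on the retained sensors
reduced = PCA(n_components=min(2, selected.shape[1])).fit_transform(selected)

# (iv) feature scaling to [0, 1] to form the learning representation
scaled = MinMaxScaler().fit_transform(reduced)
```

The resulting low-dimensional, scaled trajectories are the kind of representation a model like ProgNet would be fine-tuned on in the target domain.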