In this paper, our objective is the detection of deterioration in the estimated operating time, so that preventive action can be taken before a failure occurs, and the classification of breakdowns after failure, so that diagnosis and/or maintenance actions can be recommended. To this end, we propose a new neuro-fuzzy prognosis assistance system based on pattern recognition, called "NFPROG" (Neuro-Fuzzy Prognosis). NFPROG is an interactive simulation software package developed at the Laboratory of Automation and Production (LAP), University of Batna, Algeria. It is a four-layer fuzzy perceptron whose architecture is based on Elman neural networks. The system is applied to the cement manufacturing process (cooking process) at the Ain-Touta cement manufacturing company, Batna, Algeria. Since this company operates a Siemens S7-400 PLC installation and configuration, Siemens PCS 7 was chosen as the programming platform for our system.
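For context, an Elman network feeds the previous hidden activations back into the hidden layer through context units. The sketch below is a generic single-step forward pass in plain Python with arbitrary toy weights and sizes; it illustrates only the recurrence, not the NFPROG implementation or its fuzzy layers.

```python
import math

def elman_step(x, context, W_xh, W_ch, W_hy, b_h, b_y):
    """One forward step of a simple Elman cell.

    The hidden layer sees the current input x plus the previous
    hidden activations stored in the context units."""
    n_hidden = len(b_h)
    h = []
    for j in range(n_hidden):
        s = b_h[j]
        s += sum(W_xh[j][i] * x[i] for i in range(len(x)))
        s += sum(W_ch[j][k] * context[k] for k in range(n_hidden))
        h.append(math.tanh(s))
    y = [b_y[m] + sum(W_hy[m][j] * h[j] for j in range(n_hidden))
         for m in range(len(b_y))]
    return y, h  # the new context is the current hidden state

# toy sizes (hypothetical): 2 inputs, 3 hidden/context units, 1 output
W_xh = [[0.1, -0.2], [0.3, 0.1], [-0.1, 0.2]]
W_ch = [[0.05] * 3 for _ in range(3)]
W_hy = [[0.4, -0.3, 0.2]]
b_h, b_y = [0.0] * 3, [0.0]

context = [0.0] * 3
for x in ([1.0, 0.5], [0.8, 0.4], [0.2, 0.9]):  # a short input sequence
    y, context = elman_step(x, context, W_xh, W_ch, W_hy, b_h, b_y)
```

Because the context units carry the previous hidden state, the output for each input depends on the whole sequence seen so far, which is what makes such a network usable for condition trends rather than isolated measurements.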
Just as living beings cannot survive without the natural resources available on Earth, technology is no exception: it cannot develop without the inspiring help provided by that same nature.
The field of biology has contributed extensively to computing through the "code of life", DNA (deoxyribonucleic acid), ever since Adleman introduced DNA-based computation in the past century. This combination gave birth to DNA computing, a very interesting new branch of biochemistry: it is massively parallel, highly energy efficient, and requires almost no space.
The field of molecular computing is still new, and as it progresses from concepts to engineering, researchers will address these important issues.
By encoding data into DNA strands, many NP-complete problems have been solved and many efficient new techniques have been proposed in the field of cryptography.
The aim of this paper is to give an overview of bio-inspired systems and to summarize the great role of the DNA molecule in serving the technology field.
In this paper, we address the integration of a two-level supply chain with multiple items. This two-level production-distribution system features a capacitated production facility supplying several retailers located in the same region. When production occurs, it incurs a fixed setup cost plus unit production costs. In addition, deliveries are made from the plant to the retailers by a limited number of capacitated vehicles, incurring routing costs. This work aims to minimize the total cost across both the production facility and the retailers. The adopted methodology, a hybrid of a greedy heuristic and a genetic algorithm, uses a strong formulation to provide solutions of guaranteed quality that are as good as or better than those provided by the MIP optimizer. The results demonstrate that the proposed heuristics are effective and perform impressively in terms of computational efficiency and solution quality.
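The greedy/genetic hybridization can be illustrated on a deliberately tiny single-item lot-sizing toy (all demands, cost values, and GA parameters below are hypothetical illustrations, not the paper's formulation): a greedy plan that produces in every period seeds a small genetic algorithm that then searches over setup patterns.

```python
import random

demand = [20, 10, 40, 15, 30, 25]       # toy demand per period
SETUP, HOLD = 50.0, 1.0                 # fixed setup cost, unit holding cost

def cost(setups):
    """Setup + holding cost of a plan; each setup period produces
    exactly enough to cover demand until the next setup period."""
    if not setups[0]:
        return float("inf")             # demand in period 0 must be covered
    total, inv = 0.0, 0
    for t, d in enumerate(demand):
        if setups[t]:
            nxt = next((k for k in range(t + 1, len(demand)) if setups[k]),
                       len(demand))
            inv += sum(demand[t:nxt])   # produce up to the next setup
            total += SETUP
        inv -= d
        total += HOLD * inv             # end-of-period holding cost
    return total

def hybrid(pop_size=20, gens=60, seed=0):
    rng = random.Random(seed)
    n = len(demand)
    greedy = [True] * n                 # greedy seed: produce every period
    pop = [greedy] + [[True] + [rng.random() < 0.5 for _ in range(n - 1)]
                      for _ in range(pop_size - 1)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]    # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]   # one-point crossover
            i = rng.randrange(1, n)     # point mutation (period 0 stays open)
            child[i] = not child[i]
            children.append(child)
        pop = elite + children
    best = min(pop, key=cost)
    return best, cost(best)

plan, c = hybrid()
```

Since the greedy seed is feasible and elitism preserves the incumbent, the GA can only match or improve on the greedy cost, mirroring the "as good or better" guarantee pursued in the paper, though here against a toy baseline rather than a MIP bound.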
Inventory management in distribution networks remains a challenging task due to the nature of demand and limited storage capacity. In this work, we study a three-level, multi-product, multi-period distribution network consisting of a central warehouse, three distribution centres and six wholesalers, each of which faces random demand. To optimise inventory management in the distribution network, we first propose horizontal cooperation between actors at the same level in the form of product exchange; we then propose a second approach based on combined vertical-horizontal cooperation. Both approaches are modelled as MIP models and solved using the CPLEX solver. The objective of this study is to analyse performance in terms of costs, stock quantities and customer satisfaction.
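The effect of horizontal cooperation through product exchange can be sketched with a tiny two-wholesaler example (all costs and quantities below are hypothetical, and this back-of-the-envelope calculation stands in for, rather than reproduces, the MIP model solved with CPLEX):

```python
def cost_no_coop(stock, demand, shortage_cost=10.0, holding_cost=1.0):
    """Each wholesaler covers its own demand from its own stock only."""
    total = 0.0
    for s, d in zip(stock, demand):
        total += shortage_cost * max(0, d - s) + holding_cost * max(0, s - d)
    return total

def cost_with_exchange(stock, demand, shortage_cost=10.0, holding_cost=1.0,
                       transfer_cost=2.0):
    """Horizontal cooperation: surplus at one wholesaler covers
    shortage at another, at a per-unit transshipment cost."""
    surplus = sum(max(0, s - d) for s, d in zip(stock, demand))
    shortage = sum(max(0, d - s) for s, d in zip(stock, demand))
    moved = min(surplus, shortage)
    return (shortage_cost * (shortage - moved)
            + holding_cost * (surplus - moved)
            + transfer_cost * moved)

stock, demand = [30, 10], [15, 25]   # wholesaler 1 overstocked, 2 short
base = cost_no_coop(stock, demand)
coop = cost_with_exchange(stock, demand)
```

As long as the transshipment cost per unit is below both the shortage and holding costs it replaces, the product exchange strictly reduces total cost, which is the intuition behind the cooperative approaches compared in the study.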
The Fourth Industrial Revolution (also called the Industrial Internet of Things) depends entirely on digitalization through the Internet of Things and virtual networks. This revolution, which is evolving at an exponential rather than a linear pace, will enable the creation of smarter factories, industries and processes, which will in turn translate into improved flexibility and productivity and better use of material and human resources.
This article introduces this new industrial revolution (Industry 4.0), the major technologies contributing to its emergence, their expected benefits, and the challenges to be taken into consideration.
Industry 4.0 is a tsunami that will sweep over the whole world. The real challenge of the factories of the future requires a high degree of reliability in both machinery and equipment. Shifting course towards new trends is therefore inevitable in this fourth industrial revolution, in which maintenance has radically changed into a new approach called predictive maintenance 4.0 (PdM 4.0). PdM 4.0 is used to avoid predicted machine problems and to extend machine lifespans, with the added benefit that machines for which no problem is predicted need not be checked at all. However, to predict problems successfully while minimizing energy and resource consumption and saving costs, PdM 4.0 relies on many new emerging technologies: an Internet of Things infrastructure, the collection and distribution of data from different smart sensors, the analysis and interpretation of huge amounts of data using machine/deep learning, etc. This paper presents Industry 4.0 and the specific technologies it uses to improve the existing predictive maintenance strategy. An example is given via a web platform to give a clear idea of how PdM 4.0 is applied in smart factories.
Modern wind turbines operate in continuously transient conditions, with speed, torque, and power varying with the stochastic nature of the wind resource. This variability affects not only the operational performance of the wind power system but can also affect its integrity under service conditions. Condition monitoring continues to play an important role in achieving reliable and economical operation of wind turbines. This paper reviews current advances in wind turbine condition monitoring, ranging from conventional condition monitoring and signal processing tools to machine-learning-based condition monitoring and the use of big data mining for predictive maintenance. A systematic review is presented of signal-based and data-driven modeling methodologies using intelligent and machine learning approaches, with a view to providing a critical evaluation of recent developments in this area and of their applications in diagnosis, prognosis, health assessment, and predictive maintenance of wind turbines and farms.
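As a minimal illustration of the conventional signal-processing end of this spectrum, the sketch below computes classic time-domain condition indicators (RMS, kurtosis, crest factor) often used in drivetrain vibration monitoring; the two signals are synthetic stand-ins, not turbine data.

```python
import math

def vibration_features(signal):
    """Classic time-domain condition indicators for a vibration signal."""
    n = len(signal)
    mean = sum(signal) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)
    var = sum((x - mean) ** 2 for x in signal) / n
    kurt = (sum((x - mean) ** 4 for x in signal) / n) / (var ** 2)
    crest = max(abs(x) for x in signal) / rms
    return {"rms": rms, "kurtosis": kurt, "crest_factor": crest}

# healthy-like sinusoid vs. the same signal with periodic impulses,
# a crude stand-in for a localized bearing or gear-tooth fault
healthy = [math.sin(0.1 * k) for k in range(1000)]
faulty = [x + (5.0 if k % 250 == 0 else 0.0)
          for k, x in enumerate(healthy)]
hf = vibration_features(healthy)
ff = vibration_features(faulty)
```

Impulsive faults raise kurtosis and crest factor well above their smooth-signal values, which is why such indicators serve as inexpensive first-line features before the machine-learning methods surveyed here are applied.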
Deep learning techniques have recently brought many improvements to neural network training, especially for prognostics and health management. The success of such an intelligent health assessment model depends not only on the availability of labeled historical data but also on careful sample selection. However, for real operating systems such as induction machines, which generally have long reliable lives, storing the entire operating history, including deterioration (e.g., of bearings), is very expensive, and such data are difficult to feed accurately into the training model. An alternative is to sequentially store samples whose degradation patterns resemble real damage behavior by imposing accelerated deterioration. The lack of labels and the distribution differences caused by the imposed deterioration ultimately bias the training model and limit its knowledge capacity. In an attempt to overcome these drawbacks, a novel sequence-by-sequence deep learning algorithm is proposed that expands generalization capacity by transferring knowledge obtained from the life cycles of similar systems. The new algorithm determines health status by using a long short-term memory neural network as the primary component of adaptive learning, extracting both health-stage and health-index inferences. Experimental validation performed on the PRONOSTIA induction machine bearing degradation datasets clearly demonstrates the capacity and superior performance of the proposed deep learning, knowledge transfer-based prognosis approach.
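As background on the long short-term memory component, the sketch below runs a scalar, untrained toy LSTM cell over a rising "degradation" feature; the weights and input sequence are hypothetical illustrations of the gating mechanism, not the proposed algorithm or the PRONOSTIA pipeline.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step with scalar input and hidden state: the input,
    forget and output gates decide what the cell memory keeps."""
    i = sigmoid(W["i"] * x + U["i"] * h_prev + b["i"])    # input gate
    f = sigmoid(W["f"] * x + U["f"] * h_prev + b["f"])    # forget gate
    o = sigmoid(W["o"] * x + U["o"] * h_prev + b["o"])    # output gate
    g = math.tanh(W["g"] * x + U["g"] * h_prev + b["g"])  # candidate memory
    c = f * c_prev + i * g
    h = o * math.tanh(c)
    return h, c

# toy weights (hypothetical, untrained) and a monotonically rising input
W = {k: 0.5 for k in "ifog"}
U = {k: 0.1 for k in "ifog"}
b = {k: 0.0 for k in "ifog"}
h, c = 0.0, 0.0
health_index = []
for x in [0.1 * t for t in range(10)]:   # degrading condition feature
    h, c = lstm_step(x, h, c, W, U, b)
    health_index.append(h)
```

The persistent cell state c is what lets the network accumulate degradation evidence across the whole sequence, which is why LSTMs suit health-stage and health-index inference better than memoryless models.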
Digital twins (DTs) have transformed the industrial world by changing how a product is developed and how equipment is used. With a digital twin, data on an object's evolution allow us to anticipate and optimize its performance. Healthcare is in the midst of a digital transition towards personalized, predictive, preventive, and participatory medicine, and the digital twin is one of the key tools of this change. In this work, a DT is proposed for the diagnosis of breast cancer based on breast skin temperature. Research has focused on thermography as a non-invasive scanning solution for breast cancer diagnosis. However, body temperature is influenced by many factors, such as breast anatomy, physiological functions, blood pressure, etc. The proposed DT updates the bio-heat model's temperature using the data collected by temperature sensors and complementary data from smart devices. Consequently, the proposed DT is personalized using the collected data to reflect the behavior of the person to whom it is connected.
Since bearing deterioration patterns are difficult to collect from real, long-lifetime scenarios, data-driven research has turned to recovering them through accelerated life tests. Consequently, features insufficiently recovered due to rapid damage propagation are likely to yield poorly generalized learning machines. Knowledge-driven learning offers a solution by providing prior assumptions through transfer learning. Likewise, the absence of true labels creates inconsistency problems between samples, and teacher-given labeling behavior leads to ill-posed predictors. Therefore, in an attempt to overcome the drawbacks of incomplete, unlabeled data, a new autoencoder has been designed as an additional source that correlates inputs and labels by exploiting label information in a completely unsupervised learning scheme. Its stacked denoising version, moreover, recovers representations more robustly for new unseen data. Given the non-stationary and sequential nature of the samples, the recovered representations are fed into a transfer learning convolutional long short-term memory neural network for further meaningful representation learning. The assessment procedures were benchmarked against recent methods on different training datasets, and the results obtained confirm the improved efficiency and strength of the new learning path.
Optical burst switching (OBS) has become one of the best and most widely used optical networking techniques. It offers more efficient bandwidth usage than optical packet switching (OPS) and optical circuit switching (OCS). However, it suffers more attacks than other techniques, and the classical security approach cannot solve its security problem. Therefore, a new security approach based on machine learning and cloud computing is proposed in this article. We used the Google Colab platform to apply Support Vector Machine (SVM) and Extreme Learning Machine (ELM) classifiers to the Burst Header Packet (BHP) flooding attack on an Optical Burst Switching (OBS) Network dataset.
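For illustration, a linear SVM of the kind applied to the BHP flooding dataset can be trained with Pegasos-style stochastic subgradient descent. The two-feature synthetic data below (a packet-rate and a drop-ratio stand-in) is entirely hypothetical, and this sketch is not the Google Colab pipeline or the real dataset used in the article.

```python
import random

def train_linear_svm(data, labels, lam=0.01, epochs=200, seed=0):
    """Pegasos-style stochastic subgradient descent for a linear SVM
    with hinge loss; labels are +1 / -1."""
    rng = random.Random(seed)
    w = [0.0] * len(data[0])
    b = 0.0
    t = 0
    for _ in range(epochs):
        idx = list(range(len(data)))
        rng.shuffle(idx)
        for i in idx:
            t += 1
            eta = 1.0 / (lam * t)
            margin = labels[i] * (sum(wj * xj
                                      for wj, xj in zip(w, data[i])) + b)
            w = [(1 - eta * lam) * wj for wj in w]   # regularization shrink
            if margin < 1:                           # hinge-loss violation
                w = [wj + eta * labels[i] * xj
                     for wj, xj in zip(w, data[i])]
                b += eta * labels[i]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# synthetic stand-in: flooding traffic (+1) shows high packet rates and
# drop ratios, normal traffic (-1) does not
flood = [[8.0 + 0.2 * k, 0.7 + 0.01 * k] for k in range(10)]
normal = [[1.0 + 0.2 * k, 0.05 + 0.01 * k] for k in range(10)]
X = flood + normal
y = [1] * 10 + [-1] * 10
w, b = train_linear_svm(X, y)
accuracy = sum(predict(w, b, x) == yi for x, yi in zip(X, y)) / len(X)
```

On well-separated traffic classes like this toy set, the learned hyperplane cleanly splits flooding from normal bursts; the real dataset naturally demands richer features and proper train/test splitting.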