Surveillance is the function of observing human or environmental activities in order to supervise, control, or even react to a particular event; this is what is known as supervision or monitoring. Radio-frequency identification, known by the abbreviation RFID, is one of the technologies used to retrieve data remotely, store it, and even process it. It is a current technology and one of the Industry 4.0 technologies being integrated into many areas of everyday life, notably surveillance and access control. The objective of this article is to show how to protect critical industrial zones and monitor them in real time against any unauthorized access by any person (employees, visitors, etc.) using RFID technology, through simulation examples carried out with a simulator dedicated to sensor networks.
In this study, we investigate a production planning problem in a hybrid manufacturing-remanufacturing production system. The objective is to determine the best mix between the manufacturing of new products and the remanufacturing of recovered products, based on economic and environmental considerations. The problem consists in determining the manufacturing and remanufacturing plans that minimize both the total economic cost (start-up and production costs of new and remanufactured products, storage costs of new and returned products, and disposal costs) and the carbon emissions (from new, remanufactured and disposed products). The hybrid system consists of a set of machines used to produce new products and remanufactured products of different grades (qualities). We assume that remanufacturing is more environmentally efficient because it reduces the disposal of used products. A multi-objective mathematical model is developed, and an approach based on the non-dominated sorting genetic algorithm (NSGA-II) is proposed. Numerical experiments are presented to study the impact of the carbon emissions generated by new, remanufactured and disposed products over a production horizon of several periods.
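As a rough illustration of the kind of bi-objective structure described above, a minimal sketch under assumed notation (all symbols below are hypothetical, not the paper's):

```latex
% Sketch of the bi-objective structure (all symbols assumed):
\min Z_1 = \sum_{t=1}^{T} \left( s_t y_t + c^{m}_{t} x^{m}_{t} + c^{r}_{t} x^{r}_{t}
         + h^{n}_{t} I^{n}_{t} + h^{u}_{t} I^{u}_{t} + c^{d}_{t} w_t \right)
         \quad \text{(total economic cost)}
\min Z_2 = \sum_{t=1}^{T} \left( e^{m} x^{m}_{t} + e^{r} x^{r}_{t} + e^{d} w_t \right)
         \quad \text{(carbon emissions)}
```

where $x^{m}_{t}$ and $x^{r}_{t}$ would be the quantities manufactured and remanufactured in period $t$, $y_t$ a start-up indicator, $I^{n}_{t}$ and $I^{u}_{t}$ the inventories of new and returned products, and $w_t$ the disposed quantity; NSGA-II then searches for the Pareto front between $Z_1$ and $Z_2$.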
Purpose
The purpose of this paper is to develop a model for sustainable manufacturing by adopting a combined approach using the AHP, fuzzy TOPSIS and fuzzy EDAS methods. The proposed model aims to identify and prioritize the sustainable factors and technical requirements that help improve the sustainability of manufacturing processes.
Design/methodology/approach
The proposed approach integrates AHP, fuzzy EDAS and fuzzy TOPSIS. The AHP method is used to generate the weights of the sustainable factors. Fuzzy EDAS and fuzzy TOPSIS are then applied to rank a set of improvement approaches and determine their application priority. The rankings produced by the two MCDM approaches are compared by computing Spearman's rank correlation coefficient.
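As a minimal sketch of this comparison step, Spearman's rank correlation between two method rankings can be computed with SciPy; the rank vectors below are illustrative placeholders, not the study's data:

```python
# Minimal sketch: comparing two MCDM rankings with Spearman's rank
# correlation. The rank vectors are illustrative, not the paper's data.
from scipy.stats import spearmanr

# Ranks assigned to the same five alternatives by two methods
ranks_fuzzy_topsis = [1, 2, 3, 4, 5]
ranks_fuzzy_edas   = [1, 3, 2, 4, 5]

rho, p_value = spearmanr(ranks_fuzzy_topsis, ranks_fuzzy_edas)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
# rho close to 1 indicates the two methods rank the alternatives similarly.
```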
Findings
The results reveal that the proposed model is efficient in prioritizing sustainable factors and technical requirements. In addition, the results of this study indicate the high efficiency of AHP, fuzzy EDAS and fuzzy TOPSIS in decision making. The results also indicate that the model provides a usable methodology for managerial staff to select the desirable sustainable factors and technical requirements for sustainable manufacturing.
Research limitations/implications
The main limitation of this paper is that the proposed approach investigates only a moderate number of factors and technical requirements.
Originality/value
This paper investigates an integrated MCDM approach for prioritizing sustainable factors and technical requirements. In addition, the presented work points out that the combined AHP, fuzzy EDAS and fuzzy TOPSIS approach can handle several conflicting attributes in a sustainable manufacturing context.
The purpose of this paper is to propose an improvement strategy based on multi-criteria decision-making approaches, including the fuzzy analytic hierarchy process (AHP), the preference ranking organisation method for enrichment evaluation II (PROMETHEE II) and višekriterijumsko kompromisno rangiranje (VIKOR), with the objective of simplifying and organising the improvement process in complex manufacturing processes. First, the proposed strategy starts with the selection of decision makers, such as company leaders, to determine performance indicators. Then fuzzy AHP is used to quantify the weight of each defined indicator. Finally, the weights obtained from the fuzzy AHP approach are used as input to VIKOR and PROMETHEE II to rank the operations according to their improvement priority. The results obtained from each outranking method are compared and the best method is determined.
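For the outranking step, a minimal VIKOR sketch in Python, assuming an illustrative decision matrix and fuzzy-AHP weights (none of these numbers are from the paper):

```python
# Minimal sketch of the VIKOR ranking step: criterion weights (e.g. from
# fuzzy AHP) applied to a decision matrix to rank alternatives.
import numpy as np

X = np.array([[7.0, 0.30, 8.0],      # rows: operations, cols: indicators
              [6.5, 0.45, 6.0],
              [8.0, 0.25, 7.5]])
weights = np.array([0.5, 0.3, 0.2])       # e.g. output of fuzzy AHP
benefit = np.array([True, False, True])   # maximize cols 0, 2; minimize col 1

f_star = np.where(benefit, X.max(axis=0), X.min(axis=0))   # best values
f_minus = np.where(benefit, X.min(axis=0), X.max(axis=0))  # worst values

d = weights * (f_star - X) / (f_star - f_minus)  # weighted normalized regret
S, R = d.sum(axis=1), d.max(axis=1)

v = 0.5  # weight of the "group utility" strategy
Q = v * (S - S.min()) / (S.max() - S.min()) + \
    (1 - v) * (R - R.min()) / (R.max() - R.min())
print("Improvement priority (lower Q first):", np.argsort(Q))
```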
Logistics is one of the main levers that countries and businesses are improving in order to increase profits. Another prominent theme in today's logistics is emerging technologies. A central question for today's logistics and industry is how to profit from the collected and accessible data by using it in various processes, such as decision making, production planning and logistics delivery scheduling, in particular through deep learning methods. The aim of this paper is to identify the various applications of deep learning in logistics through a systematic literature review. A set of research questions has been identified to be answered by this article.
Strengthening production plants and process-control functions contributes to a global improvement of manufacturing systems because of their cross-functional role in industry. Companies have established various innovative and operational strategies, which increases competitiveness among them and raises their value. Since the emergence of Industry 4.0 and the extensive integration of paradigms such as big data and high computational power, machine learning (ML) techniques have become an attractive option for addressing industrial issues in the current manufacturing sector. Implementing a system able to identify faults early, in order to avoid critical situations on the production line and in its environment, is crucial. Therefore, powerful machine learning algorithms are applied for fault diagnosis, real-time data classification, and prediction of the operating state of the production line. Random forest proved to be the best classifier, with an accuracy of 97%, compared to 94.18% for the SVM model, 93.83% for the K-NN model, 83.73% for the decision tree model, and 80.25% for the logistic regression model. The excellent experimental results obtained with the random forest model demonstrate the merits of this implementation for production performance, ensuring predictive maintenance and avoiding wasted energy.
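A minimal sketch of such a classifier comparison with scikit-learn, using a synthetic dataset in place of the production-line sensor data (the reported accuracies will of course differ):

```python
# Minimal sketch of the classifier comparison described above. The
# synthetic dataset stands in for the production-line data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(),
    "K-NN": KNeighborsClassifier(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Logistic regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    accuracy = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: {accuracy:.2%}")
```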
The smart grid is an emerging system providing many benefits in digitizing traditional power distribution systems. However, the digitization and the use of Internet of Things (IoT) technologies in smart grids also pose threats to their reliable, continuous operation due to cyberattacks. Cyber-physical smart grid systems must be secured against increasing security threats and attacks. The most widely studied attacks in smart grids are false data injection attacks (FDIA), denial of service and distributed denial of service (DDoS) attacks, and spoofing attacks. These cyberattacks can jeopardize the smooth operation of a smart grid and result in considerable economic losses, equipment damage, and malicious control. This paper provides an extensive survey of defense mechanisms that can be used to detect these types of cyberattacks and mitigate the associated risks. Future research directions for the efficient detection and prevention of such cyberattacks are also provided.
Many cloud providers offer very high-precision optical character recognition (OCR) services. However, no provider offers Tifinagh OCR as a web service. Several works have been proposed to build powerful Tifinagh OCR systems; unfortunately, none has been developed as a web service. In this paper, we present a new architecture for Tifinagh handwriting recognition as a web service based on a deep learning model via Google Colab. For the implementation of our proposal, we used the new version of the TensorFlow library and a very large database of Tifinagh characters composed of 60,000 images from Mohammed V University in Rabat. Experimental results show that the TensorFlow library running on a tensor processing unit constitutes a very promising framework for developing fast and very precise Tifinagh OCR web services. The results also show that our method based on a convolutional neural network outperforms existing methods based on support vector machines and extreme learning machines.
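A minimal sketch of a convolutional classifier of the kind described, in TensorFlow/Keras; the architecture, the 32x32 grayscale input and the 33-letter class count are illustrative assumptions, not the paper's exact model:

```python
# Minimal sketch of a CNN character classifier in TensorFlow/Keras.
# Input size and class count are assumptions for illustration.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 1)),        # 32x32 grayscale glyphs
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(33, activation="softmax"),  # assumed 33 Tifinagh letters
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)  # with the 60,000-image set
```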
Fuel cell technology has developed rapidly in the last decade owing to its clean characteristics and high efficiency. Proton exchange membrane fuel cells (PEMFCs) are increasingly used in transportation and small stationary applications; however, the cost and the unsatisfactory durability of the PEMFC stack have limited their successful commercialization and market penetration. In recent years, thanks to the availability and quality of emerging PEMFC data, digitization is offering possibilities to increase the productivity and flexibility of fuel cell applications. It is therefore crucial to clarify the potential of digitization measures, how and where they can be applied, and their benefits. This paper focuses on the degradation performance of PEMFC stacks and develops a data-driven intelligent method to predict both short-term and long-term degradation. The dilated convolutional neural network is applied for the first time to predicting time-dependent fuel cell performance and is shown to be more efficient than recurrent networks. To deal with the long-term performance uncertainty, a conditional neural network is proposed. Results show that the proposed method can predict not only the degradation tendency but also the dynamics of the degradation behaviour.
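A minimal sketch of a dilated 1-D convolutional network for this kind of time-series degradation prediction; the window length, channel counts and dilation rates are assumptions, not the paper's configuration:

```python
# Minimal sketch of a dilated 1-D CNN for time-series prediction, in the
# spirit of the approach described above. All hyperparameters are assumed.
import tensorflow as tf

window, n_features = 64, 4  # past observations and monitored signals (assumed)
model = tf.keras.Sequential([tf.keras.layers.Input(shape=(window, n_features))])
for rate in (1, 2, 4, 8):   # exponentially growing receptive field
    model.add(tf.keras.layers.Conv1D(32, kernel_size=3, dilation_rate=rate,
                                     padding="causal", activation="relu"))
model.add(tf.keras.layers.GlobalAveragePooling1D())
model.add(tf.keras.layers.Dense(1))  # e.g. next-step stack voltage estimate
model.compile(optimizer="adam", loss="mse")
```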
Many machine learning-based methods have been widely applied to coronary artery disease (CAD) diagnosis and achieve high accuracy. However, they are black-box methods that are unable to explain the reasons behind a diagnosis. The trade-off between the accuracy and the interpretability of diagnosis models is important, especially for human disease. This work proposes an approach for generating rule-based models for CAD diagnosis. Classification rule generation is modelled as a combinatorial optimization problem that can be solved by means of metaheuristic algorithms. Swarm intelligence algorithms such as the equilibrium optimizer algorithm (EOA) have demonstrated great performance in solving various optimization problems. The present study introduces a novel discrete equilibrium optimizer algorithm (NDEOA) for generating classification rules from a training CAD dataset. The proposed NDEOA is a discrete version of EOA that uses a discrete encoding of a particle to represent a classification rule; new discrete operators are also defined for the particle's position-update equation to adapt the real-valued operators to the discrete space. To evaluate the proposed approach, the real-world Z-Alizadeh Sani dataset was employed. The proposed approach generates a diagnosis model composed of 17 rules: five rules for the class "Normal" and 12 for the class "CAD". In comparison with nine black-box and eight white-box state-of-the-art approaches, the results show that the diagnosis model generated by the proposed approach is more accurate and more interpretable than all the white-box models and is competitive with the black-box models. It achieved an overall accuracy, sensitivity and specificity of 93.54%, 80% and 100%, respectively, which shows that the proposed approach can be successfully utilized to generate efficient rule-based CAD diagnosis models.
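As a hedged illustration of what a discrete rule encoding can look like (the attribute names, operators and layout below are hypothetical, not the NDEOA encoding itself):

```python
# Minimal sketch of encoding a classification rule as a discrete particle.
# Attribute names, thresholds and encoding layout are illustrative only.
import numpy as np

ATTRS = ["age", "BMI", "BP", "typical_chest_pain"]  # hypothetical features
# One gene per attribute: 0 = attribute unused, 1 = "<= threshold",
# 2 = "> threshold"; a parallel vector holds the threshold values.
operators = np.array([2, 0, 1, 2])
thresholds = np.array([55, 0, 130, 0])

def decode(operators, thresholds, label="CAD"):
    """Turn a discrete particle into a readable IF-THEN rule."""
    clauses = [f"{a} {'<=' if op == 1 else '>'} {t}"
               for a, op, t in zip(ATTRS, operators, thresholds) if op != 0]
    return "IF " + " AND ".join(clauses) + f" THEN class = {label}"

print(decode(operators, thresholds))
# IF age > 55 AND BP <= 130 AND typical_chest_pain > 0 THEN class = CAD
```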
The grasshopper optimization algorithm is one of the recent population-based optimization techniques, inspired by the behaviour of grasshoppers in nature. It is an efficient optimization algorithm that demonstrates excellent performance in solving continuous problems, but it cannot directly solve binary optimization problems. Many optimization problems are modelled as binary problems because their decision variables vary in a binary space, as in feature selection for data classification. The main goal of feature selection is to find a small subset of features from a sizeable original set that optimizes the classification accuracy. In this paper, a new binary variant of the grasshopper optimization algorithm is proposed and applied to the feature subset selection problem. The proposed binary grasshopper optimization algorithm is tested and compared to five well-known swarm-based algorithms used for feature selection. All these algorithms are implemented and experimentally assessed on twenty datasets of various sizes. The results demonstrate that the proposed approach outperforms the other tested methods.
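A common way to obtain such a binary variant is to pass each continuous position component through a transfer function; a minimal sketch of that binarization step, not necessarily the paper's exact scheme:

```python
# Minimal sketch: mapping a continuous swarm position to a binary feature
# mask with a sigmoid transfer function. This is a common binarization
# scheme for swarm algorithms, not necessarily the paper's variant.
import numpy as np

rng = np.random.default_rng(0)

def binarize(position):
    """Map a real-valued position vector to a 0/1 feature mask."""
    prob = 1.0 / (1.0 + np.exp(-position))     # sigmoid transfer function
    return (rng.random(position.shape) < prob).astype(int)

continuous_position = rng.normal(size=10)      # one grasshopper, 10 features
mask = binarize(continuous_position)
print(mask)  # 1 = feature kept, 0 = feature dropped
```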