Solar energy is a vast and clean resource that can be harnessed with great benefit for humankind. It remains difficult, however, to convert it into electricity in an efficient and cost-effective way. One way to produce power is to use focusing technologies that concentrate the direct normal irradiance (DNI) onto highly efficient modules or conventional turbines. Concentrating technologies have great potential in arid areas, such as Northern Africa. A serious issue is that DNI can vary rapidly under broken-cloud conditions, which complicates its forecasting [1]. In comparison, the global horizontal irradiance (GHI) is much less sensitive to cloudiness. As an alternative to forecasting DNI directly, it is possible to derive the future DNI indirectly by forecasting GHI first and then applying a conventional separation model. In this context, the present study compares four of the best-known separation models in the literature and evaluates their performance at Tamanrasset, Algeria, when used in combination with a new deep learning methodology introduced here to forecast GHI time series over short-term horizons (15 min). The proposed forecast system is composed of two separate blocks. The first block forecasts the future value of GHI from historical time series using the Long Short-Term Memory (LSTM) technique with two different search algorithms. In the second block, an appropriate separation (also referred to as “diffuse fraction” or “splitting”) model is applied to extract the direct component of GHI. LSTMs constitute a category of recurrent neural network (RNN) that exhibits excellent learning and predicting ability on time-series data [2]. The present study uses and evaluates the performance of two novel and competitive strategies, both of which aim at providing accurate short-term GHI forecasts: unidirectional LSTM (UniLSTM) and bidirectional LSTM (BiLSTM). In the former case, the signal propagates only forward in time, whereas in the latter case the learning algorithm is fed with the GHI data twice, once from beginning to end and once from end to beginning. One goal of this study is to evaluate the overall advantages and performance of each strategy. Hence, this study aims to validate this new approach of obtaining 15-min DNI forecasts indirectly, using the most appropriate separation model. An important step here is to determine which model is suitable for the arid climate of Tamanrasset, a high-elevation site in southern Algeria where dust storms are frequent. Accordingly, four representative models have been selected, based on their validation results [3] and popularity: 1) the Erbs model [4]; 2) Maxwell’s DISC model [5]; 3) Perez’s DIRINT model [6]; and 4) the Engerer2 model [7]. In this contribution, 1-min direct, diffuse and global solar irradiance measurements from the BSRN station of Tamanrasset are first quality-controlled with usual procedures [3, 8] and combined into 15-min sequences over the period 2013–2017. The four separation models are driven by the 15-min GHI forecasts obtained with each LSTM model and then compared to the 15-min measured DNI sequences. Table 1 shows the results obtained by the two forecasting strategies for the experimental dataset.
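To fix ideas on the separation step, the following minimal sketch (not the authors' code) applies the classic Erbs et al. correlation to split a forecast GHI value into its diffuse and direct components; the function names, the toy inputs, and the availability of the horizontal extraterrestrial irradiance and solar zenith angle are assumptions for illustration.

```python
# Illustrative sketch: deriving DNI from a forecast GHI value with the
# Erbs et al. (1982) separation model. Inputs are assumed to be in W/m^2.
import math

def erbs_diffuse_fraction(kt: float) -> float:
    """Diffuse fraction kd as a function of the clearness index kt (Erbs)."""
    if kt <= 0.22:
        return 1.0 - 0.09 * kt
    if kt <= 0.80:
        return (0.9511 - 0.1604 * kt + 4.388 * kt**2
                - 16.638 * kt**3 + 12.336 * kt**4)
    return 0.165

def dni_from_ghi(ghi: float, extra_horizontal: float, zenith_deg: float) -> float:
    """Split a forecast GHI into its direct normal component (DNI)."""
    kt = ghi / extra_horizontal              # clearness index
    dhi = erbs_diffuse_fraction(kt) * ghi    # diffuse horizontal irradiance
    return (ghi - dhi) / math.cos(math.radians(zenith_deg))

# Example: a 15-min forecast GHI of 650 W/m^2 under a 40-degree solar zenith
print(dni_from_ghi(650.0, 1000.0, 40.0))
```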
Nowadays, solar energy, the direct conversion of light into electricity, occupies a very important place among renewable energy resources due to its daily availability in most regions of the globe. The wise exploitation of this clean energy will ultimately help cover the growing demand [1, 2]. This paper deals with the design of a Maximum Power Point Tracking (MPPT) technique for a photovoltaic (PV) system using a modified incremental conductance (IncCond) algorithm to extract maximum power from the PV module. The considered PV system consists of a PV module, a DC-DC converter and a resistive load. It is known from the literature that conventional MPPT algorithms suffer from serious disadvantages such as oscillations around the MPP and slow tracking during rapid changes in atmospheric conditions. Therefore, in an attempt to overcome the shortcomings of the conventional approach, this paper proposes a new modified incremental conductance algorithm to track the maximum power point of the photovoltaic system. Simulation tests under different atmospheric conditions are provided to demonstrate the validity and effectiveness of the proposed algorithm.
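For orientation, here is a minimal sketch of the *conventional* IncCond step that the paper sets out to improve (the modified variant itself is not reproduced); `duty` and `step` are hypothetical controller variables, and the sign convention assumes a boost converter, where increasing the duty cycle lowers the PV-side voltage.

```python
# Conventional incremental-conductance MPPT step: at the MPP, dP/dV = 0,
# which is equivalent to dI/dV = -I/V. Assumes v > 0 (panel operating).
def inccond_step(v, i, v_prev, i_prev, duty, step=0.005):
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di > 0:               # irradiance rose at constant voltage
            duty -= step         # raise PV voltage toward the new MPP
        elif di < 0:
            duty += step
    else:
        dg = di / dv             # incremental conductance dI/dV
        if dg > -i / v:          # left of the MPP: raise voltage
            duty -= step
        elif dg < -i / v:        # right of the MPP: lower voltage
            duty += step
        # at the MPP, dI/dV == -I/V and the duty cycle is left unchanged
    return min(max(duty, 0.0), 1.0)
```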
This paper describes a new approach for hourly global solar radiation forecasting based on a hybrid artificial neural network technique combining a residual neural network (RESNET), for powerful feature extraction of the most relevant moments of the past, with a long short-term memory (LSTM) technique, for efficient projection into the future. Based on 11 years of solar irradiance measurements at Tamanrasset, Algeria, four evaluation metrics are used to demonstrate the efficiency of the proposed method: coefficient of determination (R²), root-mean-square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). These metrics are also used to compare the model with two existing forecasting models used as benchmarks: a particular type of convolutional neural network (CNN), the one-dimensional CNN (1D-CNN), and a conventional LSTM. The present results indicate that the proposed RESNET-LSTM model outperforms the other models on all statistical indicators.
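A hedged Keras sketch of such a hybrid is given below: residual 1D-convolutional blocks extract features from the past window, and an LSTM projects them forward. The layer sizes and the 24-step hourly input window are assumptions, not the paper's actual architecture.

```python
# Sketch of a ResNet-style feature extractor feeding an LSTM forecaster.
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=64):
    y = layers.Conv1D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv1D(filters, 3, padding="same")(y)
    if x.shape[-1] != filters:                 # match channel dimensions
        x = layers.Conv1D(filters, 1, padding="same")(x)
    return layers.Activation("relu")(layers.Add()([x, y]))

inputs = layers.Input(shape=(24, 1))           # 24 past hourly irradiance values
x = residual_block(inputs)
x = residual_block(x)
x = layers.LSTM(64)(x)                         # project features into the future
outputs = layers.Dense(1)(x)                   # next-hour irradiance
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
```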
This paper proposes an optimal design of a diesel/PV/wind/battery hybrid renewable energy system (HRES) for rural electrification in a remote district of Tamanrasset, Algeria. A particle swarm optimization (PSO) algorithm is proposed to solve a multi-objective optimization problem that simultaneously minimizes the cost of energy (COE) while maximizing the reliability of the power supply, described by the loss of power supply probability (LPSP), and the renewable fraction (RF). The simulation results show that the PV/WT/DG/BT configuration is the most economical, with a reasonable annual cost of the optimal system (ACS) of about 7798.71 $ and a COE of 0.79 $/kWh for an LPSP of 0.01%, with renewable sources covering a fraction of 0.99 of the ten households' demand.
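The following toy PSO loop illustrates the optimization scheme only; `cost_of_energy()` and `lpsp()` are placeholder surrogates for the paper's techno-economic simulation, and the particle counts, bounds, and penalty weight are all assumptions.

```python
# Toy PSO for HRES sizing: minimize COE with an LPSP-constraint penalty.
import numpy as np

def cost_of_energy(n_pv, n_wt, n_bat):
    # placeholder surrogate: a real model would run an hourly energy balance
    return 0.3 * n_pv + 0.8 * n_wt + 0.2 * n_bat

def lpsp(n_pv, n_wt, n_bat):
    # placeholder: supply deficit shrinks as installed capacity grows
    return 1.0 / (1.0 + n_pv + n_wt + n_bat)

def fitness(x):
    n_pv, n_wt, n_bat = x
    # penalize designs whose LPSP exceeds the 0.01% target
    return cost_of_energy(n_pv, n_wt, n_bat) \
        + 1e3 * max(0.0, lpsp(n_pv, n_wt, n_bat) - 1e-4)

rng = np.random.default_rng(0)
pos = rng.uniform(0, 50, size=(30, 3))         # 30 particles, 3 design variables
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()]

for _ in range(200):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 50)
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()]
print("best design (n_pv, n_wt, n_bat):", gbest)
```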
Requirements of users in developing countries differ from those of developed countries. This difference can be seen in wheelchair mobility within infrastructures that do not meet international standards. Developing countries are nevertheless obliged to purchase products from developed countries that do not necessarily meet all users' requirements, and modifying these requirements generates disruption across the whole supply chain. This paper proposes a model for optimising the cost of requirement modification along the supply chain and seeks to evaluate the introduction of a new requirement into an existing product/process. The model is adapted to the redesign and development of products, such as wheelchairs, satisfying specific Algerian end-user requirements.
A comparative study of a set of chosen machine learning tools for direct remaining useful life (RUL) prediction is presented in this work. The main objective is to select the most appropriate prediction tool for health estimation of aircraft engines for future use. The training algorithms are evaluated using time-varying data retrieved from the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) developed by NASA. The training and testing processes of each algorithm are carried out under the same circumstances, using the same initial conditions and evaluation sets. The results show that, among the studied training tools, the Support Vector Machine (SVM) achieved the best results.
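As a hedged illustration of direct RUL regression with an SVM, the sketch below uses randomly generated stand-ins for C-MAPSS sensor snapshots and remaining-cycle targets; the kernel, hyperparameters, and the 14-channel feature layout are assumptions.

```python
# Minimal SVM-based RUL regression pipeline (illustrative data only).
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 14))        # 14 sensor channels per snapshot
y = rng.uniform(0, 300, size=500)     # remaining useful life, in cycles

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=5.0))
model.fit(X_tr, y_tr)
print("RUL estimate for first test engine:", model.predict(X_te[:1])[0])
```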
Condition monitoring of photovoltaic systems plays an important role in maintenance interventions due to its ability to prevent losses of energy production revenue. Nowadays, machine learning-based failure diagnosis is growing rapidly as an alternative to difficult physics-based interpretations and has become a main building block of condition monitoring. As a result, several methods with different learning paradigms (e.g. deep learning, transfer learning, reinforcement learning, ensemble learning, etc.) have been used to address different condition monitoring issues. The aim of this paper is therefore to shed light on the most relevant work that has been done so far in the field of machine learning-based condition monitoring of photovoltaic systems.
One of the main data-driven challenges when assessing bearing health is that training and test samples must be drawn from the same probability distribution. In practice, this is rarely the case, because the working conditions of rotating machines change constantly. In addition, collecting sufficient deterioration samples over the bearing life cycle is not possible due to the huge memory requirements and processing costs, so accelerated life tests are considered the primary alternative. Unfortunately, the samples recorded in this way tend to lack real degradation patterns. Therefore, in this paper, a transfer learning approach is applied to solve this kind of problem, where the PRONOSTIA dataset is used to assess the proposed procedures.
Just as living beings cannot survive without the natural resources available on Earth, technology is no exception: it cannot develop without the inspiration provided by that same nature.
The field of biology has contributed extensively to computing through the "code of life", DNA (deoxyribonucleic acid), since Adleman pioneered its use for computation late in the past century. This combination gave birth to DNA computing, a very interesting new branch of biochemistry that works in a massively parallel fashion, with high energy efficiency, and requires almost no space.
The field of molecular computing is still young, and as it progresses from concepts to engineering, researchers will have to address many important issues.
By encoding data into DNA strands, many NP-complete problems have been solved and many new efficient techniques have been proposed in the field of cryptography.
The aim of this paper is to give an overview of bio-inspired systems and to summarize the great role of the DNA molecule in serving the technology field.
In this paper, we address the integration of a two-level supply chain with multiple items. This two-level production-distribution system features a capacitated production facility supplying several retailers located in the same region. When production occurs, it incurs a fixed setup cost and unit production costs. In addition, deliveries are made from the plant to the retailers by a limited number of capacitated vehicles, incurring routing costs. This work aims to implement a minimization approach that reduces the total costs at both the production facility and the retailers. The adopted methodology, a hybrid heuristic combining a greedy algorithm and a genetic algorithm, uses a strong formulation to provide a solution of guaranteed quality that is as good as or better than those provided by the MIP optimizer. The results demonstrate that the proposed heuristics are effective and perform impressively in terms of computational efficiency and solution quality.
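The skeleton below is offered only to fix ideas about such a greedy-seeded genetic algorithm: a greedy plan seeds the population, then selection, crossover, and mutation refine it. The single-item setup/holding encoding is a deliberate simplification; capacities, routing, and the paper's strong formulation are omitted.

```python
# Greedy-seeded GA for a toy lot-sizing plan (illustrative encoding only).
import random

PERIODS, SETUP, HOLD, DEMAND = 6, 100.0, 2.0, [40, 10, 60, 30, 20, 50]

def plan_cost(setups):
    stock, cost = 0.0, 0.0
    for t in range(PERIODS):
        if setups[t]:                 # produce demand up to the next setup
            horizon = next((u for u in range(t + 1, PERIODS) if setups[u]),
                           PERIODS)
            stock += sum(DEMAND[t:horizon])
            cost += SETUP
        stock -= DEMAND[t]
        if stock < 0:
            return float("inf")       # all demand must be met
        cost += HOLD * stock
    return cost

greedy = [1] + [0] * (PERIODS - 1)    # greedy seed: one big initial lot
pop = [greedy] + [[random.randint(0, 1) for _ in range(PERIODS)]
                  for _ in range(29)]
for _ in range(100):                  # GA loop: select, cross, mutate
    pop.sort(key=plan_cost)
    parents, children = pop[:10], []
    for _ in range(20):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, PERIODS)
        child = a[:cut] + b[cut:]     # one-point crossover
        if random.random() < 0.2:
            child[random.randrange(PERIODS)] ^= 1   # bit-flip mutation
        children.append(child)
    pop = parents + children
print("best cost:", plan_cost(min(pop, key=plan_cost)))
```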
Inventory management in distribution networks remains a challenging task due to the nature of demand and limited storage capacity. In this work, we study a three-level, multi-product, multi-period distribution network consisting of a central warehouse, three distribution centres and six wholesalers, each facing random demand. To optimise inventory management in this network, we first propose horizontal cooperation between actors of the same level in the form of product exchanges; we then propose a second approach based on combined vertical-horizontal cooperation. Both approaches are formulated as MIP models and solved with the CPLEX solver. The objective of this study is to analyse performance in terms of costs, stock quantities and customer satisfaction.
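A deliberately tiny MIP in the spirit of such a model is sketched below with PuLP (the study uses CPLEX; PuLP can also hand the model to CPLEX via its `CPLEX_CMD` solver). Two products, two periods, one warehouse serving one distribution centre; all data are made up.

```python
# Minimal shipping/inventory MIP: flow conservation plus a per-period cap.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpStatus

products, periods = ["p1", "p2"], [0, 1]
demand = {("p1", 0): 30, ("p1", 1): 50, ("p2", 0): 20, ("p2", 1): 40}
hold, ship, capacity = 1.5, 4.0, 80

m = LpProblem("distribution_inventory", LpMinimize)
x = LpVariable.dicts("ship", (products, periods), lowBound=0)    # shipped qty
s = LpVariable.dicts("stock", (products, periods), lowBound=0)   # end stock

m += lpSum(ship * x[p][t] + hold * s[p][t]
           for p in products for t in periods)                   # total cost
for p in products:
    for t in periods:
        prev = s[p][t - 1] if t > 0 else 0
        m += prev + x[p][t] == demand[p, t] + s[p][t]  # flow conservation
for t in periods:
    m += lpSum(x[p][t] for p in products) <= capacity  # shipping capacity

m.solve()
print(LpStatus[m.status],
      {(p, t): x[p][t].value() for p in products for t in periods})
```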
The fourth industrial revolution (also called the Industrial Internet of Things) depends entirely on digitization through the Internet of Things and virtual networks. This revolution, which is evolving at an exponential rather than linear pace, will enable the creation of smarter factories, industries and processes, which will in turn translate into improved flexibility and productivity and better use of material and human resources.
This article is devoted to introducing this new industrial revolution (Industry 4.0), the major technologies contributing to its emergence, their expected benefits, and the challenges that must be taken into consideration.
Industry 4.0 is a tsunami that will sweep over the whole world. The real challenge of the factories of the future requires a high degree of reliability in both machinery and equipment. Shifting the rudder towards new trends is therefore an inevitable obligation in this fourth industrial revolution, in which maintenance has radically changed into a new paradigm called predictive maintenance 4.0 (PdM 4.0). PdM 4.0 is used to avoid predicted machine problems and increase machine lifespan, bearing in mind that machines with no predicted problems are never checked. However, to successfully predict any kind of problem while minimizing energy and resource consumption and saving costs, PdM 4.0 needs many new emerging technologies, such as the Internet of Things infrastructure, collection and distribution of data from different smart sensors, and analysis/interpretation of huge amounts of data using machine/deep learning. This paper is devoted to presenting Industry 4.0 and the specific technologies used to improve the existing predictive maintenance strategy. An example is given via a web platform to give a clear idea of how PdM 4.0 is applied in smart factories.
Modern wind turbines operate in continuously transient conditions, with varying speed, torque, and power based on the stochastic nature of the wind resource. This variability affects not only the operational performance of the wind power system but also its integrity under service conditions. Condition monitoring continues to play an important role in achieving reliable and economic operation of wind turbines. This paper reviews the current advances in wind turbine condition monitoring, ranging from conventional condition monitoring and signal processing tools to machine-learning-based condition monitoring and the use of big data mining for predictive maintenance. A systematic review is presented of signal-based and data-driven modeling methodologies using intelligent and machine learning approaches, with a view to providing a critical evaluation of the recent developments in this area and their applications in diagnosis, prognosis, health assessment, and predictive maintenance of wind turbines and farms.
Deep learning techniques have recently brought many improvements in the field of neural network training, especially for prognosis and health management. The success of such an intelligent health assessment model depends not only on the availability of labeled historical data but also on careful sample selection. However, in real operating systems such as induction machines, which generally have a long reliable life, storing the entire operating history, including deterioration (e.g., of bearings), is very expensive and difficult to feed accurately into the training model. An alternative is to sequentially store samples whose degradation patterns resemble real damage behavior by imposing accelerated deterioration. The lack of labels and the differences in distributions caused by the imposed deterioration ultimately bias the training model and limit its knowledge capacity. In an attempt to overcome these drawbacks, a novel sequence-by-sequence deep learning algorithm is proposed that expands the generalization capacity by transferring knowledge obtained from the life cycles of similar systems. The new algorithm determines health status using a long short-term memory neural network as the primary component of adaptive learning, extracting both health stage and health index inferences. Experimental validation on the PRONOSTIA induction machine bearing degradation datasets clearly demonstrates the capacity and higher performance of the proposed deep learning knowledge transfer-based prognosis approach.
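A hedged sketch of the knowledge-transfer idea follows: an LSTM trained on run-to-failure sequences of a source bearing is reused on a target bearing by freezing the recurrent feature extractor and re-fitting only the output heads. The layer sizes, window length, and the two-head (health stage / health index) split are illustrative assumptions, not the paper's exact architecture.

```python
# Transfer of an LSTM prognosis model between bearings by layer freezing.
import tensorflow as tf
from tensorflow.keras import layers

def build_model(n_stages=4, window=64, n_features=2):
    inp = layers.Input(shape=(window, n_features))       # vibration windows
    h = layers.LSTM(64, return_sequences=True, name="lstm_1")(inp)
    h = layers.LSTM(32, name="lstm_2")(h)
    stage = layers.Dense(n_stages, activation="softmax", name="health_stage")(h)
    index = layers.Dense(1, activation="sigmoid", name="health_index")(h)
    return tf.keras.Model(inp, [stage, index])

losses = {"health_stage": "sparse_categorical_crossentropy",
          "health_index": "mse"}
source = build_model()
source.compile(optimizer="adam", loss=losses)
# ... fit `source` on the source bearing's full life cycle, then transfer:
target = build_model()
target.set_weights(source.get_weights())
for name in ("lstm_1", "lstm_2"):
    target.get_layer(name).trainable = False   # keep transferred knowledge
target.compile(optimizer="adam", loss=losses)  # fine-tune only the heads
```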
Digital twins (DTs) have transformed the industrial world by changing the development phase of a product and the way equipment is used. With a digital twin, data on the object's evolution allow us to anticipate and optimize its performance. Healthcare is in the midst of a digital transition towards personalized, predictive, preventive, and participatory medicine, and the digital twin is one of the key tools of this change. In this work, a DT is proposed for the diagnosis of breast cancer based on breast skin temperature. Research has focused on thermography as a non-invasive scanning solution for breast cancer diagnosis. However, body temperature is influenced by many factors, such as breast anatomy, physiological functions, blood pressure, etc. The proposed DT updates the bio-heat model's temperature using the data collected by temperature sensors and complementary data from smart devices. Consequently, the proposed DT is personalized using the collected data to reflect the behavior of the person to whom it is connected.
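For reference, the bio-heat model most commonly used in tissue-temperature twins of this kind is the Pennes equation; the abstract does not specify the exact formulation used, so the following is only the standard form such a model typically takes:

```latex
\rho c \, \frac{\partial T}{\partial t}
  = \nabla \cdot \left( k \, \nabla T \right)
  + \rho_b \, c_b \, \omega_b \left( T_a - T \right)
  + Q_m
```

where \(\rho\) and \(c\) are the tissue density and specific heat, \(k\) the thermal conductivity, \(\rho_b\), \(c_b\) and \(\omega_b\) the blood density, specific heat and perfusion rate, \(T_a\) the arterial temperature, and \(Q_m\) the metabolic heat generation; in a twin, sensor data would drive the estimation of person-specific parameters such as \(\omega_b\) and \(Q_m\).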
Since bearing deterioration patterns are difficult to collect from real, long-lifetime scenarios, data-driven research has been directed towards recovering them through accelerated life tests. Consequently, features that are insufficiently recovered due to rapid damage propagation are likely to lead to poorly generalized learning machines. Knowledge-driven learning offers a solution by providing prior assumptions through transfer learning. Likewise, the absence of true labels creates consistency problems between samples, and teacher-given label behaviors lead to ill-posed predictors. Therefore, in an attempt to overcome the drawbacks of incomplete, unlabeled data, a new autoencoder has been designed as an additional source that can correlate inputs and labels by exploiting label information in a completely unsupervised learning scheme. Its stacked denoising version is moreover able to recover representations more robustly for new unseen data. Due to the non-stationary and sequential nature of the samples, the recovered representations are fed into a transfer learning convolutional long short-term memory neural network for further meaningful representation learning. The assessment procedures were benchmarked against recent methods on different training datasets. The obtained results show higher efficiency, confirming the strength of the new learning path.
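A minimal sketch of a stacked denoising autoencoder of the general kind described above is given below: inputs are corrupted with noise and the network is trained to reconstruct the clean signal, yielding representations robust to unseen data. The layer sizes, noise level, and feature dimension are assumptions; the paper's label-exploiting variant is not reproduced.

```python
# Stacked denoising autoencoder: corrupt inputs, reconstruct clean ones.
import tensorflow as tf
from tensorflow.keras import layers

n_features = 128                               # e.g. spectral features per window
inp = layers.Input(shape=(n_features,))
x = layers.GaussianNoise(0.1)(inp)             # denoising corruption (train only)
x = layers.Dense(64, activation="relu")(x)     # first encoding layer
code = layers.Dense(32, activation="relu")(x)  # compact representation
x = layers.Dense(64, activation="relu")(code)
out = layers.Dense(n_features)(x)              # reconstruct the clean input

sdae = tf.keras.Model(inp, out)
sdae.compile(optimizer="adam", loss="mse")
# after training, the encoder alone supplies representations for the
# downstream convolutional LSTM predictor:
encoder = tf.keras.Model(inp, code)
```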
Optical burst switching (OBS) has become one of the best and most widely used optical networking techniques. It offers more efficient bandwidth usage than optical packet switching (OPS) and optical circuit switching (OCS). However, it suffers more attacks than other techniques, and classical security approaches cannot solve its security problems. Therefore, a new security approach based on machine learning and cloud computing is proposed in this article. We used the Google Colab platform to apply the Support Vector Machine (SVM) and the Extreme Learning Machine (ELM) to the Burst Header Packet (BHP) flooding attack on Optical Burst Switching (OBS) Network Data Set.
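As a hedged illustration of the classification step, the sketch below trains an SVM to flag BHP flooding behaviour from per-node traffic statistics; the CSV filename and column names are hypothetical stand-ins for the dataset's actual fields.

```python
# SVM classification of node behaviour from OBS traffic features.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

df = pd.read_csv("obs_bhp_dataset.csv")          # assumed local copy of the data
X = df.drop(columns=["node_class"])              # per-node traffic features
y = df["node_class"]                             # e.g. behaving / misbehaving

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```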
Nowadays, machine learning has emerged as a promising alternative for condition monitoring of industrial processes, making it indispensable for maintenance planning. Such a learning model can assess health states in real time, provided that training and testing samples are complete and share the same probability distribution. However, these requirements are rarely met in practice due to continuously changing working conditions. Moreover, conventional hyperparameter tuning via grid search or manual tuning requires considerable human intervention and is inflexible for users. Two objectives are targeted in this work. First, to remedy the data distribution mismatch, we introduce a feature extraction and selection approach built on correlation analysis and dimensionality reduction. Second, to reduce the burden of human intervention, we propose an Automatic artificial Neural network with an Augmented Hidden Layer (Auto-NAHL) for the classification of health states. Within the designed network, the novelty of the implemented neural architecture lies in the multiple feature mappings of the inputs: this configuration allows the hidden layer to learn multiple representations from several random linear mappings and produce a single, efficient final representation. Hyperparameter tuning, including the network architecture, is fully automated using the Particle Swarm Optimization (PSO) technique. The designed learning process is evaluated on a complex industrial plant as well as on various classification problems. Based on the obtained results, our proposal yields a better response to new hidden representations, achieving a higher approximation quality than several previous works.
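The augmented-hidden-layer idea can be sketched as follows: several random linear mappings of the inputs are each passed through a nonlinearity and concatenated into one wide hidden representation, after which an output layer is fit in closed form (an ELM-style choice assumed here for brevity). All sizes are assumptions; in the paper these hyperparameters are tuned by PSO rather than fixed.

```python
# Augmented hidden layer: concatenate several random nonlinear mappings,
# then fit a ridge-regularized linear output layer in closed form.
import numpy as np

def augmented_hidden_layer(X, n_maps=4, width=50, seed=0):
    rng = np.random.default_rng(seed)
    blocks = []
    for _ in range(n_maps):                    # several random linear mappings
        W = rng.normal(size=(X.shape[1], width))
        b = rng.normal(size=width)
        blocks.append(np.tanh(X @ W + b))      # nonlinear representation
    return np.hstack(blocks)                   # single final representation

def fit_output_layer(H, Y, ridge=1e-2):
    return np.linalg.solve(H.T @ H + ridge * np.eye(H.shape[1]), H.T @ Y)

# toy usage: two-class problem with one-hot targets
X = np.random.default_rng(1).normal(size=(200, 10))
Y = np.eye(2)[(X[:, 0] > 0).astype(int)]
H = augmented_hidden_layer(X)
beta = fit_output_layer(H, Y)
pred = (H @ beta).argmax(axis=1)
print("train accuracy:", (pred == Y.argmax(axis=1)).mean())
```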