Accurate solar irradiance forecasts are now key to successfully integrating the (variable) production from large solar energy systems into the electricity grid. This paper describes a wrapper forecasting methodology for irradiance time series that combines mutual information and an Extreme Learning Machine (ELM), with application to short forecast horizons between 5-min and 3-h ahead. The method is referred to as the Wrapper Mutual Information Methodology (WMIM). To evaluate the proposed approach, its performance is compared to that of three dimensionality reduction scenarios: full space (latest 50 variables), partial space (latest 5 variables), and the usual Principal Component Analysis (PCA). Based on measured irradiance data from two arid sites (Madina and Tamanrasset), the present results reveal that reducing the historical input space increases the forecasting performance of global solar radiation. In the case of Madina and forecast horizons from 5-min to 30-min ahead, the WMIM forecasts have a better coefficient of determination (R2 between 0.927 and 0.967) than those using the next best performing strategy, PCA (R2 between 0.921 and 0.959). The Mean Absolute Percentage Error (MAPE) is also better for WMIM [7.4–10.77] than for PCA [8.4–11.55]. In the case of Tamanrasset and forecast horizons from 1-h to 3-h ahead, the WMIM forecasts have an R2 between 0.883 and 0.957, slightly better than the next best performing strategy (PCA) (R2 between 0.873 and 0.910). The Normalized Mean Squared Error (NMSE) is similarly better for WMIM [0.048–0.128] than for PCA [0.105–0.130]. It is also found that the ELM technique is considerably more computationally efficient than the more conventional Multi-Layer Perceptron (MLP). It is concluded that the proposed mutual information-based variable selection method has the potential to outperform various other proposed techniques in terms of prediction performance.
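To make the two building blocks of WMIM concrete, the following minimal Python sketch (using synthetic data; n_lags, n_hidden and all other names are illustrative assumptions, not the paper's code) ranks lagged irradiance values by mutual information and fits a basic single-hidden-layer ELM whose output weights come from a pseudo-inverse solve.

```python
# Illustrative sketch only: mutual-information lag ranking plus a basic ELM.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * rng.standard_normal(2000)  # stand-in for irradiance

n_lags = 50                                   # "full space": the latest 50 lagged values
X = np.column_stack([series[n_lags - k: len(series) - k] for k in range(1, n_lags + 1)])
y = series[n_lags:]

# Rank lags by mutual information with the target and keep the most informative ones.
mi = mutual_info_regression(X, y, random_state=0)
keep = np.argsort(mi)[::-1][:5]               # reduced input space
X_sel = X[:, keep]

# Basic ELM: random hidden layer, output weights from a least-squares (pseudo-inverse) solve.
def elm_fit(X, y, n_hidden=30, seed=0):
    r = np.random.default_rng(seed)
    W = r.standard_normal((X.shape[1], n_hidden))
    b = r.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    return W, b, np.linalg.pinv(H) @ y

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

split = int(0.8 * len(y))
W, b, beta = elm_fit(X_sel[:split], y[:split])
pred = elm_predict(X_sel[split:], W, b, beta)
print("hold-out RMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)))
```

Note that ELM training reduces to one random projection plus one least-squares solve, which is where its speed advantage over an iteratively trained MLP comes from.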
This contribution proposes a novel solar time series forecasting approach based on multi-model statistical ensembles to predict global horizontal irradiance (GHI) at short-term horizons (up to 1 hour ahead). The goal of the proposed methodology is to exploit the diversity of a set of dissimilar predictors in order to increase the accuracy of the forecasting process. The performance of a specific multi-model ensemble forecast showing an improved forecast skill is demonstrated and compared to a variety of individual single models. The proposed system can be applied in two distinct ways. The first one combines the forecasts obtained from the different forecasting models constituting the ensemble via a linear combination (combination-based approach). The other one is a novel methodology that delivers as output the forecast provided by the specific model (involved in the ensemble) that achieves the highest accuracy in the region of the variable space associated with the considered GHI time series (selection-based approach); this region results from an appropriate partition of the variable space. The efficiency of the proposed methodology has been evaluated using high-quality measurements carried out at 1-min intervals at four radiometric sites representing widely different radiative climates (arid, temperate, tropical, and high albedo). The obtained results emphasize that, at all sites, the proposed multi-model ensemble is able to increase the accuracy of the forecasting process using the different combination approaches, with a significant performance improvement when using the selection-based classification strategy.
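As an illustration of the two ensemble modes (not the exact implementation), the sketch below assumes the member forecasts are already available as arrays: the combination-based mode fits linear weights by least squares on a validation set, and the selection-based mode partitions the variable space with k-means (an assumed stand-in for the paper's partitioning) and remembers the best member per zone.

```python
# Sketch of the two ensemble modes described above (combination vs. selection).
# Member models, the cluster count and all variable names are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def combine_linear(member_preds, y_val):
    """Combination-based mode: least-squares weights for a linear blend of member forecasts."""
    P = np.column_stack(member_preds)             # (n_samples, n_members)
    w, *_ = np.linalg.lstsq(P, y_val, rcond=None)
    return w                                      # new forecast = P_new @ w

def fit_selection(X_val, member_preds, y_val, n_zones=4):
    """Selection-based mode: partition the variable space, keep the best member per zone."""
    km = KMeans(n_clusters=n_zones, n_init=10, random_state=0).fit(X_val)
    P = np.column_stack(member_preds)
    best = {}
    for z in range(n_zones):                      # assumes every zone holds validation samples
        idx = km.labels_ == z
        errors = np.mean((P[idx] - y_val[idx, None]) ** 2, axis=0)
        best[z] = int(np.argmin(errors))
    return km, best

def predict_selection(km, best, X_new, member_preds_new):
    """Route each new case to the member that performed best in its zone."""
    P = np.column_stack(member_preds_new)
    zones = km.predict(X_new)
    return P[np.arange(len(zones)), [best[z] for z in zones]]
```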
Traditional detection methods have the disadvantages of radiation exposure, high cost, and a shortage of medical resources, which restrict the uptake of early screening for breast cancer. An inexpensive, accessible, and user-friendly means of detection is urgently needed. Infrared thermography, an emerging modality for breast cancer detection, is extremely sensitive to tissue abnormalities caused by inflammation and vascular proliferation. In this work, combining temperature and texture features, we designed a breast cancer detection system based on a smartphone with an infrared camera, achieving an accuracy of 99.21% with the k-Nearest Neighbor classifier. We compared the diagnostic results obtained at the low resolution of the phone camera with those at the high resolution of a conventional infrared camera. Accuracy and sensitivity decreased slightly, but both remained above 98%. The proposed breast cancer detection system not only delivers excellent performance but also dramatically reduces the detection cost, making its prospects highly promising.
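A minimal sketch of the classification stage only, assuming the temperature and texture features have already been extracted from the thermograms (the placeholder arrays below stand in for them; the feature dimension and labels are assumptions).

```python
# Minimal sketch of the k-NN classification stage; thermogram feature extraction is assumed done.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 12))          # placeholder temperature/texture feature vectors
y = rng.integers(0, 2, 200)                 # placeholder labels: 0 = healthy, 1 = suspicious

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
scaler = StandardScaler().fit(X_tr)
knn = KNeighborsClassifier(n_neighbors=5).fit(scaler.transform(X_tr), y_tr)

pred = knn.predict(scaler.transform(X_te))
print("accuracy:", accuracy_score(y_te, pred))
print("sensitivity:", recall_score(y_te, pred))   # recall on the positive class
```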
Control and monitoring of current manufacturing systems has become an increasingly complex problem. To improve their reliability, we propose in this work a distributed approach for control and monitoring using Multi-Agent Systems. This approach is based on the decomposition of the complex system into subsystems that are easier to manage, and on the design of several agents, each dedicated to a particular task. A software application supporting this approach is developed for the cement clinker system of the Ain Touta cement plant, which was chosen to test the approach on real data. The results show that our distributed approach produces better results than centralized health monitoring and control.
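A toy sketch of the decomposition idea, with purely illustrative subsystem names and thresholds (the actual agent platform and tasks are not reproduced here): one monitoring agent per subsystem and a coordinator that collects their local verdicts instead of monitoring the whole plant centrally.

```python
# Toy sketch of the multi-agent decomposition: one agent per subsystem, each with its
# own monitoring task, plus a coordinator. Names and limits are illustrative assumptions.
class MonitoringAgent:
    def __init__(self, subsystem, limit):
        self.subsystem = subsystem
        self.limit = limit

    def check(self, measurement):
        """Local task: flag the subsystem when its measurement exceeds its limit."""
        return {"subsystem": self.subsystem, "alarm": measurement > self.limit}

class Coordinator:
    def __init__(self, agents):
        self.agents = agents

    def supervise(self, measurements):
        """Collect local verdicts rather than monitoring the whole plant centrally."""
        return [a.check(measurements[a.subsystem]) for a in self.agents]

agents = [MonitoringAgent("preheater", 870.0), MonitoringAgent("kiln", 1450.0)]
print(Coordinator(agents).supervise({"preheater": 900.0, "kiln": 1400.0}))
```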
Fault prognosis in industrial plants is a complex problem, and time is an important factor in its resolution. The main indicator for the fault prognosis task is the estimate of remaining useful life (RUL), which essentially depends on the predicted time to failure. This paper introduces a temporal neuro-fuzzy system (TNFS) for performing the fault prognosis task and accurately estimating the RUL of preheater cyclones in a cement plant. The main component of the TNFS is a set of temporal fuzzy rules chosen for their ability to explain the behavior of the entire system, the components' degradation, and the RUL estimation. The benefit of introducing time into the structure of the fuzzy rules is that a local memory of the TNFS is created to capture the dynamics of the prognostic task. More precisely, the paper emphasizes improving the prediction performance of TNFSs. The RUL estimation process is broken down into four generic steps: building a predictive model, selecting the most critical parameters, training the TNFS, and predicting the RUL through the generated temporal fuzzy rules. Finally, the performance of the proposed TNFS is evaluated using a real preheater cement cyclone dataset. The results show that our TNFS produces better results than classical neuro-fuzzy systems and neural networks.
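The following is a hedged sketch, not the paper's TNFS: a tiny Takagi-Sugeno-style predictor with a temporal (lagged) input illustrates how the RUL can be read off as the number of prediction steps until a degradation indicator crosses a failure threshold. All membership functions, rule consequents and thresholds are assumptions for illustration.

```python
# Illustrative sketch only: a two-rule temporal fuzzy predictor and an RUL read-off.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_predict(x_t, x_prev):
    """Two temporal Takagi-Sugeno rules on the current and previous indicator values."""
    w_low  = tri(x_t, 0.0, 0.2, 0.6) * tri(x_prev, 0.0, 0.2, 0.6)
    w_high = tri(x_t, 0.4, 0.8, 1.2) * tri(x_prev, 0.4, 0.8, 1.2)
    y_low, y_high = x_t + 0.01, x_t + 0.05        # consequents: slow vs. fast degradation
    return (w_low * y_low + w_high * y_high) / (w_low + w_high + 1e-9)

def estimate_rul(x_t, x_prev, failure_level=1.0, max_steps=500):
    """Iterate the one-step predictor until the failure threshold is reached."""
    for step in range(1, max_steps + 1):
        x_next = fuzzy_predict(x_t, x_prev)
        if x_next >= failure_level:
            return step                            # RUL expressed in prediction steps
        x_prev, x_t = x_t, x_next
    return max_steps

print("estimated RUL (steps):", estimate_rul(0.55, 0.50))
```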
Nowadays, real-life constraints necessitate controlling modern machines through human intervention by means of the sensory organs. The voice is one of the human faculties that can be used to control/monitor modern interfaces. In this context, Automatic Speech Recognition is principally used to convert natural speech into computer text as well as to perform an action based on the instructions given by the human. In this paper, we propose a general framework for Arabic speech recognition that uses a Long Short-Term Memory (LSTM) network and a Neural Network (Multi-Layer Perceptron: MLP) classifier to cope with the non-uniform sequence length of the speech utterances produced by two feature extraction techniques: (1) Mel Frequency Cepstral Coefficients (MFCC, static and dynamic features) and (2) Filter Bank (FB) coefficients. The neural architecture can recognize isolated Arabic speech via a classification technique. The proposed system involves, first, extracting pertinent features from the natural speech signal using MFCC (static and dynamic features) and FB. Next, the extracted features are padded in order to deal with the non-uniformity of the sequence lengths. Then, a deep architecture, represented by a recurrent LSTM or GRU (Gated Recurrent Unit) network, is used to encode the sequence of MFCC/FB features as a fixed-size vector that is fed to a Multi-Layer Perceptron (MLP) network to perform the classification (recognition). The proposed system is assessed using two different databases: the first concerns spoken digit recognition, where a comparison with other related works in the literature is performed, whereas the second contains spoken TV commands. The obtained results show the superiority of the proposed approach.
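A hedged Keras sketch of the described pipeline (padding, recurrent encoding, MLP classification head); the feature dimension, sequence lengths, class count and placeholder data are assumptions, not the paper's setup.

```python
# Sketch of the padding -> recurrent encoder -> MLP pipeline with placeholder data.
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.sequence import pad_sequences

n_classes, n_mfcc = 10, 39                       # e.g. 10 spoken digits, 13 MFCC + deltas (assumed)
rng = np.random.default_rng(0)
# Variable-length utterances: list of (frames, n_mfcc) arrays standing in for MFCC/FB features.
utterances = [rng.standard_normal((rng.integers(40, 120), n_mfcc)) for _ in range(32)]
labels = rng.integers(0, n_classes, len(utterances))

# Pad to a common length so a single batch tensor can be formed.
X = pad_sequences(utterances, padding="post", dtype="float32")

model = models.Sequential([
    layers.Masking(mask_value=0.0, input_shape=(X.shape[1], n_mfcc)),  # ignore padded frames
    layers.LSTM(128),                      # or layers.GRU(128): fixed-size utterance encoding
    layers.Dense(64, activation="relu"),   # MLP classification head
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=2, batch_size=8, verbose=0)   # placeholder training run
```

The Masking layer keeps the padded frames from influencing the encoding, and swapping layers.LSTM(128) for layers.GRU(128) gives the GRU variant mentioned above.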
Process controls (basic as well as advanced) are implemented within the process control system, which may mean a distributed control system (DCS), programmable logic controllers (PLC), and/or a supervisory control computer. DCSs and PLCs are typically industrially hardened and fault-tolerant. Supervisory control computers are often neither hardened nor fault-tolerant, but they bring a higher level of computational capability to the control system to host valuable, but not critical, advanced control applications. Advanced controls may reside in either the DCS or the supervisory computer, depending on the application. Basic controls reside in the DCS and its subsystems, including PLCs. Because we usually deal with real-world systems subject to real-world constraints (cost, computer resources, size, weight, power, heat dissipation, etc.), it is understood that the simplest method that accomplishes a task is the one that should be used. Experts usually rely on common sense when they solve problems, and they use vague and ambiguous terms. Other experts have no difficulty understanding and interpreting such statements because they are used to hearing problems described in this way; a knowledge engineer, however, would have difficulty providing a computer with the same level of understanding. In a complex industrial process, how can we represent expert knowledge expressed in vague and fuzzy terms in a computer in order to control the process? In this context, an application is developed to control the milk pretreatment and pasteurization station located in Batna (Algeria) by adopting a control approach based on expert knowledge and fuzzy logic.
Keywords: Intelligent Control; Data Acquisition; Industrial Process Control; Fuzzy Control
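To illustrate how vague expert terms can be turned into a controller, here is a toy Mamdani-style fuzzy sketch (not the station's actual rule base): the temperature error relative to the pasteurization setpoint is fuzzified into three linguistic terms, three rules are aggregated by max-min inference, and the valve command is obtained by centroid defuzzification. All fuzzy sets and limits are illustrative assumptions.

```python
# Toy Mamdani-style fuzzy controller: one input (temperature error), one output (valve %).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def valve_command(temp_error):
    # Fuzzify the error (setpoint minus measured temperature, in degC) into linguistic terms.
    negative = tri(temp_error, -10.0, -5.0, 0.0)   # too hot
    zero     = tri(temp_error, -2.0, 0.0, 2.0)     # on target
    positive = tri(temp_error, 0.0, 5.0, 10.0)     # too cold

    universe   = np.linspace(0.0, 100.0, 201)      # heating valve opening in %
    open_set   = np.array([tri(u, 60.0, 100.0, 140.0) for u in universe])
    hold_set   = np.array([tri(u, 30.0, 50.0, 70.0) for u in universe])
    closed_set = np.array([tri(u, -40.0, 0.0, 40.0) for u in universe])

    # Rule base with max-min inference: too cold -> open, on target -> hold, too hot -> close.
    aggregated = np.maximum.reduce([
        np.minimum(positive, open_set),
        np.minimum(zero, hold_set),
        np.minimum(negative, closed_set),
    ])
    # Centroid defuzzification gives the crisp valve command.
    return float(np.sum(universe * aggregated) / (np.sum(aggregated) + 1e-9))

print("valve %:", valve_command(temp_error=4.0))   # product 4 degC below setpoint
```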
Complex engineering manufacturing systems require efficient on-line fault diagnosis methodologies to improve safety and reduce maintenance costs. Traditionally, diagnosis and prognosis approaches are centralized, but such solutions are difficult to implement on the increasingly prevalent distributed, networked embedded systems, whereas a distributed approach with multiple diagnosis and prognosis agents can offer a solution. In addition, the capability to control and observe the process plant of a manufacturing system from a remote location has several benefits, including the ability to track and assist in solving a problem that might arise. This paper presents a distributed, web-based prognosis and diagnosis approach for physical systems based on a multi-agent system and a Service-Oriented Architecture. Specific prognostic and diagnostic procedures and the key modules of the architecture for a Web Service-based Distributed Fault Prognostic and Diagnosis framework are detailed and developed for the preheater cement cyclones in the SCIMAT clinker workshop. The experimental case study reported in the present paper shows encouraging results and fosters industrial technology transfer.
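A minimal sketch of the service-oriented idea, assuming Flask and illustrative cyclone variables (this is not the SCIMAT implementation): a local diagnosis agent for one cyclone is exposed as a web service that remote prognosis or supervision agents can query.

```python
# Sketch: a local diagnosis agent exposed as a web service for remote agents to query.
# Flask, the endpoint name and the temperature limit are illustrative assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

class CycloneDiagnosisAgent:
    def __init__(self, cyclone_id, temp_limit=880.0):
        self.cyclone_id = cyclone_id
        self.temp_limit = temp_limit

    def diagnose(self, outlet_temp):
        """Local rule: flag a possible fault when the outlet temperature drifts above its limit."""
        return {"cyclone": self.cyclone_id,
                "fault_suspected": outlet_temp > self.temp_limit,
                "outlet_temp": outlet_temp}

agent = CycloneDiagnosisAgent("C4")

@app.route("/diagnosis")
def diagnosis():
    temp = float(request.args.get("outlet_temp", 0.0))
    return jsonify(agent.diagnose(temp))   # e.g. GET /diagnosis?outlet_temp=905

if __name__ == "__main__":
    app.run(port=5000)                     # remote prognosis agents poll this service
```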