Nowadays, real-life constraints necessitate controlling modern machines through human intervention by means of the sensory organs. Voice is one of the human modalities that can control and monitor modern interfaces. In this context, Automatic Speech Recognition (ASR) is principally used to convert natural speech into computer text, as well as to perform actions based on instructions given by a human. In this paper, we propose a general framework for Arabic speech recognition that uses a Long Short-Term Memory (LSTM) network and a neural network (Multi-Layer Perceptron, MLP) classifier to cope with the non-uniform sequence lengths of the speech utterances issued from both feature extraction techniques: (1) Mel-Frequency Cepstral Coefficients (MFCC, static and dynamic features) and (2) Filter Bank (FB) coefficients. The neural architecture recognizes isolated Arabic speech via classification. The proposed system first extracts pertinent features from the natural speech signal using MFCC (static and dynamic features) and FB. Next, the extracted features are padded in order to deal with the non-uniformity of the sequence lengths. Then, a deep recurrent architecture, either an LSTM or a GRU (Gated Recurrent Unit), encodes the sequence of MFCC/FB features into a fixed-size vector that is fed to a Multi-Layer Perceptron (MLP) to perform the classification (recognition). The proposed system is assessed on two different databases: the first concerns spoken digit recognition, where a comparison with related works in the literature is performed, whereas the second contains spoken TV commands. The obtained results show the superiority of the proposed approach.
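The padding step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; it assumes each utterance arrives as a frames × coefficients matrix (here, 13 MFCCs per frame) and zero-pads every sequence to the length of the longest one so the batch can feed a recurrent encoder:

```python
import numpy as np

def pad_sequences(seqs, pad_value=0.0):
    """Zero-pad variable-length feature sequences (frames x coeffs) to uniform length."""
    max_len = max(s.shape[0] for s in seqs)
    n_coeffs = seqs[0].shape[1]
    out = np.full((len(seqs), max_len, n_coeffs), pad_value, dtype=np.float32)
    for i, s in enumerate(seqs):
        out[i, : s.shape[0], :] = s  # copy the real frames, leave the tail padded
    return out

# hypothetical batch: three utterances with different frame counts, 13 MFCCs each
utts = [np.random.randn(n, 13).astype(np.float32) for n in (42, 65, 50)]
batch = pad_sequences(utts)
print(batch.shape)  # → (3, 65, 13)
```

The resulting fixed-shape tensor is what an LSTM/GRU encoder can then consume batch-wise before the MLP classification stage.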
Process controls (basic as well as advanced) are implemented within the process control system, which may be a distributed control system (DCS), a programmable logic controller (PLC), and/or a supervisory control computer. DCSs and PLCs are typically industrially hardened and fault-tolerant. Supervisory control computers are often neither hardened nor fault-tolerant, but they bring a higher level of computational capability to the control system and can host valuable, but not critical, advanced control applications. Advanced controls may reside in either the DCS or the supervisory computer, depending on the application; basic controls reside in the DCS and its subsystems, including PLCs. Because we usually deal with real-world systems under real-world constraints (cost, computer resources, size, weight, power, heat dissipation, etc.), it is understood that the simplest method that accomplishes a task is the one that should be used. Experts usually rely on common sense when they solve problems, and they often use vague and ambiguous terms. Other experts have no difficulty understanding and interpreting such statements because they have the background to hear problems described in this way. A knowledge engineer, however, would have difficulty providing a computer with the same level of understanding. In a complex industrial process, how can we represent expert knowledge that uses vague and fuzzy terms in a computer in order to control that process? In this context, an application is developed to control the milk pretreatment and pasteurization station located in Batna (Algeria) by adopting a control approach based on expert knowledge and fuzzy logic. Keywords - Intelligent Control; Data Acquisition; Industrial Process Control; Fuzzy Control
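As a minimal illustration of how vague expert statements become a fuzzy controller (this is not the plant's actual rule base; the error ranges, rules, and valve percentages are all hypothetical), the sketch below maps a pasteurization temperature error onto a steam-valve opening using triangular membership functions, three rules, and weighted-average (Sugeno-style) defuzzification:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_valve_opening(temp_error):
    """Map temperature error (measured minus target, degrees C) to valve opening (%)."""
    # memberships of the error in the linguistic terms (hypothetical ranges)
    neg  = tri(temp_error, -10.0, -5.0, 0.0)   # "too cold"
    zero = tri(temp_error,  -2.0,  0.0, 2.0)   # "about right"
    pos  = tri(temp_error,   0.0,  5.0, 10.0)  # "too hot"
    # rule base: too cold -> open more (80%), about right -> hold (50%), too hot -> close (20%)
    strengths = np.array([neg, zero, pos])
    outputs   = np.array([80.0, 50.0, 20.0])
    # weighted average of rule outputs by rule firing strength
    return float(np.dot(strengths, outputs) / (strengths.sum() + 1e-12))

print(fuzzy_valve_opening(-5.0))  # fully "too cold" → 80.0
print(fuzzy_valve_opening(0.0))   # fully "about right" → 50.0
```

Intermediate errors fire several rules at once, which is precisely how the controller interpolates smoothly between the expert's vague categories.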
Control and monitoring of current manufacturing systems has become an increasingly complex problem. To improve their reliability, we propose in this work a distributed approach for control and monitoring using Multi-Agent Systems. This approach is based on the decomposition of the complex system into subsystems that are easier to manage, and on the design of several agents, each of which is dedicated to a particular task. A software application supporting this approach is developed for the cement clinker system of the Ain Touta cement plant, which was chosen to test the approach on real data. The results show that our distributed approach produces better results than centralized health monitoring and control.
Complex engineering manufacturing systems require efficient on-line fault diagnosis methodologies to improve safety and reduce maintenance costs. Traditionally, diagnosis and prognosis approaches are centralized, but such solutions are difficult to implement on increasingly prevalent distributed, networked embedded systems, whereas a distributed approach using multiple diagnosis and prognosis agents can offer a solution. Moreover, the capability to control and observe the process plant of a manufacturing system from a remote location has several benefits, including the ability to track a problem that might arise and to assist in solving it. This paper presents a distributed, remote prognosis and diagnosis approach for physical systems based on multi-agent systems and a Service-Oriented Architecture. Specific prognostic and diagnostic procedures and the key modules of the architecture of the Web Service-based Distributed Fault Prognosis and Diagnosis framework are detailed and developed for the preheater cement cyclones in the clinker workshop of SCIMAT. The experimental case study reported in the present paper shows encouraging results and fosters industrial technology transfer.
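A minimal sketch of the agent decomposition described in the two abstracts above (hypothetical class names and thresholds, not the SCIMAT framework itself): each local agent diagnoses one preheater cyclone from its gas temperature, and a coordinator aggregates the local reports into a plant-level view:

```python
from dataclasses import dataclass

@dataclass
class DiagnosisReport:
    agent_id: str
    fault: bool
    detail: str

class CycloneAgent:
    """Hypothetical local diagnosis agent: flags a cyclone whose gas
    temperature leaves its nominal band (thresholds are illustrative)."""
    def __init__(self, agent_id, t_min, t_max):
        self.agent_id, self.t_min, self.t_max = agent_id, t_min, t_max

    def diagnose(self, temp_c):
        if temp_c < self.t_min:
            return DiagnosisReport(self.agent_id, True, f"under-temperature ({temp_c} C)")
        if temp_c > self.t_max:
            return DiagnosisReport(self.agent_id, True, f"over-temperature ({temp_c} C)")
        return DiagnosisReport(self.agent_id, False, "nominal")

def aggregate(reports):
    """Coordinator view: identifiers of the faulty subsystems."""
    return [r.agent_id for r in reports if r.fault]

# two cyclones with different nominal bands and one abnormal reading
agents = [CycloneAgent("cyclone-1", 300.0, 400.0), CycloneAgent("cyclone-2", 500.0, 650.0)]
readings = {"cyclone-1": 420.0, "cyclone-2": 600.0}
reports = [a.diagnose(readings[a.agent_id]) for a in agents]
print(aggregate(reports))  # → ['cyclone-1']
```

In a Service-Oriented deployment, each agent's `diagnose` call would be exposed as a web service and the coordinator would invoke it remotely; the decomposition logic is the same.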
Accurate solar irradiance forecasts are now key to successfully integrating the (variable) production from large solar energy systems into the electricity grid. This paper describes a wrapper forecasting methodology for irradiance time series that combines mutual information and an Extreme Learning Machine (ELM), with application to short forecast horizons between 5-min and 3-h ahead. The method is referred to as the Wrapper Mutual Information Methodology (WMIM). To evaluate the proposed approach, its performance is compared to that of three dimensionality reduction scenarios: full space (latest 50 variables), partial space (latest 5 variables), and the usual Principal Component Analysis (PCA). Based on measured irradiance data from two arid sites (Madina and Tamanrasset), the present results reveal that the reduction of the historical input space increases the forecasting performance for global solar radiation. In the case of Madina and forecast horizons from 5-min to 30-min ahead, the WMIM forecasts have a better coefficient of determination (R2 between 0.927 and 0.967) than those using the next best performing strategy, PCA (R2 between 0.921 and 0.959). The Mean Absolute Percentage Error (MAPE) is also better for WMIM [7.4–10.77] than for PCA [8.4–11.55]. In the case of Tamanrasset and forecast horizons from 1-h to 3-h ahead, the WMIM forecasts have an R2 between 0.883 and 0.957, slightly better than the next best performing strategy, PCA (R2 between 0.873 and 0.910). The Normalized Mean Squared Error (NMSE) is similarly better for WMIM [0.048–0.128] than for PCA [0.105–0.130]. It is also found that the ELM technique is considerably more computationally efficient than the more conventional Multi-Layer Perceptron (MLP). It is concluded that the proposed mutual information-based variable selection method has the potential to outperform various other proposed techniques in terms of prediction performance.
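A minimal sketch of the ELM regressor underlying the approach (illustrative only: the mutual-information variable selection is omitted, and the data, sizes, and seed are assumptions). Hidden-layer weights are drawn at random and never trained; only the output layer is solved, in closed form, by least squares, which is what makes ELM training much faster than an iteratively trained MLP:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Extreme Learning Machine: random hidden layer, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # fixed random input weights
    b = rng.normal(size=n_hidden)                # fixed random biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output layer
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# hypothetical toy problem: target is a smooth function of 5 lagged inputs
X = rng.uniform(-1.0, 1.0, size=(200, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
W, b, beta = elm_fit(X, y)
pred = elm_predict(X, W, b, beta)
print(round(float(np.corrcoef(y, pred)[0, 1]) ** 2, 3))  # training R2, close to 1
```

In the wrapper methodology, the candidate input variables fed into `X` would first be ranked by their mutual information with the target before the ELM is fit.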