A Soft Computing Approach for Fault Prediction of Electronic Systems


A Soft Computing Approach for Fault Prediction of Electronic Systems

Ajith Abraham
School of Computing & Information Technology
Monash University (Gippsland Campus), Victoria 3842, Australia

Abstract

This paper presents a soft computing approach for intelligent online performance monitoring of electronic circuits and systems. Reliability modeling of electronic circuits can be best performed by the stressor-susceptibility interaction model. A circuit or a system is deemed to have failed once a stressor has exceeded the susceptibility limits. For online prediction, validated stressor vectors may be obtained by direct measurements or sensors, which, after preprocessing and standardization, are fed into the neuro-fuzzy model. For comparison, we also trained an artificial neural network using backpropagation learning and evaluated its performance. The performance of the proposed prediction method is evaluated by comparing the experimental results with the actual failure model values. Test results reveal that the neuro-fuzzy model outperformed the neural network in terms of training time and error achieved.

Keywords: Soft computing, hybrid system, neuro-fuzzy, artificial neural network, stressor, susceptibility, Monte Carlo analysis, failure prediction.

1. Introduction

Real-time monitoring of the health of complex electronic systems and circuits is a difficult challenge for both human operators and expert systems. When the electronic circuit or system is controlling a critical task, fault prediction becomes very important. Soft computing was first proposed by Zadeh [14] to construct a new generation of computationally intelligent hybrid systems consisting of neural networks, fuzzy inference systems, approximate reasoning and derivative-free optimization techniques.
It is well known that intelligent systems, which can provide human-like expertise such as domain knowledge, uncertain reasoning, and adaptation to a noisy and time-varying environment, are important in tackling practical computing problems. In contrast with conventional Artificial Intelligence (AI) techniques, which only deal with precision, certainty and rigor, the guiding principle of hybrid systems is to exploit the tolerance for imprecision, uncertainty and partial truth to achieve tractability, robustness, low solution cost and better rapport with reality. The technique of electronic system failure prediction using stressor-susceptibility interaction is briefly discussed in Section 2 [1], [2], [5]. Section 3 presents practical methods for obtaining stressor sets. The derivation of stressor sets using Monte Carlo Analysis is given in Section 4, followed by Section 5, wherein we derive a stressor-susceptibility model for a circuit. Sections 6 and 7 give some theoretical background on neuro-fuzzy systems and artificial neural networks. In Section 8 we report the experimental results, and conclusions are provided in Section 9.

2. Stressor-Susceptibility Interaction

A stressor is a physical entity influencing the lifetime of a component or circuit. A stressor corresponding to a physical entity x will be denoted as ψ_x. Stressors can be broadly classified into three main groups. The first group contains the electrical stressors, parameters related to the electrical behavior of the circuit. The second group is the mechanical stressors, which are related to the mechanical environment of the component. The third group of parameters influencing the lifetime of components is related to the thermal environment of the component. The susceptibility of a component to a certain failure mechanism is defined as the probability function indicating the probability that the component will not remain operational for a certain time under a given combination of stressors.
The susceptibility related to failure mechanism y is usually defined as S_y(t, ψ_p, ψ_q, ψ_r). Failure probabilities require detailed analysis of both stressors and susceptibility. Most components tend to have more than one failure mechanism, resulting in more than one failure probability, and it can be shown that there is a strong correlation between the various failure mechanisms existing within a component.

Figure 1. Stressor-susceptibility interaction for a single failure mechanism

Figure 1 illustrates the stressor-susceptibility interaction for a single failure mechanism. It is clear that the main source of problems is the overlap between the stressor and susceptibility densities. The first step is to calculate the failure probability for this stressor distribution on a failure mechanism with a single-variable, time-independent, catastrophic susceptibility model. This results in the following failure probability:

f_{fail,y}(\psi_0) = \int_{\psi_0}^{\infty} f_y(\Psi) \, d\Psi

To calculate the failure probability for a more complex susceptibility model, it is necessary to calculate the failure probability of a part of the susceptibility model for a certain stressor interval, characterized by its mean value ψ_0 and the corresponding susceptibility density S_y(ψ_0) at that point. Taking into account the probability that a part has already failed at a lower susceptibility level, the failure probability per time interval of a certain failure mechanism at stressor level ψ_0 can be predicted using the relation

f_{fail,y,\psi}(\psi_0) = \left( S_y(\psi_0) \int_{\psi_0}^{\infty} f_y(\Psi) \, d\Psi \right) \left( 1 - \int_0^{\psi_0} f_{fail,y}(\psi) \, d\psi \right)

The last term is introduced to subtract failures caused by stressors at a lower susceptibility level. As failure probabilities are most often very small, in many cases the expression simplifies to

f_{fail,y,\psi}(\psi_0) \approx S_y(\psi_0) \int_{\psi_0}^{\infty} f_y(\Psi) \, d\Psi

This simplification holds when \int_0^{\psi_0} f_{fail,y}(\psi) \, d\psi \ll 1, i.e. when the bracketed correction term is approximately 1.

Since the susceptibility is defined as the probability that a component will not remain operational during a certain time, it is possible to calculate the failure probability during a certain observation time t_obs:

f_{fail,y,\psi}(\psi_0) = t_{obs} \cdot S_y(\psi_0) \int_{\psi_0}^{\infty} f_y(\Psi) \, d\Psi

An important requirement for using this equation is that the observation time t_obs must be larger than the total elapsed sampling time needed to obtain an ergodic description of the associated stressors (t_obs > t_total_sample); f_{fail,y,ψ}(t, ψ) is assumed to be constant during the time interval t_obs. From the previous equation it is possible to calculate the failure probability of a part per failure mechanism per time interval:

f_{fail,y} = \int_0^{\infty} f_{fail,y,\psi}(t, \psi) \, d\psi

This equation can now be used to calculate the part failure probability per time interval over all n failure mechanisms:

f_{fail} = \sum_{i=1}^{n} f_{fail,i}

Using the previous assumptions it is also possible to calculate the probability that a component survives from time t to t+Δt. The following expression gives the survival probability for one single failure mechanism within one single device:

R(t \ldots t+\Delta t) = \frac{\text{devices operational at } (t+\Delta t)}{\text{devices operational at } (t)} = R(t) - \Delta F(t) = R(t) - \Delta t \, f(t)

As the physical structures of the individual components in a large series differ from component to component, the survival probability of such a series of components will also show individual differences. The stress on a component may vary with time due to circuit behavior and circuit use. The circuit behavior will differ amongst a series of circuits due to physical differences in the individual circuit components, the physical structure of a circuit, the use of a circuit and the environment (electrical, thermal, etc.) of the circuit.
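For a single catastrophic susceptibility limit, the failure probability is simply the mass of the stressor density above the limit. The following sketch integrates an assumed Gaussian stressor density numerically; the density parameters and limit are illustrative only, not values from the paper's circuit.

```python
import math

# Hypothetical stressor density: Gaussian with mean 0.8 and sigma 0.2
# (illustrative values, not taken from the paper's circuit).
def stressor_pdf(psi, mean=0.8, sigma=0.2):
    return math.exp(-0.5 * ((psi - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def failure_probability(susceptibility_limit, hi=10.0, n=100_000):
    """Trapezoidal-rule approximation of the tail integral
    f_fail = integral from psi0 to infinity of f_y(psi) dpsi."""
    h = (hi - susceptibility_limit) / n
    total = 0.5 * (stressor_pdf(susceptibility_limit) + stressor_pdf(hi))
    for i in range(1, n):
        total += stressor_pdf(susceptibility_limit + i * h)
    return total * h

# Susceptibility limit two sigma above the stressor mean: a small tail mass.
p = failure_probability(1.2)
print(round(p, 4))
```

Only the overlap between the stressor density and the region above the limit contributes, which is exactly the overlap region highlighted in Figure 1.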
To summarize this variety of effects it is useful to describe stressors as stochastic signals with properties depending on the influencing factors mentioned above. These assumptions make it possible to derive the failure probability and reliability of a component using a Markov approach, for which the following requirements should be fulfilled:

- The susceptibility of all failure mechanisms in a component is known and is constant in the time interval (t, t+Δt).
- All stressors ψ_a(t), ψ_b(t), ... are known as stochastic signals for the time interval (t, t+Δt).
- The failure probability (or reliability) is known at a certain (initial) time t.

Using these properties it is possible to calculate the reliability and failure probability of components, derived from internal failure mechanisms, for time t+Δt. For this purpose the following relationships are used:

P(t+\Delta t) = P(t) \, M(\Delta t)

where P(t) is the state probability vector of a component and M(Δt) is the transition matrix. The state probability vector is defined by

\sum_{j=1}^{n} P_j(t) = 1, \quad P_1(t) = P_{operational}(t) = R(t), \quad P_{2 \ldots n}(t) = P_{fail, 2 \ldots n}(t) = F_{fail, 2 \ldots n}(t)

and the transition probabilities by

P_{x \to y} = P(y(t+\Delta t) \mid x(t)) = f_y(t) \, \Delta t \, P_x(t)

It is possible to replicate this calculation process for a whole batch of circuits. In this case the individual stressor-susceptibility interaction is calculated for every circuit, thus simulating batch behavior. Using this method it is possible to derive the failure probability of many parts in many practical situations, including cases where considerable differences (in stressors and susceptibility) exist within a batch.

3. Modeling Stressor Sets and Susceptibility

There are two different categories of failure mechanisms applicable to electronic components [3], [10], [11], [12]. First, there are failure mechanisms related to the electrical stress in a circuit. Second, there are failure mechanisms related to the intrinsic aspects of a component [4], [5]. There are two possible ways to obtain stressor sets for practical circuits.
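The Markov state update can be sketched for the simplest case of two states (operational, failed) with a constant hazard rate; the rate, step size and horizon below are invented for illustration.

```python
# Two-state Markov sketch: state 0 = operational, state 1 = failed.
# failure_rate is an assumed constant hazard (per hour), dt a small step.
def step(p, failure_rate, dt):
    """One Markov update P(t+dt) = P(t) @ M, with
    M = [[1 - f*dt, f*dt], [0, 1]] (a failed part stays failed)."""
    p_op, p_fail = p
    moved = p_op * failure_rate * dt
    return (p_op - moved, p_fail + moved)

p = (1.0, 0.0)           # initially all devices operational: R(0) = 1
for _ in range(1000):    # simulate 1000 hours with dt = 1 h
    p = step(p, failure_rate=1e-4, dt=1.0)
print(round(p[0], 3))    # reliability R(1000) ≈ exp(-0.1) ≈ 0.905
```

The state probabilities always sum to 1, and iterating the update reproduces the familiar exponential reliability curve for a constant hazard.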
The first possibility involves using computer simulation models to derive all circuit signals in one single simulation. The second possibility is to derive stressor sets from practical measurements. In those cases where sufficient systems are available it is possible to perform a statistical evaluation of the individual stressor functions existing in individual systems. As the stressor sets depend on the conditions of use and the operation modes of a system, it is important that the measured stressor is based on all the possible operation modes of a circuit and all the possible transitions between the various operation modes. This can become quite a tedious job, as the entire operation has to be repeated for a number of systems to obtain an accurate statistical mean stressor model.

An accurate description of a stressor set needs a sampling frequency of at least twice the highest frequency in the stressor frequency spectrum, and a number of samples sufficient to cover all the different states of the system. As a signal often has more than one quasi-stationary state, each characterized by its own stressor set, it is possible to derive the overall stressor set function from the individual state stressor sets using

f_{str,y}(x) = \sum_{i=1}^{n} \frac{T_i}{T_{total}} \, f_{str,y,i}(x)

where f_{str,y,i}(x) is the stressor probability density function of quasi-stationary state i and T_i / T_total is the fraction of time that the stressor spends in quasi-stationary state i.

4. Monte Carlo Analysis for Deriving Stressor Sets

In a Monte Carlo analysis (MCA), a logical model of the system being analyzed is repeatedly evaluated, each run using different values of the distributed parameters [1]. The selection of parameter values is made randomly, but with probabilities governed by the relevant distribution functions. Statistical exploration covers the tolerance space by generating sets of random parameters within this tolerance space. Each set of random parameters represents one circuit; multiple circuit simulations, each with a new set of random parameters, explore the tolerance space. Statistically, the distribution of all random selections of one parameter represents the parameter distribution.

Figure 2. Monte Carlo Analysis

Although the number of simulations required for MCA is quite large, this analysis method is useful, especially because the number of parameters in the failure prediction of circuits is often too large to allow the use of other techniques.
Figure 2 illustrates the MCA. With MCA it is possible to simulate the behavior of a large batch of circuits and derive stressor sets. The next phase is to combine the derived stressor sets with the component susceptibilities in order to decide whether a component will fail or not. Since the most important aspect of failure prediction is to prevent failures, susceptibility is expressed using the susceptibility limit. To distinguish circuits where failures are possible, any circuit in the MCA that causes a susceptibility limit to be exceeded is marked as fail; circuits where no stressor exceeds a susceptibility limit are marked as pass.

5. Experimental Model - Stressor Sets and Susceptibility

The analysis was carried out on a power circuit in which the main cause of failure was a Schottky diode. Analysis shows that the main failure mechanisms are leakage current and excess crystal temperature. Using the procedure described earlier, it was possible to derive a complete individual stressor set for the failure mechanisms of this diode. Figure 3 shows the joint stressor-susceptibility interaction model in terms of voltage and current. The susceptibility limit for leakage current is set at -1.5 A.

Figure 3. Stressor-susceptibility interaction model

6. Neuro-Fuzzy Systems

A Fuzzy Inference System (FIS) [9] can utilize human expertise by storing its essential components in a rule base and database, and perform fuzzy reasoning to infer the overall output value. The derivation of if-then rules and corresponding membership functions depends heavily on a priori knowledge about the system under consideration. However, there is no systematic way to transform the experience and knowledge of human experts into the knowledge base of a FIS, and there is also a need for adaptability, or some learning algorithm, to produce outputs within the required error rate. On the other hand, the ANN learning mechanism does not rely on human expertise.
Due to the homogeneous structure of an ANN, it is hard to extract structured knowledge from either the weights or the configuration of the network. The weights of the ANN represent the coefficients of the hyper-planes that partition the input space into regions with different output values. If we could visualize this hyper-plane structure from the training data, the subsequent learning procedures in an ANN could be reduced. However, in reality, the a priori knowledge is usually obtained from human experts, it is most appropriate to express that knowledge as a set of fuzzy if-then rules, and it is not possible to encode it directly into an ANN. Table 1 summarizes the comparison of FIS and ANN.

Table 1. Complementary features of ANN and FIS

ANN: Black box; learning from scratch
FIS: Interpretable; making use of linguistic knowledge

To a large extent, the drawbacks of these two approaches seem complementary. Therefore it is natural to consider building an integrated system combining the concepts of FIS and ANN modeling. A common way to apply a learning algorithm to a FIS is to represent it in a special ANN-like architecture. However, the conventional ANN learning algorithms (gradient descent) cannot be applied directly to such a system, as the functions used in the inference process are usually non-differentiable. This problem can be tackled by using differentiable functions in the inference system or by not using the standard neural learning algorithm. In our simulation, we used the Adaptive Network based Fuzzy Inference System (ANFIS) [7], as we found it to be the best neuro-fuzzy system available for function approximation problems [13]. ANFIS implements a Takagi-Sugeno-Kang (TSK) fuzzy inference system [8] in which the conclusion of a fuzzy rule is constituted by a weighted linear combination of the crisp inputs rather than a fuzzy set. For a first-order TSK model, a common rule set with two fuzzy if-then rules is represented as follows:

Rule 1: If x is A_1 and y is B_1, then f_1 = p_1 x + q_1 y + r_1
Rule 2: If x is A_2 and y is B_2, then f_2 = p_2 x + q_2 y + r_2

where x and y are linguistic variables, A_1, A_2, B_1, B_2 are the corresponding fuzzy sets and p_1, q_1, r_1 and p_2, q_2, r_2 are linear parameters.

Figure 4. TSK type fuzzy inference system

Figure 4 illustrates the TSK fuzzy inference system when 2 membership functions each are assigned to the 2 inputs (x and y). A TSK fuzzy controller usually needs a smaller number of rules, because its output is already a linear function of the inputs rather than a constant fuzzy set.

Figure 5. Architecture of the ANFIS

Figure 5 depicts the 5-layered architecture of ANFIS; the functionality of each layer is as follows.

Layer 1: Every node in this layer has a node function

O^1_i = \mu_{A_i}(x), \ i = 1, 2 \quad \text{or} \quad O^1_i = \mu_{B_{i-2}}(y), \ i = 3, 4

O^1_i is the membership grade of a fuzzy set A (= A_1, A_2, B_1 or B_2) and specifies the degree to which the given input x (or y) satisfies the quantifier A. The node function can be any parameterized membership function. A Gaussian membership function is specified by two parameters, c (membership function center) and σ (membership function width):

gaussian(x; c, \sigma) = e^{-\frac{1}{2}\left(\frac{x-c}{\sigma}\right)^2}

Parameters in this layer are referred to as premise parameters.
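The Gaussian membership node function can be sketched directly; the center and width values below are arbitrary illustrations, not trained premise parameters.

```python
import math

# Layer-1 Gaussian membership function with assumed premise parameters
# c (center) and sigma (width); the values used below are illustrative.
def gaussian_mf(x, c, sigma):
    return math.exp(-0.5 * ((x - c) / sigma) ** 2)

print(gaussian_mf(5.0, 5.0, 1.0))            # grade is 1.0 at the center
print(round(gaussian_mf(6.0, 5.0, 1.0), 3))  # e^{-1/2} ≈ 0.607 one sigma away
```

Because the membership grade is smooth and differentiable in c and σ, gradient-based learning of the premise parameters is possible, which is exactly what ANFIS exploits.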
Layer 2: Every node in this layer multiplies the incoming signals and sends the product out. Each node output represents the firing strength of a rule:

O^2_i = w_i = \mu_{A_i}(x) \, \mu_{B_i}(y), \ i = 1, 2

In general, any T-norm operator that performs fuzzy AND can be used as the node function in this layer.

Layer 3: The i-th node in this layer calculates the ratio of the i-th rule's firing strength to the sum of all rules' firing strengths:

O^3_i = \bar{w}_i = \frac{w_i}{w_1 + w_2}, \ i = 1, 2

Layer 4: Every node i in this layer has the node function

O^4_i = \bar{w}_i f_i = \bar{w}_i (p_i x + q_i y + r_i)

where \bar{w}_i is the output of layer 3 and {p_i, q_i, r_i} is the parameter set. Parameters in this layer are referred to as consequent parameters.

Layer 5: The single node in this layer computes the overall output as the summation of all incoming signals:

O^5 = \sum_i \bar{w}_i f_i = \frac{\sum_i w_i f_i}{\sum_i w_i}

ANFIS makes use of a mixture of backpropagation, to learn the premise parameters, and least-squares estimation, to determine the consequent parameters. A step in the learning procedure has two parts. In the first part the input patterns are propagated, and the optimal consequent parameters are estimated by an iterative least-squares procedure, while the premise parameters (membership functions) are held fixed for the current cycle through the training set. In the second part the patterns are propagated again, and in this epoch backpropagation is used to modify the premise parameters, while the consequent parameters remain fixed. This procedure is then iterated.

7. Artificial Neural Network Model

Artificial Neural Networks (ANNs) [6] have been developed as generalizations of mathematical models of biological nervous systems. A neural network is characterized by the network architecture, the connection strengths between pairs of neurons (weights), node properties, and updating rules. The updating or learning rules control the weights and/or states of the processing elements (neurons).
Normally, an objective function is defined that represents the complete status of the network, and its set of minima corresponds to different stable states of the network. It can learn by adapting its weights to changes in the surrounding environment, can handle imprecise information, and generalise from known tasks to unknown ones. Each neuron is an elementary processor with primitive operations, like summing the weighted inputs coming to it and then amplifying or thresholding the sum. There are three broad paradigms of learning: supervised, unsupervised (or self-organized) and reinforcement (a special case of supervised learning). We used a feedforward neural network using supervised backpropagation learning.
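A toy illustration of supervised backpropagation in a small feedforward network; the 2-3-1 architecture, training data and learning rate are invented for the sketch and are not the paper's network.

```python
import math, random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# 2-3-1 network: random initial weights, zero biases.
W1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(3)]
b1 = [0.0] * 3
W2 = [random.uniform(-0.5, 0.5) for _ in range(3)]
b2 = 0.0

# Toy supervised targets: y = (x1 + x2) / 2.
data = [((0.0, 0.0), 0.0), ((0.0, 1.0), 0.5),
        ((1.0, 0.0), 0.5), ((1.0, 1.0), 1.0)]

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return h, sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)

def loss():  # mean squared error, the objective being minimized
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

lr = 0.5
before = loss()
for _ in range(2000):                # plain stochastic gradient descent
    for x, t in data:
        h, out = forward(x)
        d_out = (out - t) * out * (1 - out)     # delta at the output node
        for j in range(3):
            d_h = d_out * W2[j] * h[j] * (1 - h[j])  # hidden delta
            for i in range(2):
                W1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
            W2[j] -= lr * d_out * h[j]
        b2 -= lr * d_out
print(loss() < before)  # training reduced the error
```

Each weight update steps downhill on the error surface, which is precisely the steepest-descent behavior whose pitfalls (initial weights, learning rate) the next section discusses.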

An important factor in the design of an ANN is that the designer has to specify the number of neurons, their distribution over several layers, and the interconnections between them. Besides this, a good choice of learning parameters and initial weights is required for the training success and speed of the ANN. The following guidelines offer a step-by-step methodology for training a network.

Choosing the number of neurons
The number of hidden neurons affects how well the network is able to separate the data. A large number of hidden neurons will ensure correct learning, and the network will be able to correctly predict the data it has been trained on, but its performance on new data, its ability to generalise, is compromised. With too few hidden neurons, the network may be unable to learn the relationships amongst the data, and the error will fail to fall below an acceptable level. Selection of the number of hidden neurons is thus a crucial decision. Often a trial-and-error approach is taken, starting with a modest number of hidden neurons and gradually increasing the number if the network fails to reduce its error.

Choosing the initial weights
The learning algorithm uses a steepest descent technique, which rolls straight downhill in weight space until the first valley is reached. This valley may not correspond to zero error for the resulting network, which makes the choice of the initial starting point in the multidimensional weight space critical. However, there are no recommended rules for this selection except trying several different starting weight values to see if the network results improve.

Choosing the learning rate
The learning rate effectively controls the size of the step taken in multidimensional weight space when each weight is modified. If the selected learning rate is too large, the local minimum may be constantly overstepped, resulting in oscillations and slow convergence to the lower-error state.
If the learning rate is too low, the number of iterations required may be too large, resulting in slow performance. If the network fails to reduce its error, reducing the learning rate may help improve convergence to a local minimum of the error surface.

Choosing the activation function
The learning signal is a function of the error multiplied by the gradient df/d(net) of the activation function. The larger the value of the gradient, the more weight the learning will receive. For training, the activation function must be monotonically increasing from left to right, differentiable and smooth.

Figure 6. Test Result - Predicted Temperature using ANFIS
Figure 7. Test Result - Predicted Temperature using ANN
Figure 8. Test Result - Leakage current prediction using ANFIS
Figure 9. Test Result - Leakage current prediction using ANN

8. Experimentation Results

The experimental system consists of two stages: network training and performance evaluation. From our circuit analysis, we modeled the stressor-susceptibility interaction as shown in Figure 3. Failure of the system occurs either due to rising crystal temperature or high leakage current. We attempted to predict the component temperature and leakage current for a given voltage and current. Data was generated by simulating circuit failure, and all training data were standardized before training. The input parameters considered are the voltage (V) and current (I); the predicted outputs are the junction temperature and leakage current.

ANFIS training
In the ANFIS network, we used 4 Gaussian membership functions for each input variable, so sixteen rules were learned from the training data. ANFIS was trained using 80% of the data, and the remaining 20% was used for testing and validation. The training was terminated after 20 epochs.
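Both models are scored by root-mean-squared error (RMSE) on the held-out test split. The metric can be sketched as follows; the temperature values below are hypothetical, not the paper's measurements.

```python
import math

# RMSE over paired predictions and ground-truth values.
def rmse(predicted, actual):
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(actual))

# Hypothetical junction temperatures (degrees C), for illustration only.
actual = [70.0, 72.5, 75.0, 78.0]
predicted = [69.5, 73.0, 74.2, 78.4]
print(round(rmse(predicted, actual), 3))
```

A perfect predictor scores 0; larger deviations are penalized quadratically, so a single large miss dominates many small ones.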

ANN training
We used a feedforward neural network with 2 hidden layers in parallel, 2 input neurons corresponding to the input variables and 3 output neurons. Each of the input neurons was directly connected to each of the neurons in the hidden layers as well as to each of the neurons in the output layer. The network was trained using 80% of the data, and the remaining 20% was used for testing and validation. Initial weights, learning rate and momentum used were 0.3, 0. and 0., respectively. The training was terminated after 3500 epochs.

Performance and results achieved
Figures 6 and 7 illustrate the test results for junction temperature prediction using ANFIS and ANN. Similarly, Figures 8 and 9 depict the test results for leakage current prediction using ANFIS and ANN. Table 2 summarizes the comparative performance of ANFIS and ANN.

Table 2. ANFIS and ANN comparative performance

                              ANFIS      ANN
Learning epochs               20         3500
Training time                 10 Sec     58 Sec
Training error (RMSE)
  Junction temperature
  Leakage current
Testing error (RMSE)
  Junction temperature
  Leakage current

For network training, the ANN took more time than ANFIS to achieve the error stated in Table 2. Surprisingly, the error after the first ANFIS epoch was already much lower than the final error of the ANN. From the experimental results, it is clear that ANFIS outperformed the ANN in terms of training time and in achieving a lower error on test data.

9. Conclusions

In this paper we attempted to predict failures of electronic circuits and systems using neuro-fuzzy systems and artificial neural networks. The proposed neuro-fuzzy model outperformed the artificial neural network in terms of training time and prediction error. Compared to a neural network, an important advantage of a neuro-fuzzy system is its ability to explain its reasoning for any particular state.
Moreover, the considered connectionist models are very robust and capable of handling the noisy and approximate data typical of circuits, and should therefore be more reliable under worst-case conditions. Problem modeling using the stressor-susceptibility interaction method can be applied to a wide range of electronic circuits and systems. However, it requires intimate knowledge of the circuit behavior to model the various dependent input parameters and predict the results accurately.

References

[1] Abraham A, Reliability Modeling and Optimization of Electronic Circuits by Stressor Susceptibility Analysis, Master of Science (Computer Control & Automation) Thesis, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, July 1998.
[2] Abraham A and Nath B, Failure Prediction of Critical Electronic Systems in Power Plants Using Artificial Neural Networks, First International Power and Energy Conference, INTPEC99, Australia, November 1999.
[3] Chan H A, A Formulation of Environmental Stress Testing and Screening, Proceedings of the Annual Reliability and Maintainability Symposium (IEEE Reliability Society), pp. 99-104, January 1994.
[4] Arsenault J E and Roberts J A, Reliability and Maintainability of Electronic Systems, Computer Science Press, Maryland, 1980.
[5] Brombacher A C, Reliability by Design, Wiley, 1995.
[6] Zurada J M, Introduction to Artificial Neural Systems, PWS Pub Co, 1992.
[7] Jang J S R, ANFIS: Adaptive Network based Fuzzy Inference Systems, IEEE Transactions on Systems, Man & Cybernetics, 1993.
[8] Sugeno M, Industrial Applications of Fuzzy Control, Elsevier Science Pub Co, 1985.
[9] Cherkassky V, Fuzzy Inference Systems: A Critical Review, in Computational Intelligence: Soft Computing and Fuzzy-Neuro Integration with Applications, Kaynak O, Zadeh L A et al (Eds.), Springer, pp. 77-97, 1998.
[10] Jenson F, Electronic Component Reliability, Wiley, 1995.
[11] Klion J, Practical Electronic Reliability Engineering, VNR Edition, 1992.
[12] Fuqua N B, Reliability Engineering for Electronic Design, Marcel Dekker, 1987.
[13] Abraham A and Nath B, Optimal Design of Neuro-Fuzzy Systems for Intelligent Control, The Sixth International Conference on Control, Automation, Robotics and Vision (ICARCV 2000), December 2000, Singapore.
[14] Zadeh L A, Roles of Soft Computing and Fuzzy Logic in the Conception, Design and Deployment of Information/Intelligent Systems, in Computational Intelligence: Soft Computing and Fuzzy-Neuro Integration with Applications, Kaynak O, Zadeh L A, Turksen B, Rudas I J (Eds.), pp. 1-9, 1998.


More information

Modelling and Optimization of Primary Steam Reformer System (Case Study : the Primary Reformer PT Petrokimia Gresik Indonesia)

Modelling and Optimization of Primary Steam Reformer System (Case Study : the Primary Reformer PT Petrokimia Gresik Indonesia) Modelling and Optimization of Primary Steam Reformer System (Case Study : the Primary Reformer PT Petrokimia Gresik Indonesia) Abstract S.D. Nugrahani, Y. Y. Nazaruddin, E. Ekawati, and S. Nugroho Research

More information

Robust multi objective H2/H Control of nonlinear uncertain systems using multiple linear model and ANFIS

Robust multi objective H2/H Control of nonlinear uncertain systems using multiple linear model and ANFIS Robust multi objective H2/H Control of nonlinear uncertain systems using multiple linear model and ANFIS Vahid Azimi, Member, IEEE, Peyman Akhlaghi, and Mohammad Hossein Kazemi Abstract This paper considers

More information

Neural Networks and Fuzzy Logic Rajendra Dept.of CSE ASCET

Neural Networks and Fuzzy Logic Rajendra Dept.of CSE ASCET Unit-. Definition Neural network is a massively parallel distributed processing system, made of highly inter-connected neural computing elements that have the ability to learn and thereby acquire knowledge

More information

Artificial Neural Network Method of Rock Mass Blastability Classification

Artificial Neural Network Method of Rock Mass Blastability Classification Artificial Neural Network Method of Rock Mass Blastability Classification Jiang Han, Xu Weiya, Xie Shouyi Research Institute of Geotechnical Engineering, Hohai University, Nanjing, Jiangshu, P.R.China

More information

Artificial Neural Network and Fuzzy Logic

Artificial Neural Network and Fuzzy Logic Artificial Neural Network and Fuzzy Logic 1 Syllabus 2 Syllabus 3 Books 1. Artificial Neural Networks by B. Yagnanarayan, PHI - (Cover Topologies part of unit 1 and All part of Unit 2) 2. Neural Networks

More information

FAULT DETECTION AND ISOLATION FOR ROBOT MANIPULATORS USING ANFIS AND WAVELET

FAULT DETECTION AND ISOLATION FOR ROBOT MANIPULATORS USING ANFIS AND WAVELET FAULT DETECTION AND ISOLATION FOR ROBOT MANIPULATORS USING ANFIS AND WAVELET Tolga YÜKSEL, Abdullah SEZGİN, Department of Electrical and Electronics Engineering Ondokuz Mayıs University Kurupelit Campus-Samsun

More information

Algorithms for Increasing of the Effectiveness of the Making Decisions by Intelligent Fuzzy Systems

Algorithms for Increasing of the Effectiveness of the Making Decisions by Intelligent Fuzzy Systems Journal of Electrical Engineering 3 (205) 30-35 doi: 07265/2328-2223/2050005 D DAVID PUBLISHING Algorithms for Increasing of the Effectiveness of the Making Decisions by Intelligent Fuzzy Systems Olga

More information

Address for Correspondence

Address for Correspondence Research Article APPLICATION OF ARTIFICIAL NEURAL NETWORK FOR INTERFERENCE STUDIES OF LOW-RISE BUILDINGS 1 Narayan K*, 2 Gairola A Address for Correspondence 1 Associate Professor, Department of Civil

More information

CS 6501: Deep Learning for Computer Graphics. Basics of Neural Networks. Connelly Barnes

CS 6501: Deep Learning for Computer Graphics. Basics of Neural Networks. Connelly Barnes CS 6501: Deep Learning for Computer Graphics Basics of Neural Networks Connelly Barnes Overview Simple neural networks Perceptron Feedforward neural networks Multilayer perceptron and properties Autoencoders

More information

Artificial Intelligence (AI) Common AI Methods. Training. Signals to Perceptrons. Artificial Neural Networks (ANN) Artificial Intelligence

Artificial Intelligence (AI) Common AI Methods. Training. Signals to Perceptrons. Artificial Neural Networks (ANN) Artificial Intelligence Artificial Intelligence (AI) Artificial Intelligence AI is an attempt to reproduce intelligent reasoning using machines * * H. M. Cartwright, Applications of Artificial Intelligence in Chemistry, 1993,

More information

Estimation of Precipitable Water Vapor Using an Adaptive Neuro-fuzzy Inference System Technique

Estimation of Precipitable Water Vapor Using an Adaptive Neuro-fuzzy Inference System Technique Estimation of Precipitable Water Vapor Using an Adaptive Neuro-fuzzy Inference System Technique Wayan Suparta * and Kemal Maulana Alhasa Institute of Space Science (ANGKASA), Universiti Kebangsaan Malaysia,

More information

Data Quality in Hybrid Neuro-Fuzzy based Soft-Sensor Models: An Experimental Study

Data Quality in Hybrid Neuro-Fuzzy based Soft-Sensor Models: An Experimental Study IAENG International Journal of Computer Science, 37:1, IJCS_37_1_8 Data Quality in Hybrid Neuro-Fuzzy based Soft-Sensor Models: An Experimental Study S. Jassar, Student Member, IEEE, Z. Liao, Member, ASHRAE,

More information

ANFIS Modelling of a Twin Rotor System

ANFIS Modelling of a Twin Rotor System ANFIS Modelling of a Twin Rotor System S. F. Toha*, M. O. Tokhi and Z. Hussain Department of Automatic Control and Systems Engineering University of Sheffield, United Kingdom * (e-mail: cop6sft@sheffield.ac.uk)

More information

A Residual Gradient Fuzzy Reinforcement Learning Algorithm for Differential Games

A Residual Gradient Fuzzy Reinforcement Learning Algorithm for Differential Games International Journal of Fuzzy Systems manuscript (will be inserted by the editor) A Residual Gradient Fuzzy Reinforcement Learning Algorithm for Differential Games Mostafa D Awheda Howard M Schwartz Received:

More information

Lecture 4: Perceptrons and Multilayer Perceptrons

Lecture 4: Perceptrons and Multilayer Perceptrons Lecture 4: Perceptrons and Multilayer Perceptrons Cognitive Systems II - Machine Learning SS 2005 Part I: Basic Approaches of Concept Learning Perceptrons, Artificial Neuronal Networks Lecture 4: Perceptrons

More information

A SEASONAL FUZZY TIME SERIES FORECASTING METHOD BASED ON GUSTAFSON-KESSEL FUZZY CLUSTERING *

A SEASONAL FUZZY TIME SERIES FORECASTING METHOD BASED ON GUSTAFSON-KESSEL FUZZY CLUSTERING * No.2, Vol.1, Winter 2012 2012 Published by JSES. A SEASONAL FUZZY TIME SERIES FORECASTING METHOD BASED ON GUSTAFSON-KESSEL * Faruk ALPASLAN a, Ozge CAGCAG b Abstract Fuzzy time series forecasting methods

More information

8. Lecture Neural Networks

8. Lecture Neural Networks Soft Control (AT 3, RMA) 8. Lecture Neural Networks Learning Process Contents of the 8 th lecture 1. Introduction of Soft Control: Definition and Limitations, Basics of Intelligent" Systems 2. Knowledge

More information

Artifical Neural Networks

Artifical Neural Networks Neural Networks Artifical Neural Networks Neural Networks Biological Neural Networks.................................. Artificial Neural Networks................................... 3 ANN Structure...........................................

More information

4. Multilayer Perceptrons

4. Multilayer Perceptrons 4. Multilayer Perceptrons This is a supervised error-correction learning algorithm. 1 4.1 Introduction A multilayer feedforward network consists of an input layer, one or more hidden layers, and an output

More information

Machine Learning. Neural Networks. (slides from Domingos, Pardo, others)

Machine Learning. Neural Networks. (slides from Domingos, Pardo, others) Machine Learning Neural Networks (slides from Domingos, Pardo, others) For this week, Reading Chapter 4: Neural Networks (Mitchell, 1997) See Canvas For subsequent weeks: Scaling Learning Algorithms toward

More information

Introduction to Neural Networks

Introduction to Neural Networks Introduction to Neural Networks What are (Artificial) Neural Networks? Models of the brain and nervous system Highly parallel Process information much more like the brain than a serial computer Learning

More information

In-Flight Engine Diagnostics and Prognostics Using A Stochastic-Neuro-Fuzzy Inference System

In-Flight Engine Diagnostics and Prognostics Using A Stochastic-Neuro-Fuzzy Inference System In-Flight Engine Diagnostics and Prognostics Using A Stochastic-Neuro-Fuzzy Inference System Dan M. Ghiocel & Joshua Altmann STI Technologies, Rochester, New York, USA Keywords: reliability, stochastic

More information

Rule-Based Fuzzy Model

Rule-Based Fuzzy Model In rule-based fuzzy systems, the relationships between variables are represented by means of fuzzy if then rules of the following general form: Ifantecedent proposition then consequent proposition The

More information

Balancing and Control of a Freely-Swinging Pendulum Using a Model-Free Reinforcement Learning Algorithm

Balancing and Control of a Freely-Swinging Pendulum Using a Model-Free Reinforcement Learning Algorithm Balancing and Control of a Freely-Swinging Pendulum Using a Model-Free Reinforcement Learning Algorithm Michail G. Lagoudakis Department of Computer Science Duke University Durham, NC 2778 mgl@cs.duke.edu

More information

Lecture 7 Artificial neural networks: Supervised learning

Lecture 7 Artificial neural networks: Supervised learning Lecture 7 Artificial neural networks: Supervised learning Introduction, or how the brain works The neuron as a simple computing element The perceptron Multilayer neural networks Accelerated learning in

More information

CSE 352 (AI) LECTURE NOTES Professor Anita Wasilewska. NEURAL NETWORKS Learning

CSE 352 (AI) LECTURE NOTES Professor Anita Wasilewska. NEURAL NETWORKS Learning CSE 352 (AI) LECTURE NOTES Professor Anita Wasilewska NEURAL NETWORKS Learning Neural Networks Classifier Short Presentation INPUT: classification data, i.e. it contains an classification (class) attribute.

More information

Machine Learning. Neural Networks. (slides from Domingos, Pardo, others)

Machine Learning. Neural Networks. (slides from Domingos, Pardo, others) Machine Learning Neural Networks (slides from Domingos, Pardo, others) For this week, Reading Chapter 4: Neural Networks (Mitchell, 1997) See Canvas For subsequent weeks: Scaling Learning Algorithms toward

More information

A TSK-Type Quantum Neural Fuzzy Network for Temperature Control

A TSK-Type Quantum Neural Fuzzy Network for Temperature Control International Mathematical Forum, 1, 2006, no. 18, 853-866 A TSK-Type Quantum Neural Fuzzy Network for Temperature Control Cheng-Jian Lin 1 Dept. of Computer Science and Information Engineering Chaoyang

More information

COMP-4360 Machine Learning Neural Networks

COMP-4360 Machine Learning Neural Networks COMP-4360 Machine Learning Neural Networks Jacky Baltes Autonomous Agents Lab University of Manitoba Winnipeg, Canada R3T 2N2 Email: jacky@cs.umanitoba.ca WWW: http://www.cs.umanitoba.ca/~jacky http://aalab.cs.umanitoba.ca

More information

Neural Networks for Protein Structure Prediction Brown, JMB CS 466 Saurabh Sinha

Neural Networks for Protein Structure Prediction Brown, JMB CS 466 Saurabh Sinha Neural Networks for Protein Structure Prediction Brown, JMB 1999 CS 466 Saurabh Sinha Outline Goal is to predict secondary structure of a protein from its sequence Artificial Neural Network used for this

More information

Robust Pareto Design of GMDH-type Neural Networks for Systems with Probabilistic Uncertainties

Robust Pareto Design of GMDH-type Neural Networks for Systems with Probabilistic Uncertainties . Hybrid GMDH-type algorithms and neural networks Robust Pareto Design of GMDH-type eural etworks for Systems with Probabilistic Uncertainties. ariman-zadeh, F. Kalantary, A. Jamali, F. Ebrahimi Department

More information

CHAPTER V TYPE 2 FUZZY LOGIC CONTROLLERS

CHAPTER V TYPE 2 FUZZY LOGIC CONTROLLERS CHAPTER V TYPE 2 FUZZY LOGIC CONTROLLERS In the last chapter fuzzy logic controller and ABC based fuzzy controller are implemented for nonlinear model of Inverted Pendulum. Fuzzy logic deals with imprecision,

More information

Deep Learning & Artificial Intelligence WS 2018/2019

Deep Learning & Artificial Intelligence WS 2018/2019 Deep Learning & Artificial Intelligence WS 2018/2019 Linear Regression Model Model Error Function: Squared Error Has no special meaning except it makes gradients look nicer Prediction Ground truth / target

More information

Unit 8: Introduction to neural networks. Perceptrons

Unit 8: Introduction to neural networks. Perceptrons Unit 8: Introduction to neural networks. Perceptrons D. Balbontín Noval F. J. Martín Mateos J. L. Ruiz Reina A. Riscos Núñez Departamento de Ciencias de la Computación e Inteligencia Artificial Universidad

More information

A New Model Reference Adaptive Formulation to Estimate Stator Resistance in Field Oriented Induction Motor Drive

A New Model Reference Adaptive Formulation to Estimate Stator Resistance in Field Oriented Induction Motor Drive A New Model Reference Adaptive Formulation to Estimate Stator Resistance in Field Oriented Induction Motor Drive Saptarshi Basak 1, Chandan Chakraborty 1, Senior Member IEEE and Yoichi Hori 2, Fellow IEEE

More information

Machine Learning. Neural Networks. (slides from Domingos, Pardo, others)

Machine Learning. Neural Networks. (slides from Domingos, Pardo, others) Machine Learning Neural Networks (slides from Domingos, Pardo, others) Human Brain Neurons Input-Output Transformation Input Spikes Output Spike Spike (= a brief pulse) (Excitatory Post-Synaptic Potential)

More information

NEURO-FUZZY SYSTEM BASED ON GENETIC ALGORITHM FOR ISOTHERMAL CVI PROCESS FOR CARBON/CARBON COMPOSITES

NEURO-FUZZY SYSTEM BASED ON GENETIC ALGORITHM FOR ISOTHERMAL CVI PROCESS FOR CARBON/CARBON COMPOSITES NEURO-FUZZY SYSTEM BASED ON GENETIC ALGORITHM FOR ISOTHERMAL CVI PROCESS FOR CARBON/CARBON COMPOSITES Zhengbin Gu, Hejun Li, Hui Xue, Aijun Li, Kezhi Li College of Materials Science and Engineering, Northwestern

More information

Identification and Control of Nonlinear Systems using Soft Computing Techniques

Identification and Control of Nonlinear Systems using Soft Computing Techniques International Journal of Modeling and Optimization, Vol. 1, No. 1, April 011 Identification and Control of Nonlinear Systems using Soft Computing Techniques Wahida Banu.R.S.D, ShakilaBanu.A and Manoj.D

More information

Neural Networks DWML, /25

Neural Networks DWML, /25 DWML, 2007 /25 Neural networks: Biological and artificial Consider humans: Neuron switching time 0.00 second Number of neurons 0 0 Connections per neuron 0 4-0 5 Scene recognition time 0. sec 00 inference

More information

Enhancing a Model-Free Adaptive Controller through Evolutionary Computation

Enhancing a Model-Free Adaptive Controller through Evolutionary Computation Enhancing a Model-Free Adaptive Controller through Evolutionary Computation Anthony Clark, Philip McKinley, and Xiaobo Tan Michigan State University, East Lansing, USA Aquatic Robots Practical uses autonomous

More information

Computational Intelligence Lecture 20:Neuro-Fuzzy Systems

Computational Intelligence Lecture 20:Neuro-Fuzzy Systems Computational Intelligence Lecture 20:Neuro-Fuzzy Systems Farzaneh Abdollahi Department of Electrical Engineering Amirkabir University of Technology Fall 2011 Farzaneh Abdollahi Computational Intelligence

More information

SELF-EVOLVING TAKAGI- SUGENO-KANG FUZZY NEURAL NETWORK

SELF-EVOLVING TAKAGI- SUGENO-KANG FUZZY NEURAL NETWORK SELF-EVOLVING TAKAGI- SUGENO-KANG FUZZY NEURAL NETWORK Nguyen Ngoc Nam Department of Computer Engineering Nanyang Technological University A thesis submitted to the Nanyang Technological University in

More information

A Neuro-Fuzzy Scheme for Integrated Input Fuzzy Set Selection and Optimal Fuzzy Rule Generation for Classification

A Neuro-Fuzzy Scheme for Integrated Input Fuzzy Set Selection and Optimal Fuzzy Rule Generation for Classification A Neuro-Fuzzy Scheme for Integrated Input Fuzzy Set Selection and Optimal Fuzzy Rule Generation for Classification Santanu Sen 1 and Tandra Pal 2 1 Tejas Networks India Ltd., Bangalore - 560078, India

More information

Artificial Neural Networks Examination, June 2004

Artificial Neural Networks Examination, June 2004 Artificial Neural Networks Examination, June 2004 Instructions There are SIXTY questions (worth up to 60 marks). The exam mark (maximum 60) will be added to the mark obtained in the laborations (maximum

More information

Using a Hopfield Network: A Nuts and Bolts Approach

Using a Hopfield Network: A Nuts and Bolts Approach Using a Hopfield Network: A Nuts and Bolts Approach November 4, 2013 Gershon Wolfe, Ph.D. Hopfield Model as Applied to Classification Hopfield network Training the network Updating nodes Sequencing of

More information

Deep unsupervised learning

Deep unsupervised learning Deep unsupervised learning Advanced data-mining Yongdai Kim Department of Statistics, Seoul National University, South Korea Unsupervised learning In machine learning, there are 3 kinds of learning paradigm.

More information

Supervised Learning in Neural Networks

Supervised Learning in Neural Networks The Norwegian University of Science and Technology (NTNU Trondheim, Norway keithd@idi.ntnu.no March 7, 2011 Supervised Learning Constant feedback from an instructor, indicating not only right/wrong, but

More information

AI Programming CS F-20 Neural Networks

AI Programming CS F-20 Neural Networks AI Programming CS662-2008F-20 Neural Networks David Galles Department of Computer Science University of San Francisco 20-0: Symbolic AI Most of this class has been focused on Symbolic AI Focus or symbols

More information

Artificial Neural Networks

Artificial Neural Networks Artificial Neural Networks Threshold units Gradient descent Multilayer networks Backpropagation Hidden layer representations Example: Face Recognition Advanced topics 1 Connectionist Models Consider humans:

More information

1. Introduction. 2. Artificial Neural Networks and Fuzzy Time Series

1. Introduction. 2. Artificial Neural Networks and Fuzzy Time Series 382 IJCSNS International Journal of Computer Science and Network Security, VOL.8 No.9, September 2008 A Comparative Study of Neural-Network & Fuzzy Time Series Forecasting Techniques Case Study: Wheat

More information

Chapter 9: The Perceptron

Chapter 9: The Perceptron Chapter 9: The Perceptron 9.1 INTRODUCTION At this point in the book, we have completed all of the exercises that we are going to do with the James program. These exercises have shown that distributed

More information

This document is downloaded from DR-NTU, Nanyang Technological University Library, Singapore.

This document is downloaded from DR-NTU, Nanyang Technological University Library, Singapore. This document is downloaded from DR-NTU, Nanyang Technological University Library, Singapore. Title An evolving interval type-2 fuzzy inference system for renewable energy prediction intervals( Phd Thesis

More information

( t) Identification and Control of a Nonlinear Bioreactor Plant Using Classical and Dynamical Neural Networks

( t) Identification and Control of a Nonlinear Bioreactor Plant Using Classical and Dynamical Neural Networks Identification and Control of a Nonlinear Bioreactor Plant Using Classical and Dynamical Neural Networks Mehmet Önder Efe Electrical and Electronics Engineering Boðaziçi University, Bebek 80815, Istanbul,

More information

CS 229 Project Final Report: Reinforcement Learning for Neural Network Architecture Category : Theory & Reinforcement Learning

CS 229 Project Final Report: Reinforcement Learning for Neural Network Architecture Category : Theory & Reinforcement Learning CS 229 Project Final Report: Reinforcement Learning for Neural Network Architecture Category : Theory & Reinforcement Learning Lei Lei Ruoxuan Xiong December 16, 2017 1 Introduction Deep Neural Network

More information

Weight Initialization Methods for Multilayer Feedforward. 1

Weight Initialization Methods for Multilayer Feedforward. 1 Weight Initialization Methods for Multilayer Feedforward. 1 Mercedes Fernández-Redondo - Carlos Hernández-Espinosa. Universidad Jaume I, Campus de Riu Sec, Edificio TI, Departamento de Informática, 12080

More information

Nonlinear System Identification Based on a Novel Adaptive Fuzzy Wavelet Neural Network

Nonlinear System Identification Based on a Novel Adaptive Fuzzy Wavelet Neural Network Nonlinear System Identification Based on a Novel Adaptive Fuzzy Wavelet Neural Network Maryam Salimifard*, Ali Akbar Safavi** *School of Electrical Engineering, Amirkabir University of Technology, Tehran,

More information

Introduction to Natural Computation. Lecture 9. Multilayer Perceptrons and Backpropagation. Peter Lewis

Introduction to Natural Computation. Lecture 9. Multilayer Perceptrons and Backpropagation. Peter Lewis Introduction to Natural Computation Lecture 9 Multilayer Perceptrons and Backpropagation Peter Lewis 1 / 25 Overview of the Lecture Why multilayer perceptrons? Some applications of multilayer perceptrons.

More information

Artificial Neural Networks Examination, March 2004

Artificial Neural Networks Examination, March 2004 Artificial Neural Networks Examination, March 2004 Instructions There are SIXTY questions (worth up to 60 marks). The exam mark (maximum 60) will be added to the mark obtained in the laborations (maximum

More information

Lecture 4: Feed Forward Neural Networks

Lecture 4: Feed Forward Neural Networks Lecture 4: Feed Forward Neural Networks Dr. Roman V Belavkin Middlesex University BIS4435 Biological neurons and the brain A Model of A Single Neuron Neurons as data-driven models Neural Networks Training

More information

On the convergence speed of artificial neural networks in the solving of linear systems

On the convergence speed of artificial neural networks in the solving of linear systems Available online at http://ijimsrbiauacir/ Int J Industrial Mathematics (ISSN 8-56) Vol 7, No, 5 Article ID IJIM-479, 9 pages Research Article On the convergence speed of artificial neural networks in

More information

A NEURAL FUZZY APPROACH TO MODELING THE THERMAL BEHAVIOR OF POWER TRANSFORMERS

A NEURAL FUZZY APPROACH TO MODELING THE THERMAL BEHAVIOR OF POWER TRANSFORMERS A NEURAL FUZZY APPROACH TO MODELING THE THERMAL BEHAVIOR OF POWER TRANSFORMERS Huy Huynh Nguyen A thesis submitted for the degree of Master of Engineering School of Electrical Engineering Faculty of Health,

More information

Fuzzy Systems. Introduction

Fuzzy Systems. Introduction Fuzzy Systems Introduction Prof. Dr. Rudolf Kruse Christoph Doell {kruse,doell}@iws.cs.uni-magdeburg.de Otto-von-Guericke University of Magdeburg Faculty of Computer Science Department of Knowledge Processing

More information

2015 Todd Neller. A.I.M.A. text figures 1995 Prentice Hall. Used by permission. Neural Networks. Todd W. Neller

2015 Todd Neller. A.I.M.A. text figures 1995 Prentice Hall. Used by permission. Neural Networks. Todd W. Neller 2015 Todd Neller. A.I.M.A. text figures 1995 Prentice Hall. Used by permission. Neural Networks Todd W. Neller Machine Learning Learning is such an important part of what we consider "intelligence" that

More information

ESANN'2001 proceedings - European Symposium on Artificial Neural Networks Bruges (Belgium), April 2001, D-Facto public., ISBN ,

ESANN'2001 proceedings - European Symposium on Artificial Neural Networks Bruges (Belgium), April 2001, D-Facto public., ISBN , Relevance determination in learning vector quantization Thorsten Bojer, Barbara Hammer, Daniel Schunk, and Katharina Tluk von Toschanowitz University of Osnabrück, Department of Mathematics/ Computer Science,

More information

FERROMAGNETIC HYSTERESIS MODELLING WITH ADAPTIVE NEURO FUZZY INFERENCE SYSTEM

FERROMAGNETIC HYSTERESIS MODELLING WITH ADAPTIVE NEURO FUZZY INFERENCE SYSTEM Journal of ELECTRICAL ENGINEERING, VOL. 59, NO. 5, 2008, 260 265 FERROMAGNETIC HYSTERESIS MODELLING WITH ADAPTIVE NEURO FUZZY INFERENCE SYSTEM Mourad Mordjaoui Mabrouk Chabane Bouzid Boudjema Hysteresis

More information

Artificial Neural Networks. Edward Gatt

Artificial Neural Networks. Edward Gatt Artificial Neural Networks Edward Gatt What are Neural Networks? Models of the brain and nervous system Highly parallel Process information much more like the brain than a serial computer Learning Very

More information

(Feed-Forward) Neural Networks Dr. Hajira Jabeen, Prof. Jens Lehmann

(Feed-Forward) Neural Networks Dr. Hajira Jabeen, Prof. Jens Lehmann (Feed-Forward) Neural Networks 2016-12-06 Dr. Hajira Jabeen, Prof. Jens Lehmann Outline In the previous lectures we have learned about tensors and factorization methods. RESCAL is a bilinear model for

More information

Learning Tetris. 1 Tetris. February 3, 2009

Learning Tetris. 1 Tetris. February 3, 2009 Learning Tetris Matt Zucker Andrew Maas February 3, 2009 1 Tetris The Tetris game has been used as a benchmark for Machine Learning tasks because its large state space (over 2 200 cell configurations are

More information

Application of Artificial Neural Networks in Evaluation and Identification of Electrical Loss in Transformers According to the Energy Consumption

Application of Artificial Neural Networks in Evaluation and Identification of Electrical Loss in Transformers According to the Energy Consumption Application of Artificial Neural Networks in Evaluation and Identification of Electrical Loss in Transformers According to the Energy Consumption ANDRÉ NUNES DE SOUZA, JOSÉ ALFREDO C. ULSON, IVAN NUNES

More information

Sections 18.6 and 18.7 Analysis of Artificial Neural Networks

Sections 18.6 and 18.7 Analysis of Artificial Neural Networks Sections 18.6 and 18.7 Analysis of Artificial Neural Networks CS4811 - Artificial Intelligence Nilufer Onder Department of Computer Science Michigan Technological University Outline Univariate regression

More information

Using Fuzzy Logic Methods for Carbon Dioxide Control in Carbonated Beverages

Using Fuzzy Logic Methods for Carbon Dioxide Control in Carbonated Beverages International Journal of Electrical & Computer Sciences IJECS-IJENS Vol: 11 No: 03 98 Using Fuzzy Logic Methods for Carbon Dioxide Control in Carbonated Beverages İman Askerbeyli 1 and Juneed S.Abduljabar

More information

Linear discriminant functions

Linear discriminant functions Andrea Passerini passerini@disi.unitn.it Machine Learning Discriminative learning Discriminative vs generative Generative learning assumes knowledge of the distribution governing the data Discriminative

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence Jeff Clune Assistant Professor Evolving Artificial Intelligence Laboratory Announcements Be making progress on your projects! Three Types of Learning Unsupervised Supervised Reinforcement

More information

Parameter estimation by anfis where dependent variable has outlier

Parameter estimation by anfis where dependent variable has outlier Hacettepe Journal of Mathematics and Statistics Volume 43 (2) (2014), 309 322 Parameter estimation by anfis where dependent variable has outlier Türkan Erbay Dalkılıç a, Kamile Şanlı Kula b, and Ayşen

More information

Modeling of Hysteresis Effect of SMA using Neuro Fuzzy Inference System

Modeling of Hysteresis Effect of SMA using Neuro Fuzzy Inference System 54th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference April 8-11, 2013, Boston, Massachusetts AIAA 2013-1918 Modeling of Hysteresis Effect of SMA using Neuro Fuzzy Inference

More information

Introduction Biologically Motivated Crude Model Backpropagation

Introduction Biologically Motivated Crude Model Backpropagation Introduction Biologically Motivated Crude Model Backpropagation 1 McCulloch-Pitts Neurons In 1943 Warren S. McCulloch, a neuroscientist, and Walter Pitts, a logician, published A logical calculus of the

More information

Neural Networks and the Back-propagation Algorithm

Neural Networks and the Back-propagation Algorithm Neural Networks and the Back-propagation Algorithm Francisco S. Melo In these notes, we provide a brief overview of the main concepts concerning neural networks and the back-propagation algorithm. We closely

More information

Unsupervised Discovery of Nonlinear Structure Using Contrastive Backpropagation

Unsupervised Discovery of Nonlinear Structure Using Contrastive Backpropagation Cognitive Science 30 (2006) 725 731 Copyright 2006 Cognitive Science Society, Inc. All rights reserved. Unsupervised Discovery of Nonlinear Structure Using Contrastive Backpropagation Geoffrey Hinton,

More information

A SELF-TUNING KALMAN FILTER FOR AUTONOMOUS SPACECRAFT NAVIGATION

A SELF-TUNING KALMAN FILTER FOR AUTONOMOUS SPACECRAFT NAVIGATION A SELF-TUNING KALMAN FILTER FOR AUTONOMOUS SPACECRAFT NAVIGATION Son H. Truong National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) Greenbelt, Maryland, USA 2771 E-mail:

More information

Administration. Registration Hw3 is out. Lecture Captioning (Extra-Credit) Scribing lectures. Questions. Due on Thursday 10/6

Administration. Registration Hw3 is out. Lecture Captioning (Extra-Credit) Scribing lectures. Questions. Due on Thursday 10/6 Administration Registration Hw3 is out Due on Thursday 10/6 Questions Lecture Captioning (Extra-Credit) Look at Piazza for details Scribing lectures With pay; come talk to me/send email. 1 Projects Projects

More information

Automatic Differentiation Equipped Variable Elimination for Sensitivity Analysis on Probabilistic Inference Queries

Automatic Differentiation Equipped Variable Elimination for Sensitivity Analysis on Probabilistic Inference Queries Automatic Differentiation Equipped Variable Elimination for Sensitivity Analysis on Probabilistic Inference Queries Anonymous Author(s) Affiliation Address email Abstract 1 2 3 4 5 6 7 8 9 10 11 12 Probabilistic

More information

Cascade Neural Networks with Node-Decoupled Extended Kalman Filtering

Cascade Neural Networks with Node-Decoupled Extended Kalman Filtering Cascade Neural Networks with Node-Decoupled Extended Kalman Filtering Michael C. Nechyba and Yangsheng Xu The Robotics Institute Carnegie Mellon University Pittsburgh, PA 15213 Abstract Most neural networks

More information

Comparing Robustness of Pairwise and Multiclass Neural-Network Systems for Face Recognition

Comparing Robustness of Pairwise and Multiclass Neural-Network Systems for Face Recognition Comparing Robustness of Pairwise and Multiclass Neural-Network Systems for Face Recognition J. Uglov, V. Schetinin, C. Maple Computing and Information System Department, University of Bedfordshire, Luton,

More information