STRUCTURED NEURAL NETWORK FOR NONLINEAR DYNAMIC SYSTEMS MODELING
J. CODINA, R. VILLÀ and J.M. FUERTES

UPC - Facultat d'Informàtica de Barcelona, Department of Automatic Control and Computer Engineering, Pau Gargallo 5, Barcelona, Catalonia

Abstract. The use of artificial neural networks (ANN) for nonlinear system modeling is a field where much theoretical work remains to be done. A structured ANN which obtains neural models of nonlinear systems is presented. These neural models are based on Fourier series. To check the goodness of the method, conventional difference equations are re-modeled via ANN and their respective input/output behaviors compared, as are their Fourier series expansions. Since the Fourier coefficients are optimal under series truncation, this allows the goodness of the obtained models to be estimated. Preliminary tests give encouraging results.

Key Words. Neural nets; nonlinear systems; modeling; control systems; state-space methods; Fourier analysis.

1. INTRODUCTION

Artificial neural networks (ANN) can be used, and are used, to model the dynamics of a system or plant (Yamada and Yabuta, 1993). Usually the ANN and the plant are fed with the same input signal (or input signals, for multi-input systems) in order to train the neural network to model the plant behavior. The network output is then compared with the output from the plant, and the error is used to update the weights of the synapses (see Fig. 1).

A simple feed-forward neural network can be used (Narendra and Parthasarathy, 1990; Wu et al., 1992) if the ANN is fed with the current input, the previous inputs and the previous outputs. Using this configuration an input-output model of the plant is obtained, instead of a state-space model. The advantage of this model is that a series-parallel model can be used while training the network. In a series-parallel model (Fig. 2) the previous outputs from the plant are used as inputs to the ANN instead of the outputs from the neural network itself. Once the model is obtained, these inputs can be taken from the previous outputs of the ANN itself, obtaining an autonomous system. A sketch of one series-parallel training step is given below.

Fig. 1. ANN training structure for system modeling.

Fig. 2. Series-parallel training model.
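As an illustration of the series-parallel scheme, the following is a minimal sketch in Python/NumPy (a generic one-hidden-layer network assumed for illustration, not the authors' implementation): the regressor is built from the plant's inputs and past plant outputs, and the prediction error drives a plain backpropagation update.

import numpy as np

rng = np.random.default_rng(0)
NU, NY = 2, 2                               # input and output delays
W1 = rng.normal(scale=0.1, size=(8, NU + 1 + NY))
W2 = rng.normal(scale=0.1, size=8)
LR = 0.01

def narx_regressor(u, y, k):
    """Series-parallel input vector: current/past plant inputs
    plus past *plant* outputs (not the network's own outputs)."""
    past_u = [u[k - i] if k - i >= 0 else 0.0 for i in range(NU + 1)]
    past_y = [y[k - i] if k - i >= 0 else 0.0 for i in range(1, NY + 1)]
    return np.array(past_u + past_y)

def train_step(u, y, k):
    global W1, W2
    x = narx_regressor(u, y, k)
    h = np.tanh(W1 @ x)                     # hidden layer
    e = y[k] - W2 @ h                       # error against plant output
    W1 += LR * e * np.outer(W2 * (1.0 - h ** 2), x)   # hidden-layer update
    W2 += LR * e * h                        # output-layer update
    return e

Once trained, turning the model into an autonomous (parallel) simulator only requires replacing the past plant outputs y[k-i] in the regressor with the network's own past predictions.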
Using the series-parallel model configuration, the obtained model has the following equation:

y(k) = F(u(k), u(k-1), u(k-2), ..., y(k-1), y(k-2), ...)    (1)

Different numbers of layers, activation functions and adaptation algorithms can be used; Qin et al. (1992) compare some configurations.

To obtain a state-space model of a plant, a feed-forward ANN can also be used, assuming that the system state is accessible at any moment (Nguyen and Widrow, 1990; Anderson, 1989). The resulting model is then a state-space representation of the system dynamics:

[y(k), x(k+1)] = F(u(k), x(k))    (2)

In both cases the learning method is static, and usually the backpropagation algorithm is used.

A dynamic discrete-time ANN can also be used. This is the case of the Jordan (1986) or Elman (1988) models, where the outputs from the network or from the hidden layer, respectively, are fed back as new inputs to the network (Fig. 3).

Fig. 3. a) Jordan network. b) Elman network.

The Jordan network is very similar to the series-parallel model, but it needs as many outputs as the system order. The Elman structure has some problems learning systems where there is no direct connection between the input and the output, as happens when there are physical delays.

After the ANN training, we obtain a black box which acts as a system simulator. This neural network is often obtained as a first step in the design of neural controllers. There are difficulties in studying the properties of the plant from such a model: the kind of activation functions, the number of layers, and the topology of the network are more a hindrance than a help for this study.

The ability to extract useful information about the real system from the simulated neural network model is of primary interest. Although the existence of a neural network structure that allows the extraction of a set of equations is useful for the study of the system properties, it doesn't relate equations to physical phenomena. A neural network with a structure inspired by the state-space representation of a system could allow the application of modern control theory to SISO and MIMO systems.

2. STATE SPACE MODEL

Beginning with linear discrete-time systems, we first obtained a neural network (Codina et al., 1992) able to learn the matrices of the system representation from the input-output properties (Fig. 4). This model is a neural network with three layers: the input, the state and the output layers. The state and the output layers are connected to the input layer, which is composed of the actual inputs and states of the system.

Fig. 4. Proposed linear structure.

To train the network, the backpropagation-through-time learning algorithm has been used (Werbos, 1990), where the error is back-propagated from the output, through the present state, to the previous states. The weight updating is done in batch mode; that is, the weights are changed after a number N of input-output pairs have been processed. The number N is taken to be equal to the length of the input signal. With this methodology, and using linear neurons, we can obtain from the synapse weights the matrices A, B, C and D of the state-space representation of the system:

x(k+1) = A·x(k) + B·u(k)
y(k)   = C·x(k) + D·u(k)    (3)
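As a concrete rendering of this procedure, here is a sketch (an assumed NumPy reimplementation, not the authors' code) of batch backpropagation-through-time for Eq. (3), where the synapse weights being learned are directly the matrices A, B, C and D, and the weights are updated once per pass over the whole training signal.

import numpy as np

def bptt_linear(u, y_target, n_states, lr=1e-3, epochs=100, seed=0):
    """Batch BPTT for x(k+1)=Ax(k)+Bu(k), y(k)=Cx(k)+Du(k)."""
    rng = np.random.default_rng(seed)
    N = len(u)
    A = rng.normal(scale=0.1, size=(n_states, n_states))
    B = rng.normal(scale=0.1, size=(n_states, 1))
    C = rng.normal(scale=0.1, size=(1, n_states))
    D = rng.normal(scale=0.1, size=(1, 1))
    for _ in range(epochs):
        # forward pass through the whole training signal
        xs = [np.zeros((n_states, 1))]
        es = []
        for k in range(N):
            y_hat = C @ xs[k] + D * u[k]
            es.append(y_hat.item() - y_target[k])
            xs.append(A @ xs[k] + B * u[k])
        # backward pass: the error flows through the present state
        # to the previous states
        dA = np.zeros_like(A); dB = np.zeros_like(B)
        dC = np.zeros_like(C); dD = np.zeros_like(D)
        lam = np.zeros((n_states, 1))          # dLoss/dx(k+1)
        for k in reversed(range(N)):
            dC += es[k] * xs[k].T
            dD += es[k] * u[k]
            dA += lam @ xs[k].T
            dB += lam * u[k]
            lam = C.T * es[k] + A.T @ lam      # error reaching x(k)
        # batch mode: one update per pass over the signal
        for M, dM in ((A, dA), (B, dB), (C, dC), (D, dD)):
            M -= lr * dM / N
    return A, B, C, D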
As the main interest of the application of neural networks to dynamic systems is their ability to learn and map nonlinear functions, we have expanded our previous model to deal with nonlinear systems. Expanding the linear model to nonlinear systems using sigmoids would hinder further study of the obtained model. To solve this drawback, we used a structured neural network based on Fourier series in order to approximate any nonlinear function in a bounded interval (Fig. 5).
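A sketch of such a Fourier-structured layer follows (a hypothetical single-input rendering; in the paper the sinusoid arguments combine state and input, as in Eq. (5) below): the sine/cosine neurons have fixed frequencies, and only the output weights, i.e. the Fourier coefficients, are trained.

import numpy as np

class FourierLayer:
    """Fixed sine/cosine neurons at harmonics n*w0 over a bounded
    interval; only the coefficients A_n, B_n are trainable."""
    def __init__(self, n_harmonics, w0):
        self.n = np.arange(n_harmonics + 1)      # harmonics 0..N
        self.w0 = w0                             # base frequency
        self.A = np.zeros(n_harmonics + 1)       # cosine coefficients
        self.B = np.zeros(n_harmonics + 1)       # sine coefficients

    def forward(self, z):
        phase = self.n * self.w0 * z
        return self.A @ np.cos(phase) + self.B @ np.sin(phase)

    def grad_step(self, z, err, lr):
        # the output is linear in A and B, so the gradient is the
        # basis response itself
        phase = self.n * self.w0 * z
        self.A += lr * err * np.cos(phase)
        self.B += lr * err * np.sin(phase)

    def add_harmonic(self):
        # orthogonal basis: a new term starts at zero without
        # disturbing the coefficients already learned
        self.n = np.append(self.n, self.n[-1] + 1)
        self.A = np.append(self.A, 0.0)
        self.B = np.append(self.B, 0.0)

The add_harmonic method reflects the scalability property discussed below: because the basis is orthogonal, new coefficients can be appended without retraining the existing ones from scratch.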
A nonlinear discrete-time system can be modeled by the following difference equations:

x(k+1) = F(x(k), u(k))
y(k)   = G(x(k), u(k))    (4)

Fig. 5. Proposed nonlinear structure.

The Fourier series expansion allows us to express F and G as a weighted sum of sinusoidal functions in a bounded interval:

x~(k+1) = Σ_{n=0..N} Σ_{m=0..M} [ A~F,n,m · cos(n·ωn·x~(k) + m·ωm·u(k)) + B~F,n,m · sin(n·ωn·x~(k) + m·ωm·u(k)) ]
y~(k)   = Σ_{n=0..N} Σ_{m=0..M} [ A~G,n,m · cos(n·ωn·x(k) + m·ωm·u(k)) + B~G,n,m · sin(n·ωn·x(k) + m·ωm·u(k)) ]    (5)

Such series have the advantage that they form an orthogonal basis of functions and, in particular, any coefficient can be added without changing the previous ones. So the network is scalable, in the sense that new sinusoidal elements can be added while keeping the previously learned structure and weights.

This model has similarities with a functional-link net (Pao, 1989), but neurons with fixed weights are used in order to allow the application of the backpropagation-through-time algorithm. If the weights of the sine and cosine neurons are fixed to be n·ω0, this training method can be compared with the direct calculation of the Fourier coefficients (Codina et al., 1994). In other words, the ANN is calculating the discrete Fourier transform of the nonlinear transfer function in a bounded interval. To improve the results, the number of points should be at least the number of Fourier coefficients being calculated, to avoid multiple solutions, and the points should come from a regular sampling of the state space.

In order to obtain the linear part of the system and to minimize the Gibbs phenomenon, we have included a linear part in the Fourier expansion, by means of the c coefficients. This gives a hybrid expansion:

x~(k+1) = c1·x(k) + c2·u(k) + Σ_{n=0..N} Σ_{m=0..M} [ A~F,n,m · cos(n·ωn·x~(k) + m·ωm·u(k)) + B~F,n,m · sin(n·ωn·x~(k) + m·ωm·u(k)) ]
y~(k)   = c1·x(k) + c2·u(k) + Σ_{n=0..N} Σ_{m=0..M} [ A~G,n,m · cos(n·ωn·x(k) + m·ωm·u(k)) + B~G,n,m · sin(n·ωn·x(k) + m·ωm·u(k)) ]    (6)

Once the linear part of the model has been approximated, the network is expanded to include the nonlinear terms. This expansion causes only a small change to the linear coefficients.

3. EXAMPLE

To test the ANN structure, a discrete-time nonlinear system has been simulated:

x(k+1) = 3·x(k)·u(k) / (2·(4 + u²(k)))
y(k)   = (2/π)·arctan(x(k)) + log(u(k) + 3) + 1    (7)

The learning procedure cannot use a series-parallel model, where the previous outputs of the real system are used instead of those from the simulated system. This approach is not feasible with a state-space structure, because neither the states of the real system nor the internal representation adopted by the ANN are known. Using the parallel model increases the number of training examples needed, and training is highly sensitive to initial conditions.

The methodology consists of the following steps:

1) A first approximation is obtained by a linear network, with a fixed training signal.
2) The model is then expanded using the Fourier terms, and the network is trained to learn the coefficients for the same training signal. This step is repeated until the mean square error (MSE) decreases to a fixed order.
3) The ANN is trained with different signals in order to make the system evolve through the whole working region of the state space.
4) If the number of Fourier coefficients is too small, or the training signal in step 2 was not rich enough, then the third step results in important differences in the MSE between signals. If this happens, return to step 2 with a new training signal (the whole procedure is sketched in code below).
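A rough driver for this four-step procedure might look as follows (hypothetical code: the model object with step/reset_state/add_harmonic and the train callable are stand-in interfaces, not names from the paper).

import numpy as np

def mse(model, u, y):
    """Free-run (parallel) simulation error over one signal."""
    y_hat = np.array([model.step(uk) for uk in u])
    model.reset_state()
    return float(np.mean((y_hat - np.asarray(y)) ** 2))

def fit_structured_model(model, train, signals, tol=1e-4, max_harmonics=8):
    u0, y0 = signals[0]
    train(model, u0, y0, linear_only=True)              # step 1: linear fit
    for _ in range(max_harmonics):
        model.add_harmonic()                            # step 2: expand
        train(model, u0, y0)
        if mse(model, u0, y0) < tol:
            break
    base = mse(model, u0, y0)
    for u, y in signals[1:]:                            # step 3: other signals
        train(model, u, y)
        if mse(model, u, y) > 10 * base:                # step 4: signal not rich enough
            return fit_structured_model(model, train, signals[1:], tol, max_harmonics)
    return model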
In our example we used as input a signal made up of 100 samples of a random step signal (Fig. 6), from which we obtain the desired output. As we use batching, the weight increments were calculated for each input-output pair but only applied at the end of the input signal processing.

Fig. 6. Input signal used during training.

The first approximation was made linear. After presenting the training signal to the ANN one hundred times, the linear model was considered correct. Different training signals will produce different linear models, depending on which areas of the state space the system is evolving in.

After the linear approximation was obtained, the ANN was expanded with successive Fourier coefficients and trained, for each one, one hundred times. The MSE curve (Fig. 7) reflects this procedure, and shows a fast decrease of the error whenever a new frequency element is added.

Fig. 7. Error curve: logarithm of the mean square error obtained during training. Every hundred trainings a new frequency element is added.

The difference between the desired output and the one obtained from the network is imperceptible, for the training signal, when using eight Fourier terms (Fig. 8).

Fig. 8. Plant output (continuous line) and ANN output (dashed line) with the training signal as input.

The test signal used for the example was

u(k) = 0.5·sin(ω1·k)·sin(ω2·k)    (8)

When the test signal is presented to the resulting ANN, it can happen that this new signal makes the system evolve through areas of the state space not reached when the training signal was used. As those areas could not be learned during the training phase, the result is an increment of the error. When the test signal is presented to the ANN the MSE increases, but only by a small amount, as can be seen in Fig. 9.

Fig. 9. Plant and ANN output with a test signal as input.

There are differences (Fig. 10) between the evolution of the states of the plant and their representation by the ANN. This is not surprising, as a system may have multiple state-space descriptions. The ANN will evolve towards the state-space representation which is most easily obtainable. The whole learning process and the initial weights, together with the training signal used, modulate the shape of this representation.

Fig. 10. State evolution of the real system (continuous line) and the trained system (dotted line).
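For anyone wishing to reproduce the experiment, here is a sketch of the training-signal generation and of batch-mode updating (the plant follows Eq. (7) as reconstructed above, whose exact coefficients are uncertain in the source; model.gradient and model.apply are hypothetical stand-ins).

import numpy as np

rng = np.random.default_rng(1)

def random_step_signal(n=100, hold=(5, 15), lo=-1.0, hi=1.0):
    """100 samples of a piecewise-constant random step signal."""
    u = np.empty(n)
    k = 0
    while k < n:
        width = int(rng.integers(*hold))
        u[k:k + width] = rng.uniform(lo, hi)
        k += width
    return u

def plant(u):
    """Simulate the discrete-time nonlinear plant of Eq. (7)."""
    x, ys = 0.0, []
    for uk in u:
        ys.append((2 / np.pi) * np.arctan(x) + np.log(uk + 3) + 1)
        x = 3 * x * uk / (2 * (4 + uk ** 2))
    return np.array(ys)

def batch_epoch(model, u, y, lr=0.01):
    """Batch mode: accumulate the increments for every input-output
    pair and apply them once at the end of the signal."""
    grads = [model.gradient(uk, yk) for uk, yk in zip(u, y)]
    model.apply(-lr * np.mean(grads, axis=0))

u_train = random_step_signal()
y_train = plant(u_train)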
4. CONCLUSIONS

The absence of a methodology for the use of ANN in the field of nonlinear systems modeling is an important factor impeding their widespread use. The difficulty of finding the right number of neurons or layers, and the uncertainty of the results (local minima, random initial values, ...), are still a deterrent to their industrial application. In this paper we present an ANN topology which was conceived to help in the search for a solution to those uncertainties.

This paper presents a structured ANN based on state-space models of the system equations, together with the use of sine, cosine and linear activation functions. The ANN calculates the Fourier coefficients of the nonlinear functions that relate the states and inputs with the outputs and the new states. As the Fourier coefficients can also be calculated analytically or numerically, the results from the ANN can be compared with those expected from classical methods. The use of state-space equations for system descriptions minimizes the number of inputs to the neural network and allows a further study of the system properties. Here the model has been presented together with one example of its viability for the modeling of nonlinear discrete-time systems.

Acknowledgment. This research work has received funding from the CICYT and support from CERCA, Col·lectiu d'Estudis i Recerca en Control i Automàtica.

REFERENCES

Anderson, C.W. (1989). Learning to Control an Inverted Pendulum Using Neural Networks. IEEE Control Systems Magazine, April.
Codina, J., B. Morcego, J.M. Fuertes and A. Català (1992). A Novel Neural Network Structure for Control. IEEE Int. Conf. on Systems, Man and Cybernetics, Chicago.
Codina, J., J.C. Aguado and J.M. Fuertes (1994). Capabilities of a Structured Neural Network: Learning and Comparison with Classical Techniques. To appear in Proc. of the ESANN Euroconference.
Elman, J.L. (1988). Finding Structure in Time. Report, University of California, San Diego.
Jordan, M.I. (1986). Serial Order: A Parallel Distributed Processing Approach. Institute for Cognitive Science Report, University of California, San Diego.
Narendra, K.S. and K. Parthasarathy (1990). Identification and Control of Dynamical Systems Using Neural Networks. IEEE Trans. on Neural Networks, March.
Nguyen, D.H. and B. Widrow (1990). Neural Networks for Self-Learning Control Systems. IEEE Control Systems Magazine, April.
Pao, Y. (1989). Adaptive Pattern Recognition and Neural Networks. Addison-Wesley, Reading, MA.
Qin, S., H. Su and T.J. McAvoy (1992). Comparison of Four Neural Net Learning Methods for Dynamic Systems Identification. IEEE Trans. on Neural Networks, January.
Werbos, P.J. (1990). Backpropagation Through Time: What It Does and How to Do It. Proc. of the IEEE, Vol. 78, No. 10.
Wu, Q.H., B.W. Hogg and G.W. Irwin (1992). A Neural Network Regulator for Turbogenerators. IEEE Trans. on Neural Networks, January.
Yamada, T. and T. Yabuta (1993). Dynamic System Identification Using Neural Networks. IEEE Trans. on Systems, Man, and Cybernetics, January/February.