Model Reference Adaptive Control for Multi-Input Multi-Output Nonlinear Systems Using Neural Networks
Jiunshian Phuah, Jianming Lu, and Takashi Yahagi
Graduate School of Science and Technology, Chiba University, Chiba, Japan

Abstract
This paper presents a method of MRAC (model reference adaptive control) for multi-input multi-output (MIMO) nonlinear systems using NNs (neural networks). The control input is given by the sum of the output of a model reference adaptive controller and the output of the NN (neural network). The NN is used to compensate for the nonlinearity of the plant dynamics that is not taken into consideration in the usual MRAC. The role of the NN is to construct a linearized model by minimizing the output error caused by nonlinearities in the control system.

INTRODUCTION
MRAC is an important class of adaptive control schemes [1],[2]. In the direct MRAC scheme, the regulator is updated online so that the plant output follows the output of a reference model. In the MRAC of a linear plant, the reference model and the controller structure are chosen in such a way that a parameter set of the regulator exists to ensure perfect model following [3],[4]. However, for nonlinear plants with unknown structures, it may not be possible to ensure perfect model following [5]. This paper presents a structure of an MRAC system for MIMO nonlinear systems using NNs. The control input is given by the sum of the output of a model reference adaptive controller and the output of the NN. The role of the NN is to construct a linearized model so as to minimize the output error caused by nonlinearities in the control system. The role of the model reference adaptive controller is to perform model matching of the uncertain linearized system to a given linear reference model.
One of the distinctive features of the proposed structure is an efficient method for calculating the derivative of the system output with respect to the input, using one identified parameter of the linearized model and the internal variables of the NN, which makes the backpropagation algorithm very efficient to perform. Furthermore, in the proposed method, if the plant is linear, the neural network does not need to operate. Finally, computer simulations are carried out and the effectiveness of this control system is confirmed.

LINEAR MRAC
In this section, we briefly describe MIMO linear discrete-time MRAC, in which the controller is designed so that the plant output Y(k) converges to the reference model output Y_m(k). Let us consider the MIMO linear discrete-time system described by

A(z)Y(k) = diag(z^{-d_i}) B(z) U(k)   (1)

where A(z) = diag[A_1(z), ..., A_p(z)],

B(z) = [ z^{-(d_11 - d_1)} B_11(z)  ...  z^{-(d_1p - d_1)} B_1p(z) ]
       [ ...                                                  ... ]
       [ z^{-(d_p1 - d_p)} B_p1(z)  ...  z^{-(d_pp - d_p)} B_pp(z) ]

and diag(z^{-d_i}) = diag[z^{-d_1}, ..., z^{-d_p}]. A_i(z) and B_ij(z) (i = 1, ..., p; j = 1, ..., p) are scalar polynomials, and d_ij (i = 1, ..., p; j = 1, ..., p) represent the known time delays. Furthermore, U(k) ∈ R^p is the system input vector, Y(k) ∈ R^p is the system output vector, and d_i = min_{1≤j≤p} d_ij (i = 1, ..., p). The matrices A(z) and B(z) are given by

A(z) = I_p − Σ_{i=1}^{n} A_i z^{-i},   B(z) = Σ_{j=0}^{m} B_j z^{-j}

where the coefficient matrices A_i and B_j are assumed to be unknown, and det B(z) ≠ 0 for |z| ≤ 1. The upper bounds for the degrees of the polynomials in (1) are known. The control system attempts to make the plant output Y(k) match the reference model output asymptotically, i.e.

lim_{k→∞} |y_i(k) − y_mi(k)| ≤ ε   (2)

for some specified constant ε and i = 1, 2, ..., p. The output Y_m(k) of the reference model to the command input R(k) is given by

A_m(z) Y_m(k) = diag(z^{-d_i}) B_m(z) R(k)   (3)

where A_m(z) and B_m(z) are left coprime and can be given in advance. Let D(z) be an asymptotically stable matrix polynomial.
Then there exist unique matrix polynomials R(z), S(z) which satisfy

D(z) = A(z)S(z) + diag(z^{-d_i}) R(z)   (4)

where R(z), S(z) are defined by S(z) = diag[S_1(z), ..., S_p(z)], R(z) = diag[R_1(z), ..., R_p(z)] with deg R_i(z) = deg A_i(z) − 1 and deg S_i(z) = d_i − 1, i = 1, 2, ..., p, and R_i(z) and S_i(z) are scalar polynomials.
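For a single channel, the polynomial identity (4) reduces to a small linear system in the coefficients of S_i(z) and R_i(z). The sketch below is illustrative and not from the paper: the solver name and the example polynomials A(z) = 1 − 0.5 z^{-1}, d = 1, D(z) = 1 are assumptions chosen only to show the degree conditions deg S = d − 1, deg R = deg A − 1 at work.

```python
import numpy as np

def solve_diophantine(a, d, dpoly):
    """Solve D(z) = A(z)S(z) + z^{-d} R(z) for scalar polynomials in z^{-1}.
    a, dpoly : coefficient lists in ascending powers of z^{-1}, a[0] = 1.
    Uses deg S = d - 1 and deg R = deg A - 1 (the conditions below Eq. (4))."""
    n = len(a) - 1            # deg A
    ns, nr = d, n             # number of unknown coefficients of S and R
    L = n + d                 # powers z^0 ... z^{-(n+d-1)} appear in A*S and z^{-d}R
    Dvec = np.zeros(L)
    Dvec[:len(dpoly)] = dpoly
    M = np.zeros((L, ns + nr))
    for j in range(ns):       # A(z)*S(z): s_j contributes a[i] at power i + j
        for i in range(n + 1):
            M[i + j, j] += a[i]
    for j in range(nr):       # z^{-d} R(z): r_j contributes 1 at power d + j
        M[d + j, ns + j] = 1.0
    sol = np.linalg.lstsq(M, Dvec, rcond=None)[0]
    return sol[:ns], sol[ns:]

# Example: A(z) = 1 - 0.5 z^{-1}, d = 1, D(z) = 1
S, R = solve_diophantine([1.0, -0.5], 1, [1.0])
print(S, R)   # S(z) = 1, R(z) = 0.5
```

Forming A(z)S(z) + z^{-d}R(z) with the returned coefficients reproduces D(z), which is a convenient sanity check on any channel.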
Using (1) and (4), we obtain

D(z)(Y(k) − Y_m(k)) = diag(z^{-d_i}) B(z)S(z)U(k) + diag(z^{-d_i}) R(z)Y(k) − D(z)Y_m(k)   (5)

When the plant parameters are known, the control input U(k) is given by

U(k) = B^{-1}(z) S^{-1}(z) [D(z) Y_m(k+d_i) − R(z) Y(k)]   (6)

It is clear that lim_{k→∞} (y_i(k) − y_mi(k)) = 0 holds; therefore, the control purpose can be realized. When the coefficients of A(z) and B(z) in (1) are unknown, the problem of estimating the unknown plant parameters arises. The system equation in (1) can be written as

Y(k) = Σ_{i=1}^{n} A_i Y(k−i) + Σ_{j=0}^{m} B_j U(k−d_i−j) = α X^T(k)   (7)

where T denotes the transpose, and

α = [A_1, A_2, ..., A_n, B_0, B_1, ..., B_m]
X(k) = [Y^T(k−1), ..., Y^T(k−n), U^T(k−d_i), ..., U^T(k−d_i−m)]

The matrix α represents the unknown plant parameters to be estimated. This is accomplished by using an identification model described by the equation

Ŷ(k) = α̂(k) X^T(k)   (8)

where α̂(k) = [Â_1(k), ..., Â_n(k), B̂_0(k), ..., B̂_m(k)], Ŷ(k) is an estimate of Y(k) at time k, and α̂(k) is an adjustable parameter matrix. The parameter adjustment law, which ensures that the estimated parameters converge to their true values, is given by

α̂(k) = α̂(k−1) − σ [α̂(k−1)X^T(k) − Y(k)] X(k) Γ(k−1) / (1 + X(k) Γ(k−1) X^T(k))   (9)

Γ(k) = (1/λ) [Γ(k−1) − λ Γ(k−1) X^T(k) X(k) Γ(k−1) / (σ^{-1} + λ X(k) Γ(k−1) X^T(k))]   (10)

Γ(0) = δ I, δ > 0   (11)

where 0 < σ ≤ 1 and 0 < λ < 2 [2]-[4]. The control input U(k) in the adaptive case is given by

U(k) = B̂^{-1}(z) Ŝ^{-1}(z) [D(z) Y_m(k+d_i) − R̂(z) Y(k)]   (12)

where R̂(z), B̂(z) and Ŝ(z) are the estimates of R(z), B(z) and S(z), respectively.

NONLINEAR MRAC
Figure 1. Structure of the nonlinear adaptive control system.
Figure 2. System configuration with NN.
When the input-output characteristic of the controlled object is nonlinear, the system cannot be expressed in the form of Eq. (1). Then, let
the unknown system be expressed by a nonlinear discrete-time system as

Y(k) = H(Y^T(k−1), ..., Y^T(k−n), U^T(k−d_i), ..., U^T(k−d_i−m))   (13)

where H(·) is the unknown nonlinear function vector, Y(k) is the plant output, U(k) is the control signal, and n and m are the numbers of past outputs and inputs of the plant, depending on the plant order. When the input in (6) is used to control the nonlinear discrete-time system in (13), an output error will arise. To keep the plant output Y(k) converging to the reference model output Y_m(k), we synthesize the control input U(k) by the following equation:

U(k) = V(k) + V̄(k)   (14)

where V(k) (= [v_1(k), ..., v_p(k)]^T) is the multi-output of the adaptive controller and V̄(k) (= [v̄_1(k), ..., v̄_p(k)]^T) is the multi-output of the NN. V(k) and V̄(k) are given as follows:

V(k) = B̂^{-1}(z) Ŝ^{-1}(z) [D(z) Y_m(k+d_i) − R̂(z) Y(k)]   (15)

V̄(k) = Ĥ(V^T(k−d_i), Y_m^T(k+d_i), Y^T(k−1), ..., Y^T(k−n), U^T(k−d_i), ..., U^T(k−d_i−m))   (16)
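The parameter adjustment law (9)-(11) is a recursive least-squares update. As a hedged illustration (not the paper's code), the sketch below identifies a first-order single-input single-output plant of the form (7) with σ = λ = 1; the true coefficients 0.3 and 1.0 are made-up example values.

```python
import numpy as np

rng = np.random.default_rng(0)

# True first-order plant y(k) = a1*y(k-1) + b0*u(k-1), cf. Eq. (7) with n = 1, m = 0
a1_true, b0_true = 0.3, 1.0

# Recursive least squares, Eqs. (9)-(11), with sigma = lambda = 1
theta = np.zeros(2)            # estimates [a1_hat, b0_hat]
Gamma = 1e4 * np.eye(2)        # Gamma(0) = delta*I with large delta
y_prev, u_prev = 0.0, 0.0
for k in range(200):
    u = rng.standard_normal()             # persistently exciting input
    y = a1_true * y_prev + b0_true * u_prev
    x = np.array([y_prev, u_prev])        # regressor X(k)
    err = theta @ x - y                   # prediction error, as in Eq. (9)
    denom = 1.0 + x @ Gamma @ x
    theta = theta - err * (Gamma @ x) / denom          # Eq. (9)
    Gamma = Gamma - np.outer(Gamma @ x, x @ Gamma) / denom  # Eq. (10)
    y_prev, u_prev = y, u

print(np.round(theta, 3))   # ≈ [0.3, 1.0]
```

With noise-free data and an exciting input, the estimates converge to the true coefficients; the adaptive control law (12) then uses such estimates in place of the unknown plant parameters.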
The block diagram of the MIMO nonlinear MRAC system with NN is shown in Figure 1. Using the above approach, the NN is trained. The training is done by adjusting the weights of the NN until the output error limit lim_{k→∞} |e_i(k)| = lim_{k→∞} |y_i(k) − y_mi(k)| ≤ ε is met.

COMPOSITION OF THE NN
Figure 2 shows the system configuration of the input-output relation of the system with NN. The NN consists of three layers: an input layer, an output layer, and an intermediate or hidden layer. Let x_i(k) be the input to the ith node in the input layer, p_j(k) the input to the jth node in the hidden layer, and q_l(k) the input to the lth node in the output layer. Furthermore, let w_ji be the weight between the input layer and the hidden layer, and w_lj the weight between the hidden layer and the output layer. In Figure 2, the control input is given by the sum of the output of a model reference adaptive controller and the output of the NN. The NN is used to compensate for the nonlinearity of the plant dynamics that is not taken into consideration in the usual MRAC. The role of the NN is to construct a linearized model by minimizing the output error caused by nonlinearities in the control system. The input x_i(k) to the NN is given as

x_i(k) ∈ {V^T(k−d_i), Y_m^T(k+d_i), Y^T(k−1), ..., Y^T(k−n), U^T(k−d_i), ..., U^T(k−d_i−m)}   (17)

Therefore, the nonlinear function of a MIMO nonlinear system can be approximated by the NN, and the number of components of the input layer is (n + m + 2) × p.

LEARNING OF THE NN
From Figure 2, we obtain

p_j(k) = Σ_i w_ji(k) x_i(k)   (18)
q_l(k) = Σ_j w_lj(k) f(p_j(k))   (19)
v̄_l(k) = f(q_l(k))   (20)

where f(·) is the sigmoid function and l = 1, 2, ..., p. The sigmoid function f(·) is chosen as

f(x) = 2a / (1 + exp(−µx)) − a   (21)

where µ > 0, a is a specified constant such that a > 0, and f(x) satisfies −a < f(x) < a.
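The forward pass (18)-(20) and the sigmoid (21) can be written down directly. The sketch below is a minimal illustration (the default values a = 5, µ = 0.2 are the ones used in Example 1, but the weight shapes are arbitrary assumptions); it also checks numerically that the closed-form derivative of f matches a finite difference.

```python
import numpy as np

def f(x, a=5.0, mu=0.2):
    """Bipolar sigmoid of Eq. (21): f(x) = 2a/(1+exp(-mu*x)) - a, with -a < f(x) < a."""
    return 2.0 * a / (1.0 + np.exp(-mu * x)) - a

def f_prime(x, a=5.0, mu=0.2):
    """Derivative identity of Eq. (22): f'(x) = (mu/2a)(a + f(x))(a - f(x))."""
    fx = f(x, a, mu)
    return (mu / (2.0 * a)) * (a + fx) * (a - fx)

def nn_forward(x, W_in, W_out, a=5.0, mu=0.2):
    """Forward pass of Eqs. (18)-(20): input layer -> hidden layer -> output layer."""
    p = W_in @ x               # p_j(k) = sum_i w_ji(k) x_i(k)       (18)
    q = W_out @ f(p, a, mu)    # q_l(k) = sum_j w_lj(k) f(p_j(k))    (19)
    return f(q, a, mu)         # v_bar_l(k) = f(q_l(k))              (20)

# Sanity check: Eq. (22) agrees with a central-difference derivative of Eq. (21)
x0, h = 0.7, 1e-6
num = (f(x0 + h) - f(x0 - h)) / (2 * h)
print(abs(num - f_prime(x0)) < 1e-8)   # True
```

Because f is bounded in (−a, a), the NN output v̄_l(k) is automatically bounded, which keeps the compensation term added to the adaptive controller output from growing without limit.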
The derivative of the sigmoid function f(·) is as follows:

f′(x) = (µ/2a) (a + f(x)) (a − f(x))   (22)

Equation (18) shows the relation between the intermediate layer and the input layer, and (19) shows the relation between the output layer and the intermediate layer. The output of the NN is obtained from (20). The error function (evaluation function) is defined as

E_l(k) = (1/2) [y_ml(k) − y_l(k)]^2   (23)

where l = 1, 2, ..., p. The objective is to minimize the error function E_l(k) by taking the error gradient with respect to the parameters, i.e. the weight vector w(k) that is to be adapted. The weights are then updated using

Δw_lj(k) = −η ∂E_l(k)/∂w_lj(k) + α(k) Δw_lj(k−1)   (24)
Δw_ji(k) = −η ∂E_l(k)/∂w_ji(k) + α(k) Δw_ji(k−1)   (25)

where η and α(k) are the learning rate and momentum, respectively, α(k) = α(k−1) + Δα, and l = 1, 2, ..., p. The upper limit of α(k) is set to A. To obtain ∂E_l(k)/∂w_lj(k) and ∂E_l(k)/∂w_ji(k) in (24), (25), we can write

∂E_l(k)/∂w_lj(k) = [∂E_l(k)/∂y_l(k)] [∂y_l(k)/∂u_l(k−d_l)] [∂u_l(k−d_l)/∂v̄_l(k−d_l)] [∂v̄_l(k−d_l)/∂q_l(k)] [∂q_l(k)/∂w_lj(k)]   (26)

∂E_l(k)/∂w_ji(k) = [∂E_l(k)/∂y_l(k)] [∂y_l(k)/∂u_l(k−d_l)] [∂u_l(k−d_l)/∂v̄_l(k−d_l)] [∂v̄_l(k−d_l)/∂q_l(k)] [∂q_l(k)/∂f(p_j(k))] [∂f(p_j(k))/∂p_j(k)] [∂p_j(k)/∂w_ji(k)]   (27)

where

∂E_l(k)/∂y_l(k) = −(y_ml(k) − y_l(k)),   ∂u_l(k−d_l)/∂v̄_l(k−d_l) = 1,
∂q_l(k)/∂w_lj(k) = f(p_j(k)),   ∂q_l(k)/∂f(p_j(k)) = w_lj(k),   ∂p_j(k)/∂w_ji(k) = x_i(k),
∂v̄_l(k−d_l)/∂q_l(k) = (µ/2a)(a + f(q_l(k)))(a − f(q_l(k))),
∂f(p_j(k))/∂p_j(k) = (µ/2a)(a + f(p_j(k)))(a − f(p_j(k)))

Since the gradients ∂E_l(k)/∂w_lj(k) and ∂E_l(k)/∂w_ji(k) can be calculated, it is possible to train the NN once ∂y_l(k)/∂u_l(k−d_l) is available. Again, ∂y_l(k)/∂u_l(k−d_l) is given by

∂y_l(k)/∂u_l(k−d_l) = [∂y_l(k)/∂v_l(k−d_l)] [∂v_l(k−d_l)/∂u_l(k−d_l)]   (28)

From Figure 2, we obtain

u_l(k−d_l) = v_l(k−d_l) + v̄_l(k−d_l)   (29)
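One training step of (23)-(27) with the momentum updates (24)-(25) can be sketched as follows. This is my own compact illustration, not the paper's code: the momentum coefficient is held constant rather than incremented toward A, and dy_du stands for the plant sensitivity ∂y_l/∂u_l(k−d_l), which the paper later approximates from the identified linear model.

```python
import numpy as np

def f(x, a=5.0, mu=0.2):
    # sigmoid of Eq. (21)
    return 2.0 * a / (1.0 + np.exp(-mu * x)) - a

def f_prime(x, a=5.0, mu=0.2):
    # derivative of Eq. (22)
    fx = f(x, a, mu)
    return (mu / (2.0 * a)) * (a + fx) * (a - fx)

def backprop_step(x, W_in, W_out, dW_in, dW_out, e, dy_du, eta=0.25, alpha=0.2):
    """One weight update following Eqs. (24)-(27).
    x     : NN input vector x_i(k)
    e     : output error vector y_m - y
    dy_du : approximation of dy_l/du_l(k-d_l) (a scalar here, for simplicity)
    dW_*  : previous weight increments (the momentum terms)."""
    p = W_in @ x                       # Eq. (18)
    fp = f(p)
    q = W_out @ fp                     # Eq. (19)
    # dE_l/dq_l = -(y_ml - y_l) * dy_du * f'(q_l), from Eq. (26)
    delta_out = -e * dy_du * f_prime(q)
    grad_out = np.outer(delta_out, fp)             # dE/dw_lj
    # hidden-layer term, from Eq. (27)
    delta_hid = (W_out.T @ delta_out) * f_prime(p)
    grad_in = np.outer(delta_hid, x)               # dE/dw_ji
    # momentum updates, Eqs. (24)-(25)
    dW_in = -eta * grad_in + alpha * dW_in
    dW_out = -eta * grad_out + alpha * dW_out
    return W_in + dW_in, W_out + dW_out, dW_in, dW_out
```

Iterating this step drives |y_m − y| down as long as the sign of dy_du is correct, which is why a usable approximation of the plant sensitivity matters.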
Then

∂u_l(k−d_l)/∂v_l(k−d_l) = 1 + ∂v̄_l(k−d_l)/∂v_l(k−d_l)   (30)

and ∂v̄_l(k−d_l)/∂v_l(k−d_l) is given by

∂v̄_l(k−d_l)/∂v_l(k−d_l) = ∂f(q_l(k))/∂x_l(k) = f′(q_l(k)) Σ_j w_lj(k) f′(p_j(k)) ∂p_j(k)/∂x_l(k)   (31)

∂p_j(k)/∂x_l(k) = w_jl(k)   (32)

where x_l(k) denotes the NN input node carrying v_l(k−d_l). Furthermore, the linear model of the plant is constructed using the estimated parameters. The output of this linear model is ŷ_l(k). When we assume that the nonlinearity of the plant is relatively small, it is possible to approximate y_l(k) ≈ ŷ_l(k), and the approximate value of ∂y_l(k)/∂u_l(k−d_l) can be calculated as below. From (8), the following equation holds:

ŷ_l(k) = â_l1(k) y_l(k−1) + ... + â_ln(k) y_l(k−n) + b̂_l10(k) v_1(k−d_1) + ... + b̂_l1m(k) v_1(k−d_1−m) + ... + b̂_lp0(k) v_p(k−d_p) + ... + b̂_lpm(k) v_p(k−d_p−m)   (33)

From (33), we obtain

∂y_l(k)/∂v_l(k−d_l) ≈ ∂ŷ_l(k)/∂v_l(k−d_l) = b̂_ll0(k)   (34)

Using (30), (31), (34), it is possible to write (28) as

∂y_l(k)/∂u_l(k−d_l) = b̂_ll0(k) / (1 + H_1(k)),   H_1(k) = f′(q_l(k)) Σ_j w_lj(k) f′(p_j(k)) w_jl(k)   (35)

COMPUTER SIMULATION
As examples of nonlinear systems, two cases are taken up. In all cases λ = 1, σ = 0.98, and δ = 10^4 are fixed.

Example 1: Let us consider the MIMO nonlinear discrete-time system described by

y_1(k) = 0.3 y_1(k−1) + 0.3 y_2(k−1) + u_1(k−d_1) + 0.8 u_1^2(k−d_1) + 0.45 u_2(k−d_1) + u_1(k−d_1) u_2(k−d_1) + 0.5 u_2^2(k−d_1)
y_2(k) = 0.6 y_2(k−1) + 0.62 u_1(k−d_2) u_2(k−d_2) + 0.7 u_2(k−d_2) + u_2^2(k−d_2)

In this example, we assume A_m(z) = B_m(z) = D(z) = I_{2×2} and diag(z^{-d_i}) = diag[z^{-1}, z^{-1}]; then the output Y_m(k) of the reference model to the command input R(k) is given by Y_m(k) = diag[z^{-1}, z^{-1}] R(k).

Figure 3. Y_m(k) and Y(k) before learning by NN.
Figure 4. Y_m(k) and Y(k) after learning by NN.

Figure 3 shows the desired output Y_m(k) and the plant output Y(k) before learning by the NN. The results of Figure 3 show that the error between Y(k) and Y_m(k) is large. Figure 4 shows Y_m(k) and Y(k) after learning by the NN, where the number of nodes in the input layer was 8, in the hidden layer 8, and in the output layer 2, and Δα = 0.01, α(0) = 0.2, A = 0.8, η = 0.25, a = 5, and µ = 0.2 were fixed. The results of Figure
4 show that Y(k) can converge to Y_m(k) after learning by the NN.

Example 2: Let us consider the MIMO nonlinear discrete-time system described by

y_1(k) = y_1(k−1) / (1 + y_2^2(k−1)) + u_1(k−d_1)
y_2(k) = y_1(k−1) y_2(k−1) / (1 + y_2^2(k−1)) + u_2(k−d_2)

The output Y_m(k) of the reference model to the command input R(k) is given by Y_m(k) = diag[z^{-1}, z^{-1}] R(k). Figure 5 shows Y_m(k) and Y(k) before learning by the NN. It can be seen from Figure 5 that Y(k) tends to diverge. Figure 6 shows Y_m(k) and Y(k) after learning by the NN, where the number of nodes in the input layer was 6, in the hidden layer 6, and in the output layer 2, and Δα = 0.01, α(0) = 0.23, A = 0.8, η = 0.8, a = 5, and µ = 0.1 were fixed. It can be seen from Figure 6 again that Y(k) can
converge to Y_m(k) after learning by the NN.

Figure 5. Y_m(k) and Y(k) before learning by NN.
Figure 6. Y_m(k) and Y(k) after learning by NN.

CONCLUSION
We have proposed a method of MRAC for MIMO nonlinear systems using NNs. The control input is given by the sum of the output of a model reference adaptive controller and the output of the NN. The NN is used to compensate for the nonlinearity of the plant dynamics that is not taken into consideration in the usual MRAC. The simulation results have shown that the plant output Y(k) can converge to the desired output Y_m(k) after learning by the NN for nonlinear discrete-time systems.

REFERENCES
[1] K. J. Åström and B. Wittenmark, Adaptive Control, Addison-Wesley, 1989.
[2] J. Lu and T. Yahagi, "Discrete-time MRAC for nonminimum phase systems with disturbances using approximate inverse systems," IEE Proc. D, vol. 144, no. 5, 1997.
[3] J. Lu and T. Yahagi, "New design method for MRAC for nonminimum phase discrete-time systems with disturbances," IEE Proc. D, vol. 140, no. 1, pp. 34–41, 1993.
[4] J. Lu, M. Shafiq, and T. Yahagi, "A method for adaptive control of nonminimum phase continuous-time systems based on pole-zero placement," IEICE Trans. Fundamentals, vol. E80-A, no. 6, 1997.
[5] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Trans. Neural Networks, vol. 1, no. 1, pp. 4–27, 1990.
Multivariable Receding-Horizon Predictive Control for Adaptive Applications Tae-Woong Yoon and C M Chow y Department of Electrical Engineering, Korea University 1, -a, Anam-dong, Sungbu-u, Seoul 1-1, Korea
More informationData assimilation with and without a model
Data assimilation with and without a model Tim Sauer George Mason University Parameter estimation and UQ U. Pittsburgh Mar. 5, 2017 Partially supported by NSF Most of this work is due to: Tyrus Berry,
More informationLecture 5: Recurrent Neural Networks
1/25 Lecture 5: Recurrent Neural Networks Nima Mohajerin University of Waterloo WAVE Lab nima.mohajerin@uwaterloo.ca July 4, 2017 2/25 Overview 1 Recap 2 RNN Architectures for Learning Long Term Dependencies
More informationGradient Descent Training Rule: The Details
Gradient Descent Training Rule: The Details 1 For Perceptrons The whole idea behind gradient descent is to gradually, but consistently, decrease the output error by adjusting the weights. The trick is
More informationTemperature Control of a Mold Model using Multiple-input Multiple-output Two Degree-of-freedom Generalized Predictive Control
Temperature Control of a Mold Model using Multiple-input Multiple-output Two Degree-of-freedom Generalized Predictive Control Naoki Hosoya, Akira Yanou, Mamoru Minami and Takayuki Matsuno Graduate School
More informationI = i 0,
Special Types of Matrices Certain matrices, such as the identity matrix 0 0 0 0 0 0 I = 0 0 0, 0 0 0 have a special shape, which endows the matrix with helpful properties The identity matrix is an example
More informationTemperature control using neuro-fuzzy controllers with compensatory operations and wavelet neural networks
Journal of Intelligent & Fuzzy Systems 17 (2006) 145 157 145 IOS Press Temperature control using neuro-fuzzy controllers with compensatory operations and wavelet neural networks Cheng-Jian Lin a,, Chi-Yung
More informationArtificial Neural Networks
0 Artificial Neural Networks Based on Machine Learning, T Mitchell, McGRAW Hill, 1997, ch 4 Acknowledgement: The present slides are an adaptation of slides drawn by T Mitchell PLAN 1 Introduction Connectionist
More informationLifted approach to ILC/Repetitive Control
Lifted approach to ILC/Repetitive Control Okko H. Bosgra Maarten Steinbuch TUD Delft Centre for Systems and Control TU/e Control System Technology Dutch Institute of Systems and Control DISC winter semester
More informationAPPLICATION OF ADAPTIVE CONTROLLER TO WATER HYDRAULIC SERVO CYLINDER
APPLICAION OF ADAPIVE CONROLLER O WAER HYDRAULIC SERVO CYLINDER Hidekazu AKAHASHI*, Kazuhisa IO** and Shigeru IKEO** * Division of Science and echnology, Graduate school of SOPHIA University 7- Kioicho,
More informationConstruction of latin squares of prime order
Construction of latin squares of prime order Theorem. If p is prime, then there exist p 1 MOLS of order p. Construction: The elements in the latin square will be the elements of Z p, the integers modulo
More informationAdaptive Predictive Observer Design for Class of Uncertain Nonlinear Systems with Bounded Disturbance
International Journal of Control Science and Engineering 2018, 8(2): 31-35 DOI: 10.5923/j.control.20180802.01 Adaptive Predictive Observer Design for Class of Saeed Kashefi *, Majid Hajatipor Faculty of
More informationECE521 Lectures 9 Fully Connected Neural Networks
ECE521 Lectures 9 Fully Connected Neural Networks Outline Multi-class classification Learning multi-layer neural networks 2 Measuring distance in probability space We learnt that the squared L2 distance
More informationLearning strategies for neuronal nets - the backpropagation algorithm
Learning strategies for neuronal nets - the backpropagation algorithm In contrast to the NNs with thresholds we handled until now NNs are the NNs with non-linear activation functions f(x). The most common
More informationDESIGNING A KALMAN FILTER WHEN NO NOISE COVARIANCE INFORMATION IS AVAILABLE. Robert Bos,1 Xavier Bombois Paul M. J. Van den Hof
DESIGNING A KALMAN FILTER WHEN NO NOISE COVARIANCE INFORMATION IS AVAILABLE Robert Bos,1 Xavier Bombois Paul M. J. Van den Hof Delft Center for Systems and Control, Delft University of Technology, Mekelweg
More informationNeural Networks and Deep Learning
Neural Networks and Deep Learning Professor Ameet Talwalkar November 12, 2015 Professor Ameet Talwalkar Neural Networks and Deep Learning November 12, 2015 1 / 16 Outline 1 Review of last lecture AdaBoost
More informationStatistical Machine Learning (BE4M33SSU) Lecture 5: Artificial Neural Networks
Statistical Machine Learning (BE4M33SSU) Lecture 5: Artificial Neural Networks Jan Drchal Czech Technical University in Prague Faculty of Electrical Engineering Department of Computer Science Topics covered
More informationPole-Placement Design A Polynomial Approach
TU Berlin Discrete-Time Control Systems 1 Pole-Placement Design A Polynomial Approach Overview A Simple Design Problem The Diophantine Equation More Realistic Assumptions TU Berlin Discrete-Time Control
More informationSUCCESSIVE POLE SHIFTING USING SAMPLED-DATA LQ REGULATORS. Sigeru Omatu
SUCCESSIVE POLE SHIFING USING SAMPLED-DAA LQ REGULAORS oru Fujinaka Sigeru Omatu Graduate School of Engineering, Osaka Prefecture University, 1-1 Gakuen-cho, Sakai, 599-8531 Japan Abstract: Design of sampled-data
More informationOptimal Polynomial Control for Discrete-Time Systems
1 Optimal Polynomial Control for Discrete-Time Systems Prof Guy Beale Electrical and Computer Engineering Department George Mason University Fairfax, Virginia Correspondence concerning this paper should
More informationMore on Neural Networks
More on Neural Networks Yujia Yan Fall 2018 Outline Linear Regression y = Wx + b (1) Linear Regression y = Wx + b (1) Polynomial Regression y = Wφ(x) + b (2) where φ(x) gives the polynomial basis, e.g.,
More informationComputational Graphs, and Backpropagation. Michael Collins, Columbia University
Computational Graphs, and Backpropagation Michael Collins, Columbia University A Key Problem: Calculating Derivatives where and p(y x; θ, v) = exp (v(y) φ(x; θ) + γ y ) y Y exp (v(y ) φ(x; θ) + γ y ) φ(x;
More informationNeural networks. Chapter 20. Chapter 20 1
Neural networks Chapter 20 Chapter 20 1 Outline Brains Neural networks Perceptrons Multilayer networks Applications of neural networks Chapter 20 2 Brains 10 11 neurons of > 20 types, 10 14 synapses, 1ms
More informationMathematical Methods wk 2: Linear Operators
John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm
More informationADAPTIVE NEURAL NETWORK MODEL PREDICTIVE CONTROL. Ramdane Hedjar. Received January 2012; revised May 2012
International Journal of Innovative Computing, Information and Control ICIC International c 13 ISSN 1349-4198 Volume 9, Number 3, March 13 pp. 145 157 ADAPTIVE NEURAL NETWORK MODEL PREDICTIVE CONTROL Ramdane
More informationThe Kernel Trick, Gram Matrices, and Feature Extraction. CS6787 Lecture 4 Fall 2017
The Kernel Trick, Gram Matrices, and Feature Extraction CS6787 Lecture 4 Fall 2017 Momentum for Principle Component Analysis CS6787 Lecture 3.1 Fall 2017 Principle Component Analysis Setting: find the
More informationAdaptive Fuzzy Modelling and Control for Discrete-Time Nonlinear Uncertain Systems
American Control Conference June 8-,. Portland, OR, USA WeB7. Adaptive Fuzzy Modelling and Control for Discrete-Time nlinear Uncertain Systems Ruiyun Qi and Mietek A. Brdys Abstract This paper presents
More informationNONLINEAR system control is an important tool that
136 IEEE TRANSACTIONS ON FUZZY SYSTEMS, VOL. 16, NO. 5, OCTOBER 008 A Functional-Link-Based Neurofuzzy Network for Nonlinear System Control Cheng-Hung Chen, Student Member, IEEE, Cheng-Jian Lin, Member,
More informationRelating Real-Time Backpropagation and. Backpropagation-Through-Time: An Application of Flow Graph. Interreciprocity.
Neural Computation, 1994 Relating Real-Time Backpropagation and Backpropagation-Through-Time: An Application of Flow Graph Interreciprocity. Francoise Beaufays and Eric A. Wan Abstract We show that signal
More informationChapter 4 Neural Networks in System Identification
Chapter 4 Neural Networks in System Identification Gábor HORVÁTH Department of Measurement and Information Systems Budapest University of Technology and Economics Magyar tudósok körútja 2, 52 Budapest,
More informationMultilayer Neural Networks. (sometimes called Multilayer Perceptrons or MLPs)
Multilayer Neural Networks (sometimes called Multilayer Perceptrons or MLPs) Linear separability Hyperplane In 2D: w x + w 2 x 2 + w 0 = 0 Feature x 2 = w w 2 x w 0 w 2 Feature 2 A perceptron can separate
More information