Hopfield Neural Networks for Parametric Identification of Dynamical Systems
Neural Processing Letters (2005) 21. Springer 2005.

Hopfield Neural Networks for Parametric Identification of Dynamical Systems

MIGUEL ATENCIA 1, GONZALO JOYA 2 and FRANCISCO SANDOVAL 2
1 Departamento de Matemática Aplicada, E.T.S.I. Informática
2 Departamento de Tecnología Electrónica, E.T.S.I. Telecomunicación
Universidad de Málaga, Campus de Teatinos, Málaga, Spain. matencia@ctima.uma.es

Abstract. In this work, a novel method based upon Hopfield neural networks is proposed for parameter estimation in the context of system identification. The equation of the neural estimator stems from the applicability of Hopfield networks to optimization problems, but the weights and biases of the resulting network are time-varying, since the target function also varies with time. Hence the stability of the method cannot be taken for granted. In order to compare the novel technique with the classical gradient method, simulations have been carried out for a linearly parameterized system; the results show that the Hopfield network is more efficient than the gradient estimator, attaining lower error and fewer oscillations. Thus the neural method is validated as an on-line estimator of the time-varying parameters appearing in the model of a nonlinear physical system.

Key words. system identification, Hopfield neural networks, parameter estimation, adaptive control, optimization

1. Introduction

System identification can be defined as the characterization of a dynamical system by observing its measurable behaviour. Identification has been studied for decades [12, 20] from a variety of viewpoints and research communities: regression estimation in statistics, filtering in signal processing, and adaptive control in control engineering. When a priori information about the rules that govern a system either does not exist or is too complex, e.g. for electrocardiographic signals, identification techniques are used to build a model by observing only input-output data.
Then, the system is called a black box, since its internal behaviour is unknown. On the contrary, in many physical systems, our knowledge of mechanical, chemical or electrical laws permits us to formulate a model, which is the fundamental tool for studying the system, either analytically or through simulations. However, no matter how deep our physical insight, the parameters of any model present inaccuracies. For instance, measurement devices involve some error, transmission systems produce noise, a robot can pick up a stone of unknown mass, and the friction coefficient of a material can only be approximately known. Due to these perturbations,
although the model is still valid, the numerical value of its parameters must be confronted with observed data; the system is then called a gray box, since identification must be applied to validate or adapt the a priori formulated model, which is referred to as a parametric model [17]. This paper focuses on on-line identification for gray-box models, i.e. continuously obtaining an estimation of the system parameters. This subject has received increased attention after the successful development of adaptive control methods [13], which consist of an identification module and a controller that would achieve optimal control, should the actual system match the identified one. The mathematical formulation of a dynamical system is a, possibly vectorial, ordinary differential equation (ODE) ẋ = f(x, u, π), where x(t) is the state vector, u(t) is the input vector, which can be regarded as a control signal, and the parameter vector π(t) may be uncertain and/or time-varying. Notice that, throughout the paper, we adhere to the notation ẋ for the time derivative. From this formulation, gray-box identification is achieved by parameter estimation, i.e. by calculating at each time instant a value π̂ that approaches the actual value π as closely as possible, thus minimizing the estimation error π̃ = π̂ − π. Therefore, the continuous estimation evolves together with the observation of the dynamical system, so the estimation is a dynamical system itself, given by an ODE dπ̂/dt = g(π̂, x, u).

In the above paragraph, we have formulated identification as minimization of the estimation error π̃. In doing so, we emphasize that the appropriate framework for identification is optimization. Indeed, most proposed techniques for identification are somehow based on optimization methods.
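As a concrete illustration of this optimization view, consider a hypothetical scalar LIP system (all values below are illustrative, not taken from the paper): the squared prediction error can be evaluated as a function of a candidate estimate, and it vanishes exactly at the actual parameters.

```python
import numpy as np

# Hypothetical LIP system x_dot = A(x, u) pi with regressor A = (sin x, u);
# the parameter values and the operating point are purely illustrative.
def regressor(x, u):
    return np.array([np.sin(x), u])

pi_actual = np.array([2.0, 0.5])     # the "unknown" actual parameters

x, u = 0.7, 1.3                      # one measured state and input
A = regressor(x, u)
y = A @ pi_actual                    # measured derivative x_dot

def sq_error(pi_hat):
    # Squared prediction error as a function of the estimate
    e = A @ pi_hat - y
    return float(e * e)

print(sq_error(np.zeros(2)))         # a poor estimate gives a large error
print(sq_error(pi_actual))           # the actual parameters give zero error
```

An estimator is then any rule that drives the candidate towards the minimizer of this error, using only the measurable quantities A and y.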
On one hand, the simplest method for on-line identification is the gradient method, which is supported by the same rationale as gradient descent optimization: the estimation should evolve in the direction that best minimizes the error, which is the negative gradient of the error function. However, the actual parameter value π is unknown, and so is the estimation error π̃. Thus, the norm of the output prediction error e = f(x, u, π̂) − ẋ, or equivalently its square ‖e‖², is used instead as the target function, and the gradient method leads to the estimator dynamics dπ̂/dt = −k ∇‖e(π̂)‖². The tuning of the estimation gain k is a key aspect of the estimator design and must be repeated for each particular system. If k is small, convergence to the actual parameter is slow. If k is too large, oscillations may appear that cause not only larger error but also high-frequency noise, which may excite unmodelled dynamics if the estimation is part of a closed-loop controller. On the other hand, several methods have been proposed for batch estimation, i.e. for identification that is performed only once, after observation of the complete evolution of the system. Most classical batch estimators are based upon minimization of the least-squares error, while heuristic techniques have recently been proposed based upon genetic algorithms [15], which are known to be global optimizers. It is remarkable that most identification techniques have been applied to the simpler linear-in-the-parameters (LIP) systems, of the form f(x, u, π) = A(x, u) π, while identification
of nonlinearly parameterized systems is still almost unexplored [14]. In particular, most robotic systems can be cast into the LIP form [18]. In this contribution, we apply the optimization methodology of Tank and Hopfield [19] to on-line identification, by designing a Hopfield neural network whose Lyapunov function is made coincident with the prediction error, so that the network evolution approaches a minimum of the error. However, the prediction error is time-varying, hence the above procedure results in time-varying weights and biases. Therefore, in contrast to standard Hopfield networks, stability cannot be taken for granted, since the neural estimator is a non-autonomous dynamical system [21], thus requiring a specific stability analysis that will be published elsewhere. The Hopfield estimator has already been applied to an epidemiological problem [2], which happens to be a difficult task. In contrast, the method seems to be especially well suited to robotic systems. For this class of models, when compared to the gradient method, the proposed technique is shown to alleviate the problematic tuning of the gain, achieving fast convergence of the estimations to the actual values of the parameters, as well as fewer oscillations. In contrast to batch estimators, the ability of the Hopfield estimator to track time-varying parameters is shown. The only prerequisite of the proposed method is the knowledge of a bounded region where the actual values of the parameters remain. The organization of this paper is as follows: The well-known gradient method is briefly reviewed in Section 2, together with its usual application to linear-in-the-parameters systems. In Section 3, after describing the application of Hopfield neural networks to optimization, parameter estimation with Hopfield networks is presented. The efficiency of the method is assessed by simulations in Section 4.
Finally, conclusions are summarized in Section 5.

2. Gradient Estimation

In gray-box identification, a system model is known, but it includes some unknown, uncertain and/or time-varying parameters. In the simplest and most usual case, the model is in the LIP form ẋ = A(x, u) π. A remark on notation is in order: the formulation is considerably simplified if we consider that the parameter vector π = θ_n + θ comprises an expected or nominal known part θ_n, as well as the deviation θ from the nominal value. The estimation method is designed to provide an estimation θ̂ of the deviation. Thus, we rewrite the system model as:

y = ẋ = A(x(t), u(t)) (θ_n + θ(t))    (1)

where y is the output, θ is the unknown, possibly time-dependent, deviation from the nominal values and A is a matrix that depends on the input u and the state x. Both y and A are assumed to be physically measurable. Identification is accomplished by producing an estimation θ̂(t) that is intended to continuously minimize the estimation error θ̃ = θ̂ − θ. An additional important objective
is to reduce the output prediction error, which, in the linear-in-the-parameters form, reduces to e = A(θ_n + θ̂) − y = A θ̃. Equivalently, the square of the norm of the prediction error can be chosen as the target function:

‖e‖² = eᵀ e = (A θ̃)ᵀ (A θ̃) = θ̃ᵀ Aᵀ A θ̃    (2)

A common technique for on-line estimation is the gradient method, which, due to its simplicity, presents some advantages over least-mean-squares algorithms in the estimation of time-varying parameters. In the gradient method, the estimation is continuously modified in the direction that best minimizes the prediction error, i.e. the direction of the negative gradient of ‖e‖² as a function of the estimation vector θ̂, thus leading to the dynamical equation of the estimator:

dθ̂/dt = −k ∇(eᵀ e) = −2 k Aᵀ e = −2 k Aᵀ [A(θ_n + θ̂) − y] = (−2 k Aᵀ A) θ̂ − 2 k Aᵀ (A θ_n − y)    (3)

where the fact that ∂e/∂θ̂ = ∂e/∂θ̃ = A has been used, and k is a design parameter, which must be carefully chosen as a trade-off between small values (slow convergence) and large values (oscillations). The last equality emphasizes the fact that the gradient method leads to a linear differential equation when applied to an LIP system. The asymptotic convergence to zero of the prediction error can be proved if A and y are constant, but the result is also valid as long as A and y change slowly [17], which we assume in the sequel. However, the estimation error may be large even though the prediction error vanishes. This is due to the possible existence of several estimation values that result in zero prediction error, apart from the obvious actual value θ̂ = θ, whereas if the input signal is rich enough then A and y are so complex that the only value of θ̂ that leads to e = 0 is the actual parameter. This richness of the input signal, called persistent excitation in the control literature [17], occurs when the input signal contains a sufficient number of harmonics, and it will also be assumed.

3. Estimation with Hopfield and Tank Networks

The neural network paradigm comprises a variety of computational models, among which Hopfield networks [8, 9] are feedback systems for which a Lyapunov function has been found. This feature guarantees the stability of the network, which can then be useful for the solution of two important problems: associative memory and optimization. In the Abe formulation [1], the evolution of a continuous Hopfield network is governed by the following system of differential equations:

µ̇ = net(s);  s = tanh(µ/β)    (4)
where β is a parameter that controls the slope of the hyperbolic tangent function, and the net term is a linear function of the state vector:

net_i = Σ_{j=1}^{n} w_ij s_j − I_i    (5)

The computational ability of Hopfield networks stems from the fact that they are stable dynamical systems, which is proved by the existence of a Lyapunov function:

V = −(1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} w_ij s_i s_j + Σ_{i=1}^{n} I_i s_i    (6)

A significant advantage of the Abe formulation is the multinomial form of the Lyapunov function, as shown in equation (6). In particular, the Lyapunov function has the same structure as the multilinear target function of combinatorial optimization problems. This is in contrast to the original formulation by Hopfield [9] and subsequent generalizations [4], where the presence of an integral term introduces additional difficulties. Note, however, that when neurons are updated synchronously at discrete time intervals, the Lyapunov function is identical to that of the Abe formulation [5, 6]; see also [7] and references therein. In the sequel, we stick to the Abe formulation in continuous time. For this system, the fact that the function V of equation (6) is indeed a Lyapunov function stems from the identity ∂V/∂s_i = −net_i, which leads to

V̇ = Σ_{i=1}^{n} (∂V/∂s_i) ṡ_i = Σ_{i=1}^{n} (∂V/∂s_i) (ds_i/dµ_i) µ̇_i = −Σ_{i=1}^{n} (1/β) (net_i)² tanh′(µ_i/β) ≤ 0    (7)

and the other conditions for V being a Lyapunov function are trivially satisfied (see [10] for a discussion on Hopfield dynamics). The parameters w and I are called weights and biases, respectively. A remarkable property of Hopfield dynamics is that the states remain in a bounded region, namely the hypercube s_i ∈ [−1, 1], due to the presence of the bounded function tanh. Therefore, the intended solution should belong to this hypercube, otherwise it will never be attained.
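Equations (4)-(7) can be checked numerically. The following sketch (an illustrative two-neuron network with forward-Euler integration; the weights, biases and β are arbitrary choices, not from the paper) integrates the Abe dynamics and verifies that the Lyapunov function of equation (6) never increases and that the state stays inside the hypercube.

```python
import numpy as np

# Abe-formulation Hopfield network, equations (4)-(6), integrated by
# forward Euler; weights, biases and beta are illustrative values.
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])           # symmetric weight matrix
I = np.array([0.2, -0.1])            # bias vector
beta = 0.5

def net(s):
    return W @ s - I                 # equation (5)

def lyapunov(s):
    return -0.5 * s @ W @ s + I @ s  # equation (6)

mu = np.array([0.1, -0.2])           # initial neuron potentials
s = np.tanh(mu / beta)
V0 = lyapunov(s)
dt, V_prev = 1e-3, V0
for _ in range(5000):
    mu = mu + dt * net(s)            # mu_dot = net(s), equation (4)
    s = np.tanh(mu / beta)
    V = lyapunov(s)
    assert V <= V_prev + 1e-9        # V never increases, cf. equation (7)
    V_prev = V

print("initial V:", V0, " final V:", V_prev)
print("final state:", s)             # remains inside [-1, 1]^2
```

The per-step assertion is the discrete counterpart of V̇ ≤ 0; with a small enough step, the Euler trajectory inherits the monotone decrease of V.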
The application of the Hopfield model to optimization is a consequence of its stability: since the network seeks a minimum of its Lyapunov function, it can be regarded as a minimization method, as long as the target function can be identified with the Lyapunov function. The parameters w, I are then obtained from the numeric values in the target function [19]. In order to perform system identification, the squared norm of the prediction error, as defined in equation (2), is chosen as the target function. However, since A and y are time-varying, the known stability proofs are no longer valid, hence the existence of a Lyapunov function cannot be taken for granted. Assume that time is frozen, so that A_c = A(x(t), u(t))
is a constant matrix and y_c = y(t) is a constant vector, and consider the Lyapunov function candidate V = (1/2) eᵀ e which, after some algebra, results in:

V = (1/2) θ̂ᵀ A_cᵀ A_c θ̂ + θ̂ᵀ (A_cᵀ A_c θ_n − A_cᵀ y_c) + V₁    (8)

where V₁ does not depend on θ̂, so it can be neglected without changing the position of the minima of V. Therefore, this equation is structurally identical to the standard Lyapunov function of a Hopfield network, equation (6), whose states represent the estimation, as long as the weight matrix and the bias vector are appropriately defined:

W = −A_cᵀ A_c;  I = A_cᵀ A_c θ_n − A_cᵀ y_c    (9)

Therefore, under the assumption of A and y being constant, the Abe formulation of the Hopfield model provides an estimation that minimizes the prediction error. In the next section, simulation results are presented to show that the assumption that A and y are constant may be relaxed, and the prediction error produced by the resulting dynamical system is still reduced. With regard to an exact stability analysis, the stability of slowly varying dynamical systems has been shown [21], but this result is not directly applicable to the neural estimator, since it first requires proving the exponential stability of every frozen system. Instead, the neural network can be analysed by means of Barbalat's lemma [17], which guarantees the stability of a system if a Lyapunov function is found whose temporal derivative is uniformly continuous. Further difficulties arise when the parameter vector θ is time-varying, since then only boundedness of the solution can be proved [11]. The technical details of the stability analysis will be published in a separate paper [3]. It is remarkable that the only difference between the obtained Hopfield dynamics and the already known gradient method given by equation (3) is the presence of the nonlinear sigmoid-like function tanh.
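A minimal sketch of how equation (9) turns the network into an estimator, under the frozen-time assumption (the regressor A_c, nominal vector and true deviation below are illustrative values, and the integration is plain forward Euler): the network state, read through tanh, converges to the true parameter deviation, since the equilibrium of µ̇ = W s − I is exactly the point of zero prediction error.

```python
import numpy as np

# Hopfield estimator sketch under frozen A and y, using equation (9).
# The regressor, nominal vector and deviation below are illustrative.
Ac = np.array([[1.0, 0.3],
               [0.2, 1.5],
               [0.7, 0.4]])          # frozen regressor: 3 outputs, 2 params
theta_n = np.array([1.0, -0.5])      # nominal parameters
theta_dev = np.array([0.4, -0.3])    # actual deviation, inside [-1, 1]^2
yc = Ac @ (theta_n + theta_dev)      # frozen measured output

W = -Ac.T @ Ac                       # weights, equation (9)
I = Ac.T @ Ac @ theta_n - Ac.T @ yc  # biases, equation (9)
beta, dt = 0.1, 1e-3

mu = np.zeros(2)                     # potentials: initial estimate is zero
for _ in range(100_000):
    s = np.tanh(mu / beta)
    mu += dt * (W @ s - I)           # mu_dot = net(s)

print("estimated deviation:", np.tanh(mu / beta))   # close to theta_dev
```

Note that W s − I = −A_cᵀ[A_c(θ_n + s) − y_c], i.e. the gradient dynamics of equation (3) filtered through tanh, which is the structural identity remarked above.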
A requirement for the application of this methodology is the knowledge of a bounded region around θ_n, because the network states cannot abandon the hypercube s_i ∈ [−1, 1]. However, this region is not required to coincide with the hypercube, since it can be mapped onto the hypercube by an appropriate scaling. The proposed method is simulated, and its performance compared to the gradient method, in the following section.

4. Simulation Results

We will show the efficiency of the proposed method by simulating an idealized single-link manipulator (Figure 1), modelled by the following equation:

y = ẍ = −(g/l) sin x − (v/(m l²)) ẋ + (1/(m l²)) u    (10)
Figure 1. Single link manipulator.

where x is the angle between the manipulator and the vertical, ẋ is its angular velocity, g is the gravitational acceleration, l is the manipulator arm length, v is the friction coefficient, m is the mass being transported at the arm extreme, and u is the torque produced by a motor, which can be regarded as the input or control signal. If gravity is assumed constant, the physical parameters are l, v and m. In order to cast the model into the linear-in-the-parameters form, we define the formal parameter vector π = θ_n + θ = (−g/l, −v/(m l²), 1/(m l²)). Then, the matrix A can be defined as A = (sin x, ẋ, u), so that equation (1) holds. The control signal u is defined as a sum of three sinusoids displaced according to the Schroeder phase [12], to achieve persistent excitation without excessive amplitude. The setting of the simulated experiment is as follows. At the beginning, the deviation of the actual parameters from the nominal values is θ(t = 0) = ( ). Since the initial estimation vector θ̂ is null, the estimation error at t = 0 is considerable, in order to determine whether the methods are able to converge to the correct values or get stuck at wrong estimations. Besides, the system is subject to a variable load, the transported mass changing abruptly at t = 15 s. The gains k = 10 and k = 100 have been chosen for the gradient estimator, while no special attention has been paid to the selection of an optimal value for the parameter β of the Hopfield network. The performance of the Hopfield network is graphically compared to that of the gradient method. In particular, the error in the first component θ_1 is plotted in Figure 2, where we observe a short transient during which all three estimators oscillate.
Then, the Hopfield network converges rapidly to the actual value; the gradient estimator with k = 100 oscillates wildly; and the gradient estimator with k = 10 also converges, but so slowly that it has not reached the actual value when the parameter change at t = 15 s occurs. After the load change, both gradient estimations are distorted, while the network response is practically unaffected. The problematic trade-off of the gradient method between an oscillatory response (k = 100) and slow convergence (k = 10) is clear, while the neural network converges quickly and its oscillations are appropriately damped. We also compare the integral of the prediction error of the Hopfield estimator to that of the gradient method in Figure 3. The Hopfield network and the gradient with k = 100 both produce an accurate prediction, but the gradient with k = 10 accumulates significant error, mainly after the load change at t = 15 s.
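The experiment can be re-created in outline as follows. This is a hedged sketch: the physical constants, nominal values, deviation, gains, input signal and step size below are illustrative choices, not the paper's exact settings, and no load change is simulated. Both estimators are driven by the same simulated measurements of equation (10).

```python
import numpy as np

# Single-link manipulator of equation (10), with both on-line estimators.
# All numeric values (constants, gains, input, step size) are illustrative.
g, l, v, m = 9.8, 1.0, 0.5, 1.0
pi_true = np.array([-g / l, -v / (m * l**2), 1.0 / (m * l**2)])
theta_dev = np.array([0.5, -0.2, 0.3])     # deviation, inside [-1, 1]^3
theta_n = pi_true - theta_dev              # nominal parameter vector

def control(t):
    # Sum of sinusoids: a crude stand-in for a persistently exciting input
    return np.sin(t) + 0.8 * np.sin(2.1 * t) + 0.6 * np.sin(3.7 * t)

dt, T, k, beta = 1e-3, 30.0, 10.0, 0.1
x, xd = 0.3, 0.0                 # pendulum angle and angular velocity
th_grad = np.zeros(3)            # gradient estimate of the deviation
mu = np.zeros(3)                 # Hopfield neuron potentials

for i in range(int(T / dt)):
    u = control(i * dt)
    A = np.array([np.sin(x), xd, u])
    y = A @ pi_true                            # measured acceleration
    # Gradient estimator, equation (3)
    th_grad += dt * (-2 * k * A * (A @ (theta_n + th_grad) - y))
    # Hopfield estimator: mu_dot = W s - I = -A^T e(s), via equation (9)
    s = np.tanh(mu / beta)
    mu += dt * (-A * (A @ (theta_n + s) - y))
    # Integrate the manipulator itself
    x, xd = x + dt * xd, xd + dt * y

th_hopf = np.tanh(mu / beta)
print("gradient deviation error:", np.linalg.norm(th_grad - theta_dev))
print("Hopfield deviation error:", np.linalg.norm(th_hopf - theta_dev))
```

With a persistently exciting input, both estimates approach the true deviation, and the Hopfield state remains inside the hypercube by construction, regardless of the gain tuning.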
Figure 2. Estimation error for θ_1 (gradient with k = 10, gradient with k = 100, and Hopfield).

Figure 3. Integral of the prediction error (gradient with k = 10, gradient with k = 100, and Hopfield).

5. Conclusions

In this work, the optimization methodology of Hopfield and Tank is applied to parameter estimation in the context of system identification. When restricted to linear-in-the-parameters systems, simulation results show that this technique presents faster convergence as well as reduced oscillation compared with the known gradient method. Furthermore, no parameter exists that must be adjusted ad hoc for each problem. The main requirement for the application of the method is that the parameter values must belong to a known compact interval, which is a mild assumption in robotic systems, where the physical meaning of the parameters provides hints of their values. The positive aspect is that there is no need
for complicated algorithms, such as normalization or projection, in order to keep the estimations bounded. Simulations have been performed for a simplified example, and the proposed method outperforms two instances, with different gain choices, of the known gradient method. Currently, we are engaged in deeper research on the neural estimator, developing several aspects, such as:

- The application of the neural estimator to more realistic models, such as robots with flexible links.
- The theoretical study of the convergence and stability of the Hopfield estimator. In the case of sudden parameter changes, the simulations presented here suggest that convergence to the exact parameters is feasible. However, when parameters vary continuously with time, only convergence of the estimation error towards a bounded neighbourhood can be proved.
- The extension of the method to non-LIP systems, by means of higher-order Hopfield networks [16, 10]. In this case, a general nonlinear function must be approximated, e.g. by its Taylor polynomial, so that it matches the Lyapunov function of higher-order Hopfield networks, which is multilinear.
- The inclusion of the estimator in an adaptive control system, and the stability of the resulting closed-loop system.

To summarize, the estimation method based upon Hopfield networks presents favourable properties: it performs on-line estimation and it is able to deal with time-varying parameters. In particular, the neural estimator is well suited to systems in the field of robotics, where physical models are available and inputs and states can be accurately measured.

Acknowledgments

This work has been partially supported by the Spanish Ministerio de Ciencia y Tecnología (MCYT), Project No. TIC. The careful reading and useful suggestions of the reviewers are gratefully acknowledged.

References

1. Abe, S.: Theories on the Hopfield neural networks.
In: Proceedings IEEE International Joint Conference on Neural Networks, Vol. I, (1989).
2. Atencia, M. A., Joya, G. and Sandoval, F.: Modelling the HIV-AIDS Cuban epidemics with Hopfield neural networks. In: J. Mira and J. R. Álvarez (eds.): Artificial Neural Nets Problem Solving Methods. Lecture Notes in Computer Science (IWANN 2003), Vol. 2687, (2003).
3. Atencia, M. A., Joya, G. and Sandoval, F.: Parametric identification of robotic systems with stable time-varying Hopfield networks, Neural Computing & Applications, 13 (2004), in press.
4. Cohen, M. and Grossberg, S.: Absolute stability of global pattern formation and parallel memory storage by competitive neural networks, IEEE Transactions on Systems, Man and Cybernetics, 13(5) (1983).
5. Golden, R. M.: The brain-state-in-a-box neural model is a gradient descent algorithm, Journal of Mathematical Psychology, 30(1) (1986).
6. Golden, R. M.: Stability and optimization analyses of the generalized brain-state-in-a-box neural network model, Journal of Mathematical Psychology, 37(2) (1993).
7. Golden, R. M.: Mathematical Methods for Neural Network Analysis and Design. Cambridge, MA: MIT Press.
8. Hopfield, J.: Neural networks and physical systems with emergent collective computational abilities, Proceedings of the National Academy of Sciences USA, 79 (1982).
9. Hopfield, J.: Neurons with graded response have collective computational properties like those of two-state neurons, Proceedings of the National Academy of Sciences USA, 81 (1984).
10. Joya, G., Atencia, M. A. and Sandoval, F.: Hopfield neural networks for optimization: study of the different dynamics, Neurocomputing, 43(1-4) (2002).
11. Khalil, H. K.: Nonlinear Systems. New York: Prentice Hall.
12. Ljung, L.: System Identification. Theory for the User. New York: Prentice Hall.
13. Marino, R.: Adaptive control of nonlinear systems: basic results and applications, Annual Reviews in Control, 21 (1997).
14. Marino, R. and Tomei, P.: Global adaptive output-feedback control of nonlinear systems, Part II: nonlinear parameterization, IEEE Transactions on Automatic Control, 38(1) (1993).
15. Pedroso-Rodríguez, L. M., Marrero, A. and Arazoza, H.: Nonlinear parametric model identification using genetic algorithms. In: J. Mira and J. R. Álvarez (eds.): Artificial Neural Nets Problem Solving Methods. Lecture Notes in Computer Science (IWANN 2003), Vol. 2687, (2003).
16. Samad, T. and Harper, P.: High-order Hopfield and Tank optimization networks, Parallel Computing, 16 (1990).
17. Slotine, J.-J. and Li, W.: Applied Nonlinear Control. New York: Prentice Hall.
18. Spong, M. W. and Vidyasagar, M.: Robot Dynamics and Control. New York: John Wiley & Sons.
19. Tank, D. and Hopfield, J.: Neural computation of decisions in optimization problems, Biological Cybernetics, 52 (1985).
20. Unbehauen, H.: Some new trends in identification and modeling of nonlinear dynamical systems, Applied Mathematics and Computation, 78 (1996).
21. Vidyasagar, M.: Nonlinear Systems Analysis. New York: Prentice Hall, 1993.
Proceedings of the 11th WSEAS International Conference on SSTEMS Agios ikolaos Crete Island Greece July 23-25 27 38 Model Reference Adaptive Control of Underwater Robotic Vehicle in Plane Motion j.garus@amw.gdynia.pl
More information458 IEEE TRANSACTIONS ON CONTROL SYSTEMS TECHNOLOGY, VOL. 16, NO. 3, MAY 2008
458 IEEE TRANSACTIONS ON CONTROL SYSTEMS TECHNOLOGY, VOL 16, NO 3, MAY 2008 Brief Papers Adaptive Control for Nonlinearly Parameterized Uncertainties in Robot Manipulators N V Q Hung, Member, IEEE, H D
More informationEN Nonlinear Control and Planning in Robotics Lecture 3: Stability February 4, 2015
EN530.678 Nonlinear Control and Planning in Robotics Lecture 3: Stability February 4, 2015 Prof: Marin Kobilarov 0.1 Model prerequisites Consider ẋ = f(t, x). We will make the following basic assumptions
More informationA Novel Integral-Based Event Triggering Control for Linear Time-Invariant Systems
53rd IEEE Conference on Decision and Control December 15-17, 2014. Los Angeles, California, USA A Novel Integral-Based Event Triggering Control for Linear Time-Invariant Systems Seyed Hossein Mousavi 1,
More information1.1 OBJECTIVE AND CONTENTS OF THE BOOK
1 Introduction 1.1 OBJECTIVE AND CONTENTS OF THE BOOK Hysteresis is a nonlinear phenomenon exhibited by systems stemming from various science and engineering areas: under a low-frequency periodic excitation,
More informationMachine Learning. Neural Networks. (slides from Domingos, Pardo, others)
Machine Learning Neural Networks (slides from Domingos, Pardo, others) Human Brain Neurons Input-Output Transformation Input Spikes Output Spike Spike (= a brief pulse) (Excitatory Post-Synaptic Potential)
More informationNonlinear Tracking Control of Underactuated Surface Vessel
American Control Conference June -. Portland OR USA FrB. Nonlinear Tracking Control of Underactuated Surface Vessel Wenjie Dong and Yi Guo Abstract We consider in this paper the tracking control problem
More informationSelf-Tuning Control for Synchronous Machine Stabilization
http://dx.doi.org/.5755/j.eee.2.4.2773 ELEKTRONIKA IR ELEKTROTECHNIKA, ISSN 392-25, VOL. 2, NO. 4, 25 Self-Tuning Control for Synchronous Machine Stabilization Jozef Ritonja Faculty of Electrical Engineering
More informationIntroduction. 1.1 Historical Overview. Chapter 1
Chapter 1 Introduction 1.1 Historical Overview Research in adaptive control was motivated by the design of autopilots for highly agile aircraft that need to operate at a wide range of speeds and altitudes,
More informationUnit quaternion observer based attitude stabilization of a rigid spacecraft without velocity measurement
Proceedings of the 45th IEEE Conference on Decision & Control Manchester Grand Hyatt Hotel San Diego, CA, USA, December 3-5, 6 Unit quaternion observer based attitude stabilization of a rigid spacecraft
More informationDesign Artificial Nonlinear Controller Based on Computed Torque like Controller with Tunable Gain
World Applied Sciences Journal 14 (9): 1306-1312, 2011 ISSN 1818-4952 IDOSI Publications, 2011 Design Artificial Nonlinear Controller Based on Computed Torque like Controller with Tunable Gain Samira Soltani
More informationAdditive resonances of a controlled van der Pol-Duffing oscillator
Additive resonances of a controlled van der Pol-Duffing oscillator This paper has been published in Journal of Sound and Vibration vol. 5 issue - 8 pp.-. J.C. Ji N. Zhang Faculty of Engineering University
More informationAn Approach of Robust Iterative Learning Control for Uncertain Systems
,,, 323 E-mail: mxsun@zjut.edu.cn :, Lyapunov( ),,.,,,.,,. :,,, An Approach of Robust Iterative Learning Control for Uncertain Systems Mingxuan Sun, Chaonan Jiang, Yanwei Li College of Information Engineering,
More informationIN THIS PAPER, we consider a class of continuous-time recurrent
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: EXPRESS BRIEFS, VOL. 51, NO. 4, APRIL 2004 161 Global Output Convergence of a Class of Continuous-Time Recurrent Neural Networks With Time-Varying Thresholds
More informationStable Limit Cycle Generation for Underactuated Mechanical Systems, Application: Inertia Wheel Inverted Pendulum
Stable Limit Cycle Generation for Underactuated Mechanical Systems, Application: Inertia Wheel Inverted Pendulum Sébastien Andary Ahmed Chemori Sébastien Krut LIRMM, Univ. Montpellier - CNRS, 6, rue Ada
More informationELEC4631 s Lecture 2: Dynamic Control Systems 7 March Overview of dynamic control systems
ELEC4631 s Lecture 2: Dynamic Control Systems 7 March 2011 Overview of dynamic control systems Goals of Controller design Autonomous dynamic systems Linear Multi-input multi-output (MIMO) systems Bat flight
More informationAdaptive estimation in nonlinearly parameterized nonlinear dynamical systems
2 American Control Conference on O'Farrell Street, San Francisco, CA, USA June 29 - July, 2 Adaptive estimation in nonlinearly parameterized nonlinear dynamical systems Veronica Adetola, Devon Lehrer and
More informationStability Analysis of the Simplest Takagi-Sugeno Fuzzy Control System Using Popov Criterion
Stability Analysis of the Simplest Takagi-Sugeno Fuzzy Control System Using Popov Criterion Xiaojun Ban, X. Z. Gao, Xianlin Huang 3, and Hang Yin 4 Department of Control Theory and Engineering, Harbin
More informationMCE/EEC 647/747: Robot Dynamics and Control. Lecture 12: Multivariable Control of Robotic Manipulators Part II
MCE/EEC 647/747: Robot Dynamics and Control Lecture 12: Multivariable Control of Robotic Manipulators Part II Reading: SHV Ch.8 Mechanical Engineering Hanz Richter, PhD MCE647 p.1/14 Robust vs. Adaptive
More informationDesign and Analysis of Maximum Hopfield Networks
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 12, NO. 2, MARCH 2001 329 Design and Analysis of Maximum Hopfield Networks Gloria Galán-Marín and José Muñoz-Pérez Abstract Since McCulloch and Pitts presented
More informationMachine Learning. Neural Networks. (slides from Domingos, Pardo, others)
Machine Learning Neural Networks (slides from Domingos, Pardo, others) For this week, Reading Chapter 4: Neural Networks (Mitchell, 1997) See Canvas For subsequent weeks: Scaling Learning Algorithms toward
More informationLearning Gaussian Process Models from Uncertain Data
Learning Gaussian Process Models from Uncertain Data Patrick Dallaire, Camille Besse, and Brahim Chaib-draa DAMAS Laboratory, Computer Science & Software Engineering Department, Laval University, Canada
More informationNeural Network-Based Adaptive Control of Robotic Manipulator: Application to a Three Links Cylindrical Robot
Vol.3 No., 27 مجلد 3 العدد 27 Neural Network-Based Adaptive Control of Robotic Manipulator: Application to a Three Links Cylindrical Robot Abdul-Basset A. AL-Hussein Electrical Engineering Department Basrah
More informationCSC321 Lecture 7: Optimization
CSC321 Lecture 7: Optimization Roger Grosse Roger Grosse CSC321 Lecture 7: Optimization 1 / 25 Overview We ve talked a lot about how to compute gradients. What do we actually do with them? Today s lecture:
More informationKINETIC ENERGY SHAPING IN THE INVERTED PENDULUM
KINETIC ENERGY SHAPING IN THE INVERTED PENDULUM J. Aracil J.A. Acosta F. Gordillo Escuela Superior de Ingenieros Universidad de Sevilla Camino de los Descubrimientos s/n 49 - Sevilla, Spain email:{aracil,
More informationPrediction-based adaptive control of a class of discrete-time nonlinear systems with nonlinear growth rate
www.scichina.com info.scichina.com www.springerlin.com Prediction-based adaptive control of a class of discrete-time nonlinear systems with nonlinear growth rate WEI Chen & CHEN ZongJi School of Automation
More information4. Complex Oscillations
4. Complex Oscillations The most common use of complex numbers in physics is for analyzing oscillations and waves. We will illustrate this with a simple but crucially important model, the damped harmonic
More informationOn-line Learning of Robot Arm Impedance Using Neural Networks
On-line Learning of Robot Arm Impedance Using Neural Networks Yoshiyuki Tanaka Graduate School of Engineering, Hiroshima University, Higashi-hiroshima, 739-857, JAPAN Email: ytanaka@bsys.hiroshima-u.ac.jp
More informationEN Nonlinear Control and Planning in Robotics Lecture 10: Lyapunov Redesign and Robust Backstepping April 6, 2015
EN530.678 Nonlinear Control and Planning in Robotics Lecture 10: Lyapunov Redesign and Robust Backstepping April 6, 2015 Prof: Marin Kobilarov 1 Uncertainty and Lyapunov Redesign Consider the system [1]
More informationCSC321 Lecture 5: Multilayer Perceptrons
CSC321 Lecture 5: Multilayer Perceptrons Roger Grosse Roger Grosse CSC321 Lecture 5: Multilayer Perceptrons 1 / 21 Overview Recall the simple neuron-like unit: y output output bias i'th weight w 1 w2 w3
More informationTrigonometric Saturated Controller for Robot Manipulators
Trigonometric Saturated Controller for Robot Manipulators FERNANDO REYES, JORGE BARAHONA AND EDUARDO ESPINOSA Grupo de Robótica de la Facultad de Ciencias de la Electrónica Benemérita Universidad Autónoma
More informationNonlinear System Identification Using MLP Dr.-Ing. Sudchai Boonto
Dr-Ing Sudchai Boonto Department of Control System and Instrumentation Engineering King Mongkut s Unniversity of Technology Thonburi Thailand Nonlinear System Identification Given a data set Z N = {y(k),
More informationLecture 9 Nonlinear Control Design
Lecture 9 Nonlinear Control Design Exact-linearization Lyapunov-based design Lab 2 Adaptive control Sliding modes control Literature: [Khalil, ch.s 13, 14.1,14.2] and [Glad-Ljung,ch.17] Course Outline
More informationExercises Lecture 15
AM1 Mathematical Analysis 1 Oct. 011 Feb. 01 Date: January 7 Exercises Lecture 15 Harmonic Oscillators In classical mechanics, a harmonic oscillator is a system that, when displaced from its equilibrium
More informationDamped harmonic motion
Damped harmonic motion March 3, 016 Harmonic motion is studied in the presence of a damping force proportional to the velocity. The complex method is introduced, and the different cases of under-damping,
More informationarxiv: v1 [cond-mat.stat-mech] 6 Mar 2008
CD2dBS-v2 Convergence dynamics of 2-dimensional isotropic and anisotropic Bak-Sneppen models Burhan Bakar and Ugur Tirnakli Department of Physics, Faculty of Science, Ege University, 35100 Izmir, Turkey
More informationModified Hopfield Neural Network Approach for Solving Nonlinear Algebraic Equations
Engineering Letters, 14:1, EL_14_1_3 (Advance online publication: 1 February 007) Modified Hopfield Neural Network Approach for Solving Nonlinear Algebraic Equations {Deepak Mishra, Prem K. Kalra} Abstract
More informationNonlinear PD Controllers with Gravity Compensation for Robot Manipulators
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 4, No Sofia 04 Print ISSN: 3-970; Online ISSN: 34-408 DOI: 0.478/cait-04-00 Nonlinear PD Controllers with Gravity Compensation
More informationControl of Electromechanical Systems
Control of Electromechanical Systems November 3, 27 Exercise Consider the feedback control scheme of the motor speed ω in Fig., where the torque actuation includes a time constant τ A =. s and a disturbance
More informationH 2 Adaptive Control. Tansel Yucelen, Anthony J. Calise, and Rajeev Chandramohan. WeA03.4
1 American Control Conference Marriott Waterfront, Baltimore, MD, USA June 3-July, 1 WeA3. H Adaptive Control Tansel Yucelen, Anthony J. Calise, and Rajeev Chandramohan Abstract Model reference adaptive
More informationUsing Neural Networks for Identification and Control of Systems
Using Neural Networks for Identification and Control of Systems Jhonatam Cordeiro Department of Industrial and Systems Engineering North Carolina A&T State University, Greensboro, NC 27411 jcrodrig@aggies.ncat.edu
More informationIntelligent Control. Module I- Neural Networks Lecture 7 Adaptive Learning Rate. Laxmidhar Behera
Intelligent Control Module I- Neural Networks Lecture 7 Adaptive Learning Rate Laxmidhar Behera Department of Electrical Engineering Indian Institute of Technology, Kanpur Recurrent Networks p.1/40 Subjects
More informationVideo 8.1 Vijay Kumar. Property of University of Pennsylvania, Vijay Kumar
Video 8.1 Vijay Kumar 1 Definitions State State equations Equilibrium 2 Stability Stable Unstable Neutrally (Critically) Stable 3 Stability Translate the origin to x e x(t) =0 is stable (Lyapunov stable)
More informationGaussian Estimation under Attack Uncertainty
Gaussian Estimation under Attack Uncertainty Tara Javidi Yonatan Kaspi Himanshu Tyagi Abstract We consider the estimation of a standard Gaussian random variable under an observation attack where an adversary
More informationDisplacement at very low frequencies produces very low accelerations since:
SEISMOLOGY The ability to do earthquake location and calculate magnitude immediately brings us into two basic requirement of instrumentation: Keeping accurate time and determining the frequency dependent
More informationCoordinated Path Following for Mobile Robots
Coordinated Path Following for Mobile Robots Kiattisin Kanjanawanishkul, Marius Hofmeister, and Andreas Zell University of Tübingen, Department of Computer Science, Sand 1, 7276 Tübingen Abstract. A control
More informationBIOCOMP 11 Stability Analysis of Hybrid Stochastic Gene Regulatory Networks
BIOCOMP 11 Stability Analysis of Hybrid Stochastic Gene Regulatory Networks Anke Meyer-Baese, Claudia Plant a, Jan Krumsiek, Fabian Theis b, Marc R. Emmett c and Charles A. Conrad d a Department of Scientific
More informationGAIN SCHEDULING CONTROL WITH MULTI-LOOP PID FOR 2- DOF ARM ROBOT TRAJECTORY CONTROL
GAIN SCHEDULING CONTROL WITH MULTI-LOOP PID FOR 2- DOF ARM ROBOT TRAJECTORY CONTROL 1 KHALED M. HELAL, 2 MOSTAFA R.A. ATIA, 3 MOHAMED I. ABU EL-SEBAH 1, 2 Mechanical Engineering Department ARAB ACADEMY
More informationArtificial Neural Networks Examination, June 2005
Artificial Neural Networks Examination, June 2005 Instructions There are SIXTY questions. (The pass mark is 30 out of 60). For each question, please select a maximum of ONE of the given answers (either
More informationNeural Networks. Advanced data-mining. Yongdai Kim. Department of Statistics, Seoul National University, South Korea
Neural Networks Advanced data-mining Yongdai Kim Department of Statistics, Seoul National University, South Korea What is Neural Networks? One of supervised learning method using one or more hidden layer.
More informationIntroduction to Neural Networks
Introduction to Neural Networks What are (Artificial) Neural Networks? Models of the brain and nervous system Highly parallel Process information much more like the brain than a serial computer Learning
More informationMultilayer Perceptron
Outline Hong Chang Institute of Computing Technology, Chinese Academy of Sciences Machine Learning Methods (Fall 2012) Outline Outline I 1 Introduction 2 Single Perceptron 3 Boolean Function Learning 4
More informationLecture 7 Artificial neural networks: Supervised learning
Lecture 7 Artificial neural networks: Supervised learning Introduction, or how the brain works The neuron as a simple computing element The perceptron Multilayer neural networks Accelerated learning in
More informationNavigation and Obstacle Avoidance via Backstepping for Mechanical Systems with Drift in the Closed Loop
Navigation and Obstacle Avoidance via Backstepping for Mechanical Systems with Drift in the Closed Loop Jan Maximilian Montenbruck, Mathias Bürger, Frank Allgöwer Abstract We study backstepping controllers
More informationDirect Method for Training Feed-forward Neural Networks using Batch Extended Kalman Filter for Multi- Step-Ahead Predictions
Direct Method for Training Feed-forward Neural Networks using Batch Extended Kalman Filter for Multi- Step-Ahead Predictions Artem Chernodub, Institute of Mathematical Machines and Systems NASU, Neurotechnologies
More informationCase Study: The Pelican Prototype Robot
5 Case Study: The Pelican Prototype Robot The purpose of this chapter is twofold: first, to present in detail the model of the experimental robot arm of the Robotics lab. from the CICESE Research Center,
More informationArtificial Neural Network and Fuzzy Logic
Artificial Neural Network and Fuzzy Logic 1 Syllabus 2 Syllabus 3 Books 1. Artificial Neural Networks by B. Yagnanarayan, PHI - (Cover Topologies part of unit 1 and All part of Unit 2) 2. Neural Networks
More informationA STATE-SPACE NEURAL NETWORK FOR MODELING DYNAMICAL NONLINEAR SYSTEMS
A STATE-SPACE NEURAL NETWORK FOR MODELING DYNAMICAL NONLINEAR SYSTEMS Karima Amoura Patrice Wira and Said Djennoune Laboratoire CCSP Université Mouloud Mammeri Tizi Ouzou Algeria Laboratoire MIPS Université
More informationEECE Adaptive Control
EECE 574 - Adaptive Control Recursive Identification in Closed-Loop and Adaptive Control Guy Dumont Department of Electrical and Computer Engineering University of British Columbia January 2010 Guy Dumont
More informationContraction Based Adaptive Control of a Class of Nonlinear Systems
9 American Control Conference Hyatt Regency Riverfront, St. Louis, MO, USA June -, 9 WeB4.5 Contraction Based Adaptive Control of a Class of Nonlinear Systems B. B. Sharma and I. N. Kar, Member IEEE Abstract
More informationA new large projection operator for the redundancy framework
21 IEEE International Conference on Robotics and Automation Anchorage Convention District May 3-8, 21, Anchorage, Alaska, USA A new large projection operator for the redundancy framework Mohammed Marey
More information13 Path Planning Cubic Path P 2 P 1. θ 2
13 Path Planning Path planning includes three tasks: 1 Defining a geometric curve for the end-effector between two points. 2 Defining a rotational motion between two orientations. 3 Defining a time function
More informationChaos suppression of uncertain gyros in a given finite time
Chin. Phys. B Vol. 1, No. 11 1 1155 Chaos suppression of uncertain gyros in a given finite time Mohammad Pourmahmood Aghababa a and Hasan Pourmahmood Aghababa bc a Electrical Engineering Department, Urmia
More informationCSC321 Lecture 8: Optimization
CSC321 Lecture 8: Optimization Roger Grosse Roger Grosse CSC321 Lecture 8: Optimization 1 / 26 Overview We ve talked a lot about how to compute gradients. What do we actually do with them? Today s lecture:
More informationfraction dt (0 < dt 1) from its present value to the goal net value: Net y (s) = Net y (s-1) + dt (GoalNet y (s) - Net y (s-1)) (2)
The Robustness of Relaxation Rates in Constraint Satisfaction Networks D. Randall Wilson Dan Ventura Brian Moncur fonix corporation WilsonR@fonix.com Tony R. Martinez Computer Science Department Brigham
More informationLecture 9 Nonlinear Control Design. Course Outline. Exact linearization: example [one-link robot] Exact Feedback Linearization
Lecture 9 Nonlinear Control Design Course Outline Eact-linearization Lyapunov-based design Lab Adaptive control Sliding modes control Literature: [Khalil, ch.s 13, 14.1,14.] and [Glad-Ljung,ch.17] Lecture
More informationTemporal Backpropagation for FIR Neural Networks
Temporal Backpropagation for FIR Neural Networks Eric A. Wan Stanford University Department of Electrical Engineering, Stanford, CA 94305-4055 Abstract The traditional feedforward neural network is a static
More informationArtificial Neural Networks Examination, June 2004
Artificial Neural Networks Examination, June 2004 Instructions There are SIXTY questions (worth up to 60 marks). The exam mark (maximum 60) will be added to the mark obtained in the laborations (maximum
More informationData Mining Part 5. Prediction
Data Mining Part 5. Prediction 5.5. Spring 2010 Instructor: Dr. Masoud Yaghini Outline How the Brain Works Artificial Neural Networks Simple Computing Elements Feed-Forward Networks Perceptrons (Single-layer,
More information1 Lyapunov theory of stability
M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability
More information