1995 American Control Conference, June 21-23, 1995, Seattle, Washington

Reconfigurable Control of a Free-Flying Space Robot Using Neural Networks

Edward Wilson (Ph.D. Candidate, Department of Mechanical Engineering; research supported by NASA and AFOSR)
Stephen M. Rock (Associate Professor, Department of Aeronautics and Astronautics)
Stanford University Aerospace Robotics Laboratory, Stanford, California 94305
ed,rock@sun-valley.stanford.edu

Abstract

An experimental demonstration of a new, reconfigurable neural-network-based adaptive control system is presented. The system under control is a laboratory model of a free-flying space robot whose position and attitude are controlled using eight on-off air thrusters. The neural-network controller adapts in real time to account for multiple destabilizing thruster failures. The adaptive control system is similar in structure to a conventional indirect adaptive control system: while a system identification process builds a model of the robot, a neural controller is trained concurrently using backpropagation to optimize performance with respect to this model. The active controller is updated every few seconds, yielding quick adaptation. Stability is restored within seconds, system identification is complete within seconds, and near-optimal performance is achieved within minutes.

1. Introduction

The nonlinear, parallel, and adaptive capabilities of neural networks make them promising for control applications. Neural networks derive their advantage in solving very complex problems from the emergent properties that come with the massive interconnection of simple processing units. With good training techniques, the networks are capable of implementing very complex behaviors, and numerous examples in the literature demonstrate the potential of neural-network control [1]. There are three important issues, however, that often arise in real-world control applications and that have not been effectively addressed in the neural-network literature:

1. A priori knowledge is often available in the form of models of the system's key components and a preliminary control design (e.g., one provided by "conventional" control design techniques). Is it possible to use this a priori information to greatly improve the performance the neural network can then enable?

2. Many control applications involve the use of discrete-valued devices. For example, thrusters often operate "on-off" rather than with analog-valued outputs. This presents a problem for backpropagation learning, since these discrete-valued functions are not continuously differentiable. Is it possible to modify backpropagation to accommodate discrete-valued functions?

3. Speed of learning is often important in real-time control applications. Can backpropagation learning be made fast enough to be feasible for rapid on-line adaptation?

In addition to the development of a reconfigurable thruster control system, the goal of the work reported here was to develop extensions to neural-network theory that address each of these issues. These developments are reported in the context of the robot control application that follows.

2. Control Application

The control task addressed in this research is the control of position and attitude of a free-flying space robot using on-off thrusters. Control using on-off thrusters is an important problem for real spacecraft [2], and the nonlinear and adaptive capabilities of neural networks make them attractive for this application.
A neural-network-based approximation to this mapping scales well to higher-dimensional thruster controllers and provides a structure conducive to reconfigurable control. Further details regarding the robot control application are presented in [3] [4]. The challenge presented here is to mechanically damage a number of thrusters (as in Figure 2) and then have the control system autonomously and rapidly reconfigure itself in real time. Some thruster failures are strongly destabilizing, which places high demands on the speed of recovery.

The three degrees of freedom (x, y, ψ) of the base are controlled using eight thrusters positioned around its perimeter, as shown in Figure 2. Each thruster produces both a torque and a net force on the robot. This coupling, combined with the on-off nature of the thrusters, substantially complicates the control task. The robot-base-control strategy developed for this system is shown in Figure 3. The thruster mapping task that must be performed during each sample period is to take an input vector of continuous-valued desired forces and torque, [Fx,des, Fy,des, τdes], and find the optimal output vector of discrete-valued (off, on) thruster values, [T1, T2, ..., T8].

Figure 1: Stanford Free-Flying Space Robot. This highly autonomous mobile robot operates in the horizontal plane, using an air-cushion suspension to simulate the drag-free and zero-g characteristics of space.

The experimental equipment, shown in Figure 1, is a fully self-contained planar laboratory prototype of an autonomous free-flying space robot, complete with on-board gas, thrusters, electrical power, multiprocessor computer system, camera, wireless Ethernet data/communications link, and two cooperating manipulators. It exhibits nearly frictionless motion as it floats above a granite surface plate on a micron-scale cushion of air [5]. Accelerometers and an angular-rate sensor sense base motions.

Figure 2: Example failure mode (nominal configuration and after multiple failures). The magnitude and direction of each of the eight thrusters is shown. Thruster failures were simulated mechanically with weaker thrusters and with angled elbows, including 90° elbows. Some elbows destabilize the robot by changing the sign of the torque produced.

Figure 3: Robot-Base Control (block diagram: the desired state vector Xdes and the measured state X feed a PD controller, which produces the desired force vector Fdes; the Thruster Mapper (NN) converts this to the thruster pattern T; a position sensor closes the loop). The PD control module treats the thrusters as continuous actuators. The thruster mapper must find the thruster pattern that will produce a force closest to that requested by the base-control module.

In [5], a conventional approach is presented that solves this problem by performing an exhaustive search at every sample period. With eight bi-level thrusters, 2^8 = 256 possible thruster combinations exist. By clever use of symmetries, this search space was reduced substantially, making the problem tractable. Unfortunately, this solution method does not scale well for a three-dimensional robot, or when thruster failures are allowed, disrupting the symmetries. This provides the motivation for using a neural network: the neural network is used to implement a nonlinear approximation to the optimal solution, one that can be computed in real time.

3. Reconfigurable Control System

Often the most important, and sometimes the most difficult, aspects of a neural-network control application are the decisions about how to structure the control system and which components are to be neural-network-based. The first issue is to determine whether the application is one where neural networks can efficiently contribute better (and cheaper) control than is achievable without them. If they can, the second issue is to determine in just which segment(s) of the control system they should be used in order to do so.
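To make the cost/benefit comparison concrete, the following sketch illustrates the exhaustive-search baseline described in Section 2 (the thruster-geometry matrix B and the error weighting below are assumed values, not the robot's actual characteristics): all 2^8 = 256 on-off patterns are enumerated and the one whose net force and torque best match the requested [Fx,des, Fy,des, τdes] is selected. It is this computation that the neural thruster mapper approximates.

```python
import itertools
import numpy as np

# Illustrative thruster model (assumed, not the robot's geometry): column i of B
# is the net [Fx, Fy, torque] produced when thruster i is on.
rng = np.random.default_rng(0)
B = rng.uniform(-1.0, 1.0, size=(3, 8))

def exhaustive_thruster_map(f_des, B, weights=(1.0, 1.0, 1.0)):
    """Search all 2^8 = 256 on-off patterns for the one whose net
    force/torque is closest (weighted least squares) to f_des."""
    W = np.diag(weights)
    best_pattern, best_cost = None, np.inf
    for bits in itertools.product((0, 1), repeat=B.shape[1]):
        T = np.array(bits, dtype=float)
        err = B @ T - f_des
        cost = err @ W @ err
        if cost < best_cost:
            best_cost, best_pattern = cost, T
    return best_pattern, best_cost

f_des = np.array([0.5, -0.2, 0.1])        # [Fx_des, Fy_des, torque_des]
pattern, cost = exhaustive_thruster_map(f_des, B)
print("thruster pattern:", pattern.astype(int), " cost:", round(cost, 4))
```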

To determine where neural networks can contribute effectively, the control systems engineer must consider the strengths of neural networks (nonlinear, adaptive, generic, unstructured, parallelizable) as well as the costs associated with these benefits (workings that are difficult to understand or prove stable, iterative design, computational complexity). The cost/benefit balance must be evaluated on an application-by-application basis. First, at the system level, the system requirements and considerations of degree of nonlinearity, adaptation requirements, computational complexity, etc., lead to a candidate system architecture. Then, at the component level, this cost/benefit analysis is repeated, leading to the decision of what sort of subsystem will be used in each segment of the control system [4].

Figure 4: Reconfigurable Control System (block diagram: a user trajectory generator produces Xdes; a PD controller produces the desired force vector Fdes; the neural thruster mapper produces the thruster pattern T; accelerometers and a position sensor feed an ID block that maintains a robot model; an "NN train" block uses this model to retrain the thruster mapper). This control system is based upon a conventional indirect adaptive controller, such as a self-tuning regulator. The ID block represents a recursive least-squares identification of thruster strength and direction. This continually updated model is passed to the NN training block, shown in detail in Figure 5. The continually updated neural thruster mapper is loaded into the active control loop every few seconds.

Applying these principles to the robot control application has resulted in the structure shown in Figure 4, which is modelled after a standard control architecture known as "indirect adaptive control." "Indirect" refers to the use of sensor information to build a model of the system, and then the redesign of a controller based upon the updated plant model. A recursive linear regression component was used for failure detection and identification, since identification of the thruster characteristics is a linear process. The algorithm used to obtain acceleration measurements is nonlinear, but could be derived analytically, so no neural network was used there either. A neural network was used precisely at the location where it is beneficial: the thruster mapper. This is an inscrutable nonlinear function that requires adaptation, and the benefits of a neural-network approach do indeed outweigh the costs. The control redesign process is therefore a backpropagation-based neural-network training algorithm.

4. Neural-Network Developments

In developing the control system, two major neural-network challenges were faced that resulted in the development of a new network architecture and training method. These developments are mentioned briefly here and in more detail in [4] [6] [8].

4.1. Fully-Connected Architecture

To speed up the reconfiguration process, a general neural-network architecture was developed. This "Fully-Connected Architecture" (FCA) is for feedforward neural networks that can be trained using backpropagation [7]. The FCA has many advantages over a layered architecture [4] [6]. For this application, the most significant advantage comes from the feedthrough weights. These weights provide a direct, linear connection matrix from inputs to outputs (provided sigmoids are used only on hidden units). This produces fast initial learning and allows direct pre-programming of a linear solution calculated by some other method. This is especially important for control applications, where a large body of linear control knowledge exists that can be drawn upon to provide a good starting point.
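An illustrative sketch of the feedthrough idea follows (the layer sizes, initialization, and plant matrix B are assumed values rather than those used on the robot): the output units receive a direct linear connection from the inputs in addition to contributions from sigmoid hidden units, so an approximate linear solution, here a pseudo-inverse of the assumed plant matrix, can be loaded instantly.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class FeedthroughNet:
    """Feedforward net with direct (linear) input->output 'feedthrough' weights
    plus sigmoid hidden units, so a linear solution can be pre-programmed."""
    def __init__(self, n_in, n_hidden, n_out, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W_io = np.zeros((n_out, n_in))              # feedthrough weights
        self.W_ih = 0.1 * rng.standard_normal((n_hidden, n_in))
        self.W_ho = 0.1 * rng.standard_normal((n_out, n_hidden))
        self.b_o = np.zeros(n_out)

    def forward(self, x):
        h = sigmoid(self.W_ih @ x)
        return self.W_io @ x + self.W_ho @ h + self.b_o

    def preload_linear(self, B):
        """Inject an approximate linear solution: map desired force to
        thruster commands via the pseudo-inverse of the plant matrix B."""
        self.W_io = np.linalg.pinv(B)

# Usage: B maps thruster commands to [Fx, Fy, torque] (assumed values).
B = np.random.default_rng(1).uniform(-1, 1, size=(3, 8))
net = FeedthroughNet(n_in=3, n_hidden=4, n_out=8)
net.preload_linear(B)                      # instant approximate controller
print(net.forward(np.array([0.5, -0.2, 0.1])))
```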
The FCA provides a seamless integration of linear and nonlinear components. In this application, a linear approximate solution is calculated very quickly based on a pseudo-inverse of a linearized plant model. Injecting this approximate linear solution into the network immediately after a failure is detected results in rapid stabilization of the robot.

4.2. Backpropagation Learning for Discrete-Valued Functions

Training a neural network to produce a thruster mapping based upon a model of the robot can be thought of as learning the inverse model of the robot-thruster system, as in Figure 5. This is a common approach, and it would be relatively straightforward were it not for the discrete-valued functions that represent the on-off thrusters. Some modification of the learning algorithm is required to allow gradient-based optimization to be used with these non-differentiable functions. The method, shown in Figure 5, relies on approximating the discrete-valued functions with "noisy sigmoids" during training. This is a broadly applicable algorithm that applies to any gradient-based optimization involving discrete-valued functions; it is described in detail in [4] [8].
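An illustrative sketch of this training idea follows (a single-layer mapper, an assumed plant matrix B, and arbitrary learning-rate and noise settings are used for brevity): during training the on-off nonlinearity is replaced by a sigmoid with added noise so that gradients can be propagated back through the robot model, while at run time a hard threshold is applied.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.uniform(-1.0, 1.0, size=(3, 8))    # assumed plant: force/torque per thruster

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

# Single-layer mapper (for brevity): thruster activations a = W @ f_des + b.
W = 0.01 * rng.standard_normal((8, 3))
b = np.zeros(8)
lr, noise = 0.05, 0.1

for step in range(2000):
    f_des = rng.uniform(-1.0, 1.0, size=3)               # training pattern
    a = W @ f_des + b
    T_soft = sigmoid(a) + noise * rng.standard_normal(8) # "noisy sigmoid"
    err = B @ T_soft - f_des                             # force error through model
    # Backward sweep: gradient of 0.5*||err||^2 through the model and sigmoid.
    dT = B.T @ err
    da = dT * sigmoid(a) * (1.0 - sigmoid(a))            # sigmoid derivative
    W -= lr * np.outer(da, f_des)
    b -= lr * da

# Run time: hard on-off decision replaces the noisy sigmoid.
f_des = np.array([0.5, -0.2, 0.1])
T = (W @ f_des + b > 0.0).astype(int)
print("thruster pattern:", T)
```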

Figure 5: Thruster mapping, on-line training method (run-time path: Fdesired → neural network → Tcontinuous → discretization → Tdiscrete → robot model → Factual; training path: a forward sweep through the noisy-sigmoid network and robot model, and a backward sweep propagating a cost based on the error between Factual and Fdes). This is the training structure used to adapt the thruster mapper during reconfiguration. This process appears as the "NN train" block in Figure 4. The "Robot Model" contains the magnitude and direction of each thruster. During adaptation, this model is updated continually by the "ID" process shown in Figure 4.

5. Experimental Demonstration

Position and attitude of the robot base are controlled while subject to multiple, large, possibly destabilizing changes in thruster characteristics. An off-board vision system provides high-bandwidth position feedback, which is digitally filtered and differentiated to provide velocity feedback. On-board accelerometers and an angular-rate sensor provide the base-acceleration measurements used by the failure-detection and control-reconfiguration capability. The FCA neural-network thruster-mapping component described in Section 4 was implemented on the on-board Motorola processor, as was the rest of the control system, running at a fixed sample rate.

Figure 2 shows the robot thruster layout in nominal and failed configurations, with the magnitude and direction of each thruster indicated. Nominally, each thruster produces 1 Newton of force, directed as shown. The failures were produced by mechanically altering the thrusters. Failures include half strength, plugged completely, angled off-axis, and angled at 90°. The 90° failure mode places high demands on the control reconfiguration system, since it destabilizes the robot (changing the direction of the torque results in positive feedback).

5.1. Failure Detection

Accelerometers and an angular-rate sensor (which is digitally differentiated) produce acceleration signals in (x, y, ψ). These signals are filtered and passed to a system identification process based upon recursive least squares. The parameters identified are the accelerations in (x, y, ψ) resulting from each thruster firing. When a failure is detected, the thruster is excited artificially to speed up the identification process. For the case presented here, with six of the eight thrusters failed, the ID process took a matter of seconds from when the first thruster fired until the last thruster was identified to a high level of confidence. Since the neural network trains in parallel with this process, stabilization occurs within seconds, and the robot remains well controlled during the identification.

Results from the reconfiguration are shown in Figure 6. The static control deadband is small, on the order of centimeters in translation and degrees in rotation. Due to noisy sensors and multiple thrusters firing simultaneously, it takes several seconds before the ID process is confident enough to confirm a thruster failure and begin reconfiguration. An initial stabilizing reconfiguration occurs nearly instantaneously, as a linear approximate solution is quickly calculated and implemented via the Fully-Connected Architecture. Once failures have been detected, they are labelled as "suspects" and are excited artificially to expedite identification. The neural-network training process shown in Figure 5 runs in parallel with the ID process, with the robot model updated continually by the ID. The network being trained is copied periodically into the network that is controlling the robot.
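An illustrative recursive-least-squares sketch of this identification follows (the true thruster matrix, noise level, and forgetting factor are assumed values): the regressor is the 0/1 vector of thruster commands, and the parameters are the accelerations each thruster produces in (x, y, ψ).

```python
import numpy as np

rng = np.random.default_rng(0)
B_true = rng.uniform(-1.0, 1.0, size=(3, 8))   # assumed accel in (x, y, psi) per thruster

# One independent RLS estimator per axis; regressor u is the 0/1 thruster vector.
B_hat = np.zeros((3, 8))
P = [1e3 * np.eye(8) for _ in range(3)]        # covariance per axis
lam = 0.995                                    # forgetting factor

for k in range(500):
    u = rng.integers(0, 2, size=8).astype(float)           # thruster firings
    accel = B_true @ u + 0.05 * rng.standard_normal(3)     # noisy measurement
    for axis in range(3):
        Pa = P[axis]
        gain = Pa @ u / (lam + u @ Pa @ u)                  # RLS gain
        B_hat[axis] += gain * (accel[axis] - B_hat[axis] @ u)
        P[axis] = (Pa - np.outer(gain, u @ Pa)) / lam

print("max identification error:", np.abs(B_hat - B_true).max())
```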
5.2. Rapid Reconfiguration

Very rapid learning is possible due first to the FCA, and second to the growing of the network. With few hidden neurons, learning is quick, since fewer computations are required and fewer training patterns are needed (to avoid overfitting). The network begins with three inputs (the desired forces and torque), a small number of hidden neurons, and eight outputs (the thrusters), and it gradually grows additional hidden neurons as training progresses. New hidden neurons are added when performance begins to plateau. To prevent overfitting, the training-set size is grown proportionally with the number of hidden neurons. With this arrangement, the error of the trained mapping above the optimal (exhaustive-search) solution drops rapidly, reaching a low level within seconds and continuing to decrease as training proceeds. (Because discrete-valued actuators are used, there is almost always a nonzero force error vector; the error values quoted here express the average magnitude of the force error vector relative to the magnitude achievable with an exhaustive search.)
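An illustrative sketch of this growing schedule follows (train_epoch and add_hidden_neuron are hypothetical stand-ins, and the plateau threshold and sizes are assumed values): a hidden neuron is added whenever the error improvement plateaus, and the training-set size is kept proportional to the number of hidden neurons.

```python
import numpy as np

def grow_training_schedule(train_epoch, add_hidden_neuron, n_hidden=2,
                           patterns_per_neuron=8, plateau_tol=1e-3, max_hidden=20):
    """Grow the network as training progresses: when the error plateaus, add a
    hidden neuron and enlarge the training set proportionally (to avoid
    overfitting a small set with a larger network)."""
    prev_err = np.inf
    while n_hidden <= max_hidden:
        err = train_epoch(n_patterns=n_hidden * patterns_per_neuron)
        if prev_err - err < plateau_tol:        # performance has plateaued
            add_hidden_neuron()
            n_hidden += 1
        prev_err = err
    return prev_err

# Hypothetical stand-ins for the real training and growth routines:
state = {"err": 1.0}
def train_epoch(n_patterns):                    # pretend the error shrinks slowly
    state["err"] *= 0.999
    return state["err"]
def add_hidden_neuron():
    state["err"] *= 0.9                         # growth gives a fresh improvement

print("final error:", grow_training_schedule(train_epoch, add_hidden_neuron))
```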

Figure 6: Reconfiguration Experiment, Position Errors, Thruster Firings (plots of position error [cm, deg] and thruster number versus time [sec]; annotations mark the periods of neural-network training, training with knowledge of all failures, and artificial excitation for ID; traces show the X axis, Y axis, and yaw angle). Six thrusters are severely misconfigured, as in Figure 2. Before the disturbance the robot is within its deadband with no thrusters firing; a small disturbance is then applied; the failed thrusters are suspected, a stabilizing mapper is loaded, and training begins; the sixth and final thruster failure is confirmed to a high level of confidence; all thruster characteristics are then confirmed, and neural-network optimization continues thereafter. [x, y, ψ] position errors and thruster signals are plotted during autonomous reconfiguration. Black rectangular regions indicate periods of thruster firing. Darkly shaded regions indicate the time during which a thruster was suspected. In addition to artificial excitation of the suspected thrusters, excitation of other thrusters is used to expedite the identification process; these periods are indicated by the lightly shaded regions. The robot begins at rest within the deadband, is disturbed, stabilizes itself within seconds, and completes identification (aided by artificial excitation) shortly thereafter. The neural-network thruster mapper continues to optimize after the identification is complete.

As more hidden neurons are added, the network performance approaches optimality, but at the expense of a slower training rate.

6. Summary and Conclusions

This paper has presented an adaptive neural-network-based control system for a free-flying space robot. The control system has a structure modelled after a conventional indirect adaptive controller, with the neural network used to implement the nonlinear adaptive component. Specific procedures for determining neural-network applicability are outlined in the paper. Two neural-network-control developments were critical to achieving quick adaptation and a near-optimal controller. First, a "Fully-Connected Architecture" (FCA) was used that has the ability to incorporate an a priori approximate linear solution instantly; this permits quick stabilization by an approximate linear controller. Second, a learning method was used that allows gradient-based optimization (backpropagation) with discrete-valued functions, in this case the on-off thrusters. The control system was demonstrated experimentally on a laboratory prototype robot, where stable reconfiguration after major destabilizing thruster failures occurred within seconds, and near-optimal control was achieved within minutes.

References

[1] W. Thomas Miller III, Richard S. Sutton, and Paul J. Werbos, editors. Neural Networks for Control. Neural Network Modeling and Connectionism. The MIT Press, Cambridge, MA, 1990.

[2] James R. Wertz, editor. Spacecraft Attitude Determination and Control. Kluwer Academic Publishers, Boston, 1978.

[3] Edward Wilson and Stephen M. Rock. Neural network control of a free-flying space robot. Simulation, June 1995.

[4] Edward Wilson. Experiments in Neural Network Control of a Free-Flying Space Robot. PhD thesis, Stanford University, Stanford, CA 94305, March 1995.

[5] Marc A. Ullman. Experiments in Autonomous Navigation and Control of Multi-Manipulator, Free-Flying Space Robots. PhD thesis, Stanford University, Stanford, CA 94305, March 1993.

[6] Edward Wilson and Stephen M. Rock. Experiments in control of a free-flying space robot using fully-connected neural networks.
In Proceedings of the World Congress on Neural Networks, Portland, OR, July 1993.

[7] Paul J. Werbos. Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. PhD thesis, Harvard University, Cambridge, MA, August 1974.

[8] Edward Wilson. Backpropagation learning for systems with discrete-valued functions. In Proceedings of the World Congress on Neural Networks, San Diego, CA, June 1994. International Neural Network Society.
