MATH4406 (Control Theory) Unit 1: Introduction Prepared by Yoni Nazarathy, July 21, 2012

Unit Outline
- Introduction to the course: course goals, assessment, etc.
- What is Control Theory? A bit of jargon, history and applications
- A survey of the mathematical problems handled in this course: Units 2-11

Course Goal: A mathematically mature student will get a taste of several of the key paradigms and methods appearing in Control Theory, as well as in Systems Theory. Following this course the student will have the capacity to independently dive deeper into engineering control paradigms and/or optimal control theory.

Assessment: 7 homework assignments, 4 quizzes, and a final course summary.

Mode of Presentation: board, lecture notes, slides (presentation), or mixes of the above.

What is Control Theory? It is hard to find a precise definition, so here are some terms very strongly related to it:
- Dynamical systems and systems theory
- Linear vs. non-linear systems
- Open loop vs. closed loop (feedback)
- Optimal trajectories
- Trajectory tracking
- Disturbance rejection
- Stabilization vs. fine tuning
- Chemical systems, electrical systems, mechanical systems, biological systems, systems in logistics...
- Adaptive control and learning

A system (plant) has state x ∈ R^n, input u ∈ R^m and output y ∈ R^k. The basic system model:

ẋ(t) = f(x(t), u(t), t), y(t) = h(x(t), u(t), t).

Or in discrete time,

x(n+1) = f(x(n), u(n), n), y(n) = h(x(n), u(n), n).
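As a minimal sketch of how this model is used in practice (not from the notes: the helper simulate, the forward-Euler step and the point-mass example are all illustrative assumptions):

```python
import numpy as np

def simulate(f, h, x0, u, T, dt=1e-3):
    """Forward-Euler integration of x'(t) = f(x,u,t), recording y(t) = h(x,u,t)."""
    x = np.asarray(x0, dtype=float)
    ts = np.arange(0.0, T, dt)
    ys = []
    for t in ts:
        ys.append(h(x, u(t), t))
        x = x + dt * f(x, u(t), t)   # x(t + dt) ~ x(t) + dt * f(x(t), u(t), t)
    return ts, np.array(ys)

# Point mass: state x = (position, velocity), input u = thrust, output y = position.
f = lambda x, u, t: np.array([x[1], u])
h = lambda x, u, t: x[0]
ts, ys = simulate(f, h, x0=[0.0, 0.0], u=lambda t: 1.0, T=2.0)
```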

Open loop control: ẋ(t) = f(x(t), u(t), t), y(t) = h(x(t), u(t), t). Choose a good input u(t) from the onset (say based on x(0)); y(t) is not relevant. Open loop optimal control deals with choosing u(·) so as to

min over u(·) of J({x(t), u(t) : 0 ≤ t ≤ T_max}),

subject to constraints on x and u. E.g. make a thrust plan for a spaceship.
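A minimal sketch of the open-loop idea (illustrative, not from the notes): the input plan is fixed in advance from x(0) alone, and no measurements are used while the system runs. Here we brute-force a constant thrust for the point mass so the final position lands near a target:

```python
import numpy as np

def final_position(thrust, x0=(0.0, 0.0), T=2.0, dt=1e-3):
    """Roll the point mass forward under a constant, pre-chosen thrust."""
    pos, vel = x0
    for _ in range(int(T / dt)):
        pos, vel = pos + dt * vel, vel + dt * thrust   # Euler step
    return pos

target = 1.0
candidates = np.linspace(-2.0, 2.0, 81)   # a small family of candidate plans
best = min(candidates, key=lambda a: (final_position(a) - target) ** 2)
# 'best' is committed to before the system runs: that is open loop.
```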

Closed loop (feedback) control: ẋ(t) = f(x(t), u(t), t), y(t) = h(x(t), u(t), t). Choose (design) a controller (control law) g(·): u(t) = g(y(t), t). The design may be optimal w.r.t. some performance measure, or may simply be sensible.
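A minimal closed-loop sketch (illustrative, not from the notes): the control law g feeds the measured output back into the input at every instant. Here a first-order tank level ẋ = u with y = x is driven to a set point by proportional feedback; the gain 2.0 and the plant are assumptions for illustration:

```python
import numpy as np

def simulate_feedback(f, h, g, x0, T, dt=1e-3):
    x = np.asarray(x0, dtype=float)
    for k in range(int(T / dt)):
        t = k * dt
        y = h(x, t)               # measure the output (no direct feedthrough here)
        u = g(y, t)               # control law: u(t) = g(y(t), t)
        x = x + dt * f(x, u, t)   # Euler step
    return x

f = lambda x, u, t: np.array([u])   # tank level dynamics: x' = u
h = lambda x, t: x[0]               # the sensor reads the level directly
g = lambda y, t: 2.0 * (1.0 - y)    # proportional feedback toward set point 1.0
print(simulate_feedback(f, h, g, x0=[0.0], T=5.0))   # close to [1.0]
```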

Combination of open loop and closed loop: Consider a jet flight from Brisbane to Los Angeles (approx. 13:45). ẋ(t) = f(x(t), u(t), t). Dynamical system modeling: what are possibilities for x? For u? Why is the problem time dependent? Open loop methods can be used to set the best u(·); call this a reference control. This also yields a reference state. We now have x_r(t) and u_r(t), and we know that if the reference is achieved then the output is y_r(t) = h(x_r(t), u_r(t), t). We can now try to find a controller g(·), u(t) = g(y(t), t, x_r, u_r), such that x(t) − x_r(t) behaves well.

Combination of open loop and closed loop, cont. The trajectory tracking problem of having x(t) − x_r(t) behave well is essentially called the regulation problem. Behaving well stands for:
- Ideally x(t) − x_r(t) = 0.
- Very important: lim_{t→∞} (x(t) − x_r(t)) = 0 (stability).
- Also important: how x(t) − x_r(t) is regulated to 0, e.g. fast/slow, with many oscillations or not.
Many (perhaps most) controllers are regulators and are not even associated with a trajectory; instead they try to regulate the system at a set point, e.g. cruise control in cars.
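A tiny sketch of the combined scheme (a scalar, fully measured state is assumed; the gain K and the reference signals x_r, u_r are illustrative placeholders): apply the planned reference input and correct deviations from the reference state by feedback.

```python
# Feedforward (reference input) plus feedback regulation of x(t) - x_r(t).
def tracking_controller(x, t, x_r, u_r, K=1.5):
    return u_r(t) + K * (x_r(t) - x)   # u(t) = u_r(t) + K (x_r(t) - x(t))
```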

The Basic Feedback Loop: a block diagram of the plant, controller, actuators and sensors.

(Historic) Applications of Control Theory:
- Watt's fly-ball governor
- The feedback amplifier
- Aviation
- Space programs of the '50s and '60s
- Everywhere today: e.g. head position control of hard disk drives (typically PID controllers)

Key Developers of (Highly Used) Ideas and Methods, up to the '60s:
- E. J. Routh
- A. M. Lyapunov
- H. Nyquist
- W. R. Evans
- R. Bellman
- L. S. Pontryagin
- R. E. Kalman
- The developers of Model Predictive Control (MPC)

Implementations: mechanical, analog, digital.

A survey of the mathematical problems handled in this course

Unit 2: Signals, Systems and Math Background. Essentials of signals and systems (no state-space yet): convolutions, integral transforms (Laplace, Fourier, Z, ...) and generalized functions (δ). The unit ends with negative feedback: consider a system with plant transfer function H(s) and negative feedback controller G(s). The transfer function of the whole system is:

Q(s) = H(s) / (1 + G(s)H(s)).
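A small symbolic check of this formula (assuming sympy is available) for a concrete choice: an integrator plant H(s) = 1/s with proportional feedback G(s) = k.

```python
import sympy as sp

s, k = sp.symbols('s k')
H = 1 / s            # plant transfer function
G = k                # proportional feedback controller
Q = sp.simplify(H / (1 + G * H))   # Q(s) = H(s) / (1 + G(s) H(s))
print(Q)             # 1/(k + s): a stable first-order lag for k > 0
```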

Unit 3: Elements of Classic Engineering Control. The main object is

Q(s) = H(s) / (1 + G(s)H(s)).

- The objectives in controller (regulator) design
- PID controllers (Proportional, Integral, Derivative)
- The Nyquist stability criterion
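A minimal discrete-time PID regulator sketch (illustrative; the class name, gains and sampling period are assumptions, not course code):

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0       # running integral of the error
        self.prev_error = 0.0     # for the derivative estimate

    def step(self, error):
        """Return the control signal for the current error e = r - y."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error            # Proportional
                + self.ki * self.integral  # Integral
                + self.kd * derivative)    # Derivative
```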

Unit 4: Linear State-Space Systems and Control. The main object of study is

ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t),

together with the discrete time analog.
- When can such a system be well controlled (e.g. stabilized)?
- Linear state feedback controllers are of the form u(t) = Kx(t) for some matrix K
- Is there a way to estimate x based on y? The Luenberger observer
- The Kalman decomposition
This unit is allocated plenty of time (approx. 9 lecture hours).
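A hedged sketch of linear state feedback design by pole placement, using scipy.signal.place_poles; the double-integrator plant and the chosen poles are illustrative (and note the common sign convention u = −Kx):

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # double integrator: not stable on its own
B = np.array([[0.0],
              [1.0]])
K = place_poles(A, B, [-1.0, -2.0]).gain_matrix
# With u = -Kx the closed loop x' = (A - BK)x has eigenvalues -1 and -2.
print(np.linalg.eigvals(A - B @ K))
```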

Unit 5: Lyapunov Stability Theory. Here we leave control for a minute and look at the autonomous system ẋ(t) = f(x(t)). Note that this system may be the system obtained after a regulating control law is applied. An equilibrium point is a point x_0 such that f(x_0) = 0. Lyapunov's stability results provide methods for verifying whether equilibrium points are stable (in one of several senses). One way is to linearize f(·) and check if the point is locally stable. Another (stronger) way is to find a so-called Lyapunov function, a.k.a. a summarizing function or energy function.
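A hedged sketch of the linear route (the matrix A is illustrative): for a stable linearized system ẋ = Ax, solving the Lyapunov equation AᵀP + PA = −Q for P > 0 yields the quadratic Lyapunov function V(x) = xᵀPx.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])                 # eigenvalues -1, -2: stable
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)      # solves A'P + PA = -Q
print(np.linalg.eigvals(P))                 # all positive: V(x) = x'Px certifies stability
```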

Unit 6: LQR and MPC. Getting into optimal control for the first time in the course. The Linear Quadratic Regulator (LQR) problem deals with fully observed linear dynamics, ẋ(t) = Ax(t) + Bu(t), and can be posed as follows:

min over u(·) of x(T)ᵀ Q_f x(T) + ∫_0^T ( x(t)ᵀ Q x(t) + u(t)ᵀ R u(t) ) dt.

It is beautiful that the solution is of the form u(t) = Kx(t) (linear state feedback), where the matrix K can be found by what is called a Riccati equation. The unit only summarizes results that are to be proved down the road, after both dynamic programming and the calculus of variations are studied. The second part of the unit introduces Model Predictive Control (MPC): variations of LQR in which u is constrained to be in a set depending on x. In this case quadratic programs arise.
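A hedged sketch of the infinite-horizon version of this solution via the algebraic Riccati equation, using scipy.linalg.solve_continuous_are; the matrices A, B, Q, R are illustrative (the finite-horizon problem above uses a differential Riccati equation instead):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                             # state cost weight
R = np.array([[1.0]])                     # input cost weight
P = solve_continuous_are(A, B, Q, R)      # Riccati solution
K = np.linalg.solve(R, B.T @ P)           # optimal gain, with u(t) = -K x(t)
print(K)
```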

Unit 7: Dynamic Programming. Bellman's principle of optimality can be briefly described as follows: if x-y-w-z is the optimal path from x to z, then y-w-z is the optimal path from y to z. The unit generally deals with systems of the form ẋ(t) = f(x(t), u(t)) and a cost function

J(x(t_0), t_0) = min over u of ∫_{t_0}^{T} C(x(s), u(s)) ds + D(x(T)).

Bellman's optimality principle implies that the optimal cost satisfies

J(x(t), t) = min over u of { C(x(t), u(t)) dt + J(x(t+dt), t+dt) }.

This leads to the Hamilton-Jacobi-Bellman PDE, which will be studied in the unit.
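A hedged toy instance of Bellman's recursion by backward induction, on a discretized problem: x(n+1) = x(n) + u(n) on the integer grid {−3,...,3}, stage cost x² + u², terminal cost x². All choices here are illustrative.

```python
states = range(-3, 4)
inputs = range(-2, 3)
T = 5
J = {x: x**2 for x in states}            # terminal cost D(x(T))

for n in reversed(range(T)):             # sweep backward in time
    J = {x: min(x**2 + u**2 + J[x + u]   # Bellman: stage cost + cost-to-go
                for u in inputs if (x + u) in J)
         for x in states}

print(J[3])                              # optimal cost-to-go from x = 3 at time 0
```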

Unit 8: Calculus of Variations and Pontryagin's Minimum Principle. Calculus of variations is a classic subject dealing with finding optimal functions. E.g., find a function u(·) such that u(a) = a_u, u(b) = b_u and the following integral is kept minimal:

∫_a^b √(1 + u′(t)²) / u(t) dt.

Question: what is the best u(·) if we remove the denominator? Pontryagin's Minimum Principle is a result allowing one to find u(·) in the presence of constraints, generalising some results of the calculus of variations, yet suited to optimal control.
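A hedged symbolic take on the slide's question (assuming sympy is available): for the pure arc-length integrand √(1 + u′(t)²), i.e. with the denominator removed, the Euler-Lagrange equation reduces to u″ = 0, so the best u is a straight line.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
u = sp.Function('u')
L = sp.sqrt(1 + u(t).diff(t)**2)         # integrand without the 1/u(t) factor
print(euler_equations(L, u(t), t)[0])    # reduces to u''(t) = 0
```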

Unit 9: Back to LQR. Now that the essentials of dynamic programming and Pontryagin's minimum principle are known, we present two proofs (one using each method) for the celebrated LQR result (of Unit 6).

Unit 10: Systems with Stochastic Noise. The Kalman filter deals with the following type of model:

x(n+1) = Ax(n) + Bu(n) + ξ_1(n), y(n) = Cx(n) + Du(n) + ξ_2(n),

where the ξ_i are noise processes, typically taken to be Gaussian. The Kalman filter is a way to optimally reconstruct x based on y. It is a key ingredient in many tracking, navigation and signal processing applications. Related is the Linear Quadratic Gaussian (LQG) regulator, which generalises LQR by incorporating noise processes as in the Kalman filter. This is the only stochastic unit in the course.
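A hedged sketch of one predict/update cycle of the discrete-time Kalman filter for the model above, with Gaussian noises of covariances Q1 and Q2; the function name and all matrices are illustrative placeholders.

```python
import numpy as np

def kalman_step(xhat, P, u, y, A, B, C, D, Q1, Q2):
    # Predict: push the state estimate and its covariance through the dynamics.
    xhat = A @ xhat + B @ u
    P = A @ P @ A.T + Q1
    # Update: correct the prediction with the new measurement y(n).
    S = C @ P @ C.T + Q2                      # innovation covariance
    K = P @ C.T @ np.linalg.inv(S)            # Kalman gain
    xhat = xhat + K @ (y - C @ xhat - D @ u)  # innovation-weighted correction
    P = (np.eye(len(xhat)) - K @ C) @ P
    return xhat, P
```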

Unit 11: Closure. The course is just the tip of the iceberg; there is much, much more to control theory practice and research. Some further developments will be surveyed:
- Control of non-linear (smooth) systems (e.g. in robotics)
- Adaptive control
- Robust control
- Applications of linear matrix inequalities
- Supervisory control
- Control of inherently stochastic systems
- Control of stochastic queueing networks (***)

Last Slide. Suggestion: Wiki the terms appearing in this presentation to get a feel for the scope of the course.