EEE582 Topical Outline
A.A. Rodriguez, Fall 2007, GWC 352, 965-3712

The following represents a detailed topical outline of the course. It attempts to highlight most of the key concepts to be covered and should be used as a checklist or study guide as you proceed through the subject matter. While a few significant formulae and abstract concepts are presented, important details are supplied within the text.

SISO Concepts (Prerequisite Knowledge)

Linearity, Time Invariance, Linear Time Invariant (LTI) Systems
Laplace Transform Basics; e.g. basic transform pairs, properties, partial fraction expansions, etc.
Solution of ordinary differential equations (ODEs) via Laplace transforms; e.g. first-, second-, and third-order ODEs
Transfer Functions
Poles (the term used by engineers; mathematicians use the term eigenvalue)
Complex Arithmetic
Stability - all poles in the open left half plane; i.e. Re(s) < 0
Step Response, Impulse Response
Initial Condition Response (Zero Input Response)
Forced Response and Convolution
Zeros
Frequency Response (Magnitude and Phase Response), Bode Plots
Sinusoidal Steady-State Analysis: for a stable LTI system H, if u(t) = A sin(ω_o t + θ), then

    y_ss(t) = A |H(jω_o)| sin(ω_o t + θ + ∠H(jω_o))    (1)

This will be referred to as the method of the transfer function (MOTF). It is arguably the most important result in the study of dynamical systems. Why? Because it is the basis for all system testing.
Block Diagrams - series, parallel, and feedback interconnections
Feedback System Concepts
Classical Control Concepts; e.g. Root Locus, Bode, Nyquist
Using MATLAB, Simulink, the Control System Toolbox, and the Robust Control Toolbox

Modeling of Dynamical Systems

Candidate Dynamical Systems: car, inverted pendulum, standard pendulum, spring-mass-dashpot, aircraft. NOTE: You must become very familiar with a few dynamical systems in order to adequately relate the theory to reality.

Nonlinear State Space Models

    ẋ = f(x, u)    (2)
    y = g(x, u)    (3)

Controls u = [u_1 ... u_m]^T, states x = [x_1 ... x_n]^T, outputs y = [y_1 ... y_p]^T. The above is shorthand for the following m-input, p-output multiple-input multiple-output (MIMO) system:

    ẋ_1 = f_1(x_1, ..., x_n, u_1, ..., u_m)    (4)
      ⋮
    ẋ_n = f_n(x_1, ..., x_n, u_1, ..., u_m)    (5)
    y_1 = g_1(x_1, ..., x_n, u_1, ..., u_m)    (6)
      ⋮
    y_p = g_p(x_1, ..., x_n, u_1, ..., u_m)    (7)

All systems are nonlinear. Most dynamical systems may be approximated by linear models, provided that signal/variable excursions are sufficiently small. This is the same idea we learn when we study Taylor series and linearization of nonlinear functions.

Equilibria. System equilibria are found by solving

    f(x, u) = 0    (8)

This yields equilibrium pairs (x_e, u_e). In general, this must be done using some numerical method, e.g. Newton's method.

Linearization about Equilibria. To linearize about the equilibrium (x_e, u_e), we let

    u = u_e + δu    (9)
    x = x_e + δx    (10)
    y = y_e + δy    (11)

We then proceed as described below.

LTI State Space Models. Linearization about the equilibrium (x_e, u_e) yields the small-signal LTI dynamical system model

    δẋ = A δx + B δu    (12)
    δy = C δx + D δu    (13)

with m controls δu, n states δx, and p outputs δy (all representing small perturbations from equilibrium), where

    a_ij = ∂f_i/∂x_j |(x_e,u_e)    b_ij = ∂f_i/∂u_j |(x_e,u_e)    (14)
    c_ij = ∂g_i/∂x_j |(x_e,u_e)    d_ij = ∂g_i/∂u_j |(x_e,u_e)    (15)

It should be noted that the estimate

    x̂ = x_e + δx    (16)

will be a good approximation to the nonlinear system state x when the linear system initial condition satisfies δx(0) = x(0) − x_e (this ensures that the estimate x̂ starts at the nonlinear system's initial condition x(0)) and u ≈ u_e (i.e. u is sufficiently close to u_e).

Standard Convention. It is standard convention to drop the small-signal δ notation and just write

    ẋ = Ax + Bu    (17)
    y = Cx + Du    (18)

where it is implicitly understood that u, x, and y represent small perturbations from equilibrium.

Transfer Function Matrix

    H(s) = C(sI − A)^{-1} B + D    (19)

This is a p × m matrix of transfer functions. When p = m, we say that the system is square.

Impulse Response Matrix

    h(t) = C e^{At} B + D δ(t)    (20)
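When analytic derivatives are inconvenient, the Jacobian entries in (14)-(15) can be approximated by central finite differences. A sketch, using a hypothetical damped pendulum model (the parameter values are illustrative only):

```python
import numpy as np

# Hypothetical pendulum: x1' = x2,  x2' = -(g/l) sin(x1) - b x2 + u
g_over_l, b = 9.81, 0.5

def f(x, u):
    return np.array([x[1], -g_over_l * np.sin(x[0]) - b * x[1] + u[0]])

def jacobians(f, xe, ue, eps=1e-6):
    """Central-difference A = df/dx and B = df/du evaluated at (xe, ue)."""
    n, m = len(xe), len(ue)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(xe + dx, ue) - f(xe - dx, ue)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(xe, ue + du) - f(xe, ue - du)) / (2 * eps)
    return A, B

# Downward equilibrium: f(xe, ue) = 0
xe, ue = np.array([0.0, 0.0]), np.array([0.0])
A, B = jacobians(f, xe, ue)
print(A)   # approximately [[0, 1], [-9.81, -0.5]]
```

Since sin(x1) ≈ x1 near the downward equilibrium, the numerical Jacobian recovers the familiar small-angle linearization.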

DC Gain Matrix

    H(0) = C(−A)^{-1} B + D    (21)

Useful for determining the steady-state output due to constant inputs. It is common to perform a singular value decomposition (see below) of this matrix to understand its input-output directionality properties.

Response to Inputs (Forcing Functions) and Initial Conditions. One can show that

    x(t) = e^{At} x_o + ∫_0^t e^{A(t−τ)} B u(τ) dτ    (22)
    y(t) = C e^{At} x_o + ∫_0^t C e^{A(t−τ)} B u(τ) dτ + D u(t)    (23)

Taking Laplace transforms yields

    X(s) = (sI − A)^{-1} x_o + (sI − A)^{-1} B U(s)    (24)
    Y(s) = C(sI − A)^{-1} x_o + H(s) U(s)    (25)

Note: each of the above has (1) a term due to initial conditions (the zero-input response) and (2) a term due to the forcing functions passing through H. This simple additive structure is a consequence of linearity.

State Space Arithmetic: series, parallel, and feedback interconnections.

Linear Algebra

Square Matrices: determinants, singular (non-invertible) matrices, non-singular (invertible) matrices. Non-square matrices.

Systems of Linear Algebraic Equations

    Ax = b    (26)

where A ∈ C^{m×n}. Key issues: existence and uniqueness.

Gaussian Elimination: solving Ax = b via elementary row operations. This is critical for understanding existence and uniqueness issues.

Vector Spaces and Subspaces: generalize the notions of a plane and a line.

Spanning Set of Vectors (Addresses Existence). A set of vectors {e_i} is said to span a space S if for every x ∈ S there exist constants c_i such that x = Σ_i c_i e_i.

Linearly Independent Set of Vectors (Addresses Uniqueness). A set of vectors {e_i} is said to be linearly independent if the relationship Σ_i c_i e_i = 0 necessarily implies that all of the c_i are zero. Equivalently, no vector is a non-trivial linear combination of the other vectors.

Basis (Addresses Existence and Uniqueness). A set of vectors {e_i} is said to be a basis for a space S if the vectors are linearly independent and they span S.

Four Fundamental Subspaces Associated with a Matrix A
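The DC gain formula (21) predicts where a step response settles. A quick numerical check, assuming a hypothetical stable two-state system:

```python
import numpy as np
from scipy.signal import StateSpace, step

# Hypothetical stable system (eigenvalues -1, -2)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# DC gain H(0) = C(-A)^{-1} B + D: steady-state output for a unit step input
H0 = C @ np.linalg.solve(-A, B) + D

t, y = step(StateSpace(A, B, C, D), T=np.linspace(0, 20, 2000))
print(H0[0, 0], y[-1])   # the step response settles at the DC gain, 0.5
```

Here H(s) = 1/(s^2 + 3s + 2), so H(0) = 0.5, and by t = 20 the slowest mode e^{-t} has fully decayed.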

1. Range or Column Space of A

    R(A) ≝ { Ax : x ∈ C^n }    (27)

This represents the set of all possible vectors b ∈ C^m that can be generated by A; i.e. the set of all possible combinations of the column vectors.

    dim R(A) = rank(A)    (28)

This is the number of linearly independent columns of A.

2. Row Space of A, or Column Space of A^H

    R(A^H) ≝ { A^H v : v ∈ C^m },   A^H ≝ Ā^T    (29)

This is essentially (modulo conjugate transposition) the set of all possible row vectors that can be generated by A.

    dim R(A^H) = rank(A)    (30)

This is the number of linearly independent rows of A.

3. Right Null Space of A

    N(A) ≝ { x : Ax = 0 }    (31)

This is the set of all column vectors that annihilate A from the right; hence the name right null space.

    dim N(A) = n − rank(A)    (32)

This is the number of free variables in solving Ax = b.

4. Left Null Space of A, or Right Null Space of A^H

    N(A^H) ≝ { v : A^H v = 0 }    (33)

This is essentially (modulo conjugate transposition) the set of row vectors that annihilate A from the left; hence the name left null space of A.

    dim N(A^H) = m − rank(A)    (34)

This is the number of constraints that b must satisfy for Ax = b to have a solution.

General Solution Formed from Gaussian Elimination; Gaussian elimination yields a basis for each of the 4 fundamental subspaces.

Equivalent Conditions for Guaranteed Existence:
- A has full row rank; i.e. A has m linearly independent rows
- AA^H is invertible
- A is right invertible, with A^R = A^H (AA^H)^{-1} being one right inverse; i.e. A A^R = I_{m×m}

Equivalent Conditions for Guaranteed Uniqueness:
- A has full column rank; i.e. A has n linearly independent columns
- A^H A is invertible
- A is left invertible, with A^L = (A^H A)^{-1} A^H being one left inverse; i.e. A^L A = I_{n×n}

Equivalent Conditions for Guaranteed Existence and Uniqueness: A is invertible (right and left invertible; nonsingular).
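Bases for the four fundamental subspaces can also be computed numerically (SVD-based, rather than by Gaussian elimination as in the notes). A sketch with a hypothetical rank-deficient matrix:

```python
import numpy as np
from scipy.linalg import orth, null_space

# Hypothetical example: m = 3, n = 4, rank 2 (row 3 = row 1 + row 2)
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 3.0, 1.0, 2.0]])

col_space  = orth(A)                  # orthonormal basis for R(A)
row_space  = orth(A.conj().T)         # orthonormal basis for R(A^H)
right_null = null_space(A)            # orthonormal basis for N(A)
left_null  = null_space(A.conj().T)   # orthonormal basis for N(A^H)

r = np.linalg.matrix_rank(A)
# dim R(A) = dim R(A^H) = r = 2;  dim N(A) = n - r = 2;  dim N(A^H) = m - r = 1
print(r, col_space.shape[1], right_null.shape[1], left_null.shape[1])
```

The basis dimensions confirm the rank-nullity relations (28), (30), (32), and (34).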

Least Squares Problems. Consider the problem

    min_x ‖b − Ax‖    (35)

where ‖z‖ ≝ √(z^H z). This is used (primarily) when Ax = b does not possess a solution. The solution is given by the so-called normal equations:

    A^H A x = A^H b    (36)

Here, x need not be unique, but the vector Ax will be unique: Ax is the projection of b onto the range (column space) of A.

Minimum Norm Problems. Assume that Ax = b has a solution. Consider the problem

    min { ‖x‖ : Ax = b }    (37)

Here, we seek the minimum (smallest) norm solution x. Application: minimizing control effort. It can be found by solving the system

    AA^H v = b    (38)

for any v and letting

    x = A^H v    (39)

While v need not be unique, x will be unique.

Eigenvalues and Eigenvectors

    Ax = λx,   x ≠ 0    (40)

This will help us understand the natural modes of a dynamical system. Eigenvalues are roots of

    det(sI − A) = 0    (41)

Eigenvectors are nonzero vectors v such that

    (sI − A)v = 0    (42)

Matrix Exponential. Assume that A has n linearly independent eigenvectors (we say that A is diagonalizable). We then have

    e^{At} = Σ_i e^{λ_i t} v_i w_i^H    (43)

where V = [v_1 ... v_n], W = V^{-1}, and the i-th row of W is designated w_i^H. The column vectors v_i are called right eigenvectors of A since A v_i = λ_i v_i. The row vectors w_i^H are called left eigenvectors of A since w_i^H A = λ_i w_i^H.

Singular Value Decomposition. Will help us understand the input/output directionality properties of dynamical systems. Has the form

    M = Σ_i σ_i u_i v_i^H    (44)

The σ_i are the singular values of M. The column vectors v_i are called right singular vectors of M and satisfy

    M^H M v_i = σ_i^2 v_i    (45)
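Both problems can be sketched numerically; the matrices below are hypothetical. `numpy.linalg.lstsq` solves the normal equations (36) internally, and the minimum-norm recipe (38)-(39) is applied directly:

```python
import numpy as np

# Least squares: overdetermined Ax = b with no exact solution
A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
# x_ls satisfies the normal equations A^H A x = A^H b
normal_ok = np.allclose(A.T @ A @ x_ls, A.T @ b)
print(normal_ok)

# Minimum norm: underdetermined Ax = b; smallest-norm solution x = A^H v,
# where v solves A A^H v = b
A2 = np.array([[1.0, 1.0, 1.0]])
b2 = np.array([3.0])
v = np.linalg.solve(A2 @ A2.T, b2)
x_mn = A2.T @ v
print(x_mn)   # the smallest-norm x with x1 + x2 + x3 = 3 is all ones
```

In the minimum-norm case the constraint x1 + x2 + x3 = 3 has infinitely many solutions; the recipe picks the one orthogonal to N(A2), namely x = (1, 1, 1).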

The row vectors u_i^H are called left singular vectors of M and satisfy

    M M^H u_i = σ_i^2 u_i    (46)

Also,

    M v_i = σ_i u_i    (47)

where v_i^H v_j = 1 if i = j and zero if i ≠ j, and u_i^H u_j = 1 if i = j and zero if i ≠ j. The v_i form a unitary matrix; i.e. V^{-1} = V^H. The u_i form a unitary matrix; i.e. U^{-1} = U^H. Moreover,

    σ_i(M) = √( λ_i(M^H M) )    (48)

    σ_max(M) = max_{v≠0} ‖Mv‖ / ‖v‖    (49)
    σ_min(M) = min_{v≠0} ‖Mv‖ / ‖v‖    (50)

The above shows how the minimum and maximum singular values can be used to quantify the maximum and minimum amplification properties of a matrix. Note the ordering of the language! Here is what is meant: if σ_min(M) is large, then we say that M amplifies vectors greatly; if σ_max(M) is small, then we say that M attenuates vectors greatly. Note: a 2×2 (nonsingular) real matrix maps the unit circle onto an ellipse - 2σ_max is the length of the major axis; 2σ_min is the length of the minor axis.

Modal Analysis. Consider the unforced (zero-input) dynamical system

    ẋ = Ax,   x(0) = x_o    (51)
    x(t) = e^{At} x_o    (52)

Assume that A has n linearly independent eigenvectors. In such a case,

    x(t) = e^{At} x_o = Σ_i (w_i^H x_o) e^{λ_i t} v_i    (53)

Moreover, if x_o = v_i then

    x(t) = e^{λ_i t} v_i    (54)

This gives us a physical interpretation of poles (eigenvalues, natural modes).

Transmission Zeros. This is an energy absorption concept. To see this, consider the application of u(t) = sin(ω_o t + θ) to the stable LTI system

    H(s) = (s^2 + ω_o^2) / [ (s+1)(s+2)(s+3) ]

Doing so yields a steady-state output y_ss = 0 because H(jω_o) = 0. See the method of the transfer function (MOTF) above. For MIMO systems, there are directionality issues as well. A dynamical system has a transmission zero at z_o if there exist vectors u_o and x_o (not both zero) such that if u = u_o e^{z_o t} and x(0) = x_o, then x = x_o e^{z_o t} and y = 0 for all t > 0.
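The modal expansion (43) can be verified against a direct matrix-exponential computation. A sketch with a hypothetical diagonalizable A:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical diagonalizable example: eigenvalues -1 and -2
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t = 0.7

lam, V = np.linalg.eig(A)
W = np.linalg.inv(V)               # rows of W are the left eigenvectors w_i^H

# e^{At} = sum_i e^{lambda_i t} v_i w_i^H   (eq. 43)
eAt_modal = sum(np.exp(lam[i] * t) * np.outer(V[:, i], W[i, :])
                for i in range(len(lam)))

modal_ok = np.allclose(eAt_modal.real, expm(A * t))
print(modal_ok)
```

Each term e^{λ_i t} v_i w_i^H is one natural mode; the sum reproduces e^{At} exactly, which is the content of the modal analysis above.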
Controllability (an Existence Concept). Given arbitrary n-dimensional vectors x_1 and x_f and an initial time t_o, does there exist a control u and a finite time t_f > t_o that transfer the state from the initial condition x(t_o) = x_1 to the final condition x(t_f) = x_f? When can we alter (move) all of the modes of a system via the control?
- Controllability Matrix, rank test
- Controllability Gramian, rank test
- PBH eigenvalue-eigenvector tests - give insight into loss of controllability
- Construction of the minimum-energy state-transferring control using the Controllability Gramian
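The controllability-matrix rank test can be sketched as follows; the double-integrator plant is a hypothetical example:

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^{n-1}B]; controllable iff rank n."""
    n = A.shape[0]
    cols = [B]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])
    return np.hstack(cols)

# Hypothetical double integrator: position and velocity states
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])       # force input enters through velocity
rank_good = np.linalg.matrix_rank(ctrb(A, B))
print(rank_good)                   # 2 -> controllable

B_bad = np.array([[1.0], [0.0]])   # input never reaches the velocity state
rank_bad = np.linalg.matrix_rank(ctrb(A, B_bad))
print(rank_bad)                    # 1 -> uncontrollable
```

The second input matrix leaves one mode unreachable, so the rank drops below n and the system is uncontrollable.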

Stabilizability. When does there exist a stabilizing control law? When can we alter (move) the unstable modes of a system via the control?
- PBH eigenvalue-eigenvector tests

Observability (a Uniqueness Concept). Given knowledge of y and u, does there exist a finite time t_f > t_o such that x(t_o) can be determined uniquely? When can all of the modes of a system be observed through y?
- Observability Matrix, rank test
- Observability Gramian, rank test
- PBH eigenvalue-eigenvector tests - give insight into loss of observability
- Construction of the initial state using the Observability Gramian

Detectability. What goes here??? You tell me. When can we observe the unstable modes of a system via the output?
- PBH eigenvalue-eigenvector tests

Pole-Zero Cancellations. Can be used to explain loss of controllability and/or observability. Consider the LTI systems

    H_1 = 1 / (s − p_1),   H_2 = (s − p_1) / (s − p_2)

where p_1 ≠ p_2. The cascade system H_2 H_1 is unobservable, since the mode p_1 is unobservable from the output; it is controllable from the input. The cascade system H_1 H_2 is uncontrollable, since the mode p_1 is uncontrollable from the input; it is observable from the output.

Full State Feedback. Let

    u = −Gx + v    (55)

in ẋ = Ax + Bu, so that

    ẋ = (A − BG)x + Bv    (56)

Typically, we wish to select the control gain matrix G ∈ R^{m×n} so that A − BG is stable. See the LQR method below.

Pole Placement Concepts. Uncontrollable modes cannot be moved via full state feedback. Controllability properties are invariant under full state feedback. Observability properties are NOT invariant under full state feedback.

Model-Based State Observers/Estimators

    x̂̇ = A x̂ + Bu + H(y − ŷ)    (57)
    ŷ = C x̂ + Du    (58)

The matrix H ∈ R^{n×p} is referred to as the filter gain matrix. The above structure is often referred to as an output injection structure. Let x̃ ≝ x − x̂ denote the state estimation error. Combining the above with ẋ = Ax + Bu, y = Cx + Du yields the following state estimation error dynamics:

    x̃̇ = (A − HC) x̃    (59)
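Full-state-feedback and observer gains can both be obtained by pole placement, the observer gain via duality. A sketch using `scipy.signal.place_poles` on a hypothetical second-order plant:

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical plant
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Full state feedback u = -Gx: place the eigenvalues of A - BG
G = place_poles(A, B, [-4.0, -5.0]).gain_matrix
fb_poles = np.sort(np.linalg.eigvals(A - B @ G).real)
print(fb_poles)   # approximately [-5, -4]

# Observer gain H: place the eigenvalues of A - HC by duality on (A^T, C^T)
H = place_poles(A.T, C.T, [-8.0, -9.0]).gain_matrix.T
obs_poles = np.sort(np.linalg.eigvals(A - H @ C).real)
print(obs_poles)  # approximately [-9, -8]
```

Making the observer poles faster than the state-feedback poles is a common rule of thumb so that the estimate x̂ converges before the regulation transient completes.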

The matrix H ∈ R^{n×p} is generally selected so that A − HC is stable. Unobservable modes cannot be moved via output injection. Observability properties are invariant under output injection. Controllability properties are NOT invariant under output injection.

Model-Based Compensators. Combining full state feedback and model-based observer/estimation concepts yields the model-based compensator

    K(s) = G (sI − A + BG + H(C − DG))^{-1} H    (60)

Separation Principle. When K is inserted into a feedback loop with the design plant P = [A, B, C, D], the closed-loop poles are precisely the roots of the following polynomials:

    det(sI − A + BG) = 0    (61)
    det(sI − A + HC) = 0    (62)

One is associated with the full state feedback design; the other is associated with the observer design. Hence the name separation principle.

Internal Model Principle. Examples of the internal model principle are as follows: we need an integrator 1/s within the feedback loop to follow step reference commands r; we need an integrator within the compensator to reject step input disturbances d_i; we need 1/s^2 for ramps; we need 1/(s^2 + ω_o^2) for sinusoids; etc.

Linear Quadratic Regulator (LQR) Design Method. Assuming that (A, B, M) is stabilizable and detectable, minimize the quadratic cost functional

    J(u) ≝ ∫_0^∞ ( z^T(τ) z(τ) + u^T(τ) R u(τ) ) dτ    (63)

where R = R^T > 0 (positive definite), subject to the dynamical constraints

    ẋ = Ax + Bu,   x(0) = x_o    (64)
    z = Mx    (65)

Here, R is called the control weighting matrix, and Q = M^T M is called the state weighting matrix. Solution:

    u(t) = −G x(t)    (66)

where

    G = R^{-1} B^T K    (67)

and K is the unique symmetric, at least positive semi-definite solution of the control algebraic Riccati equation (CARE):

    0 = KA + A^T K + M^T M − K B R^{-1} B^T K    (68)

Given the above, A − BG will be stable! LQR can also yield other nice properties, e.g. robustness properties. To learn more, take EEE588 on Multivariable Control Design.
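The LQR recipe (63)-(68) maps directly onto a CARE solver. A sketch with a hypothetical double integrator (in MATLAB the course would use `lqr`; here `scipy.linalg.solve_continuous_are` plays that role):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double integrator
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
M = np.array([[1.0, 0.0]])   # z = Mx: penalize position only
Q = M.T @ M                  # state weighting matrix Q = M^T M
R = np.array([[1.0]])        # control weighting matrix

# Solve the CARE  0 = KA + A^T K + M^T M - K B R^{-1} B^T K
K = solve_continuous_are(A, B, Q, R)
G = np.linalg.solve(R, B.T @ K)   # G = R^{-1} B^T K

# With u = -Gx, the closed-loop matrix A - BG is guaranteed stable
cl_eigs = np.linalg.eigvals(A - B @ G)
print(np.all(cl_eigs.real < 0))   # True
```

Even though the open-loop plant has both poles at the origin, the LQR gain stabilizes it, illustrating the guarantee stated above.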