EEE582 Topical Outline
A.A. Rodriguez, Fall 2007, GWC 352, 965-3712

The following represents a detailed topical outline of the course. It attempts to highlight most of the key concepts to be covered and should be used as a checklist or study guide as you proceed through the subject matter. While a few significant formulae and abstract concepts are presented, important details are supplied within the text.

SISO Concepts (Pre-requisite Knowledge)
- Linearity, time invariance, linear time-invariant (LTI) systems
- Laplace transform basics; e.g. basic transform pairs, properties, partial fraction expansions, etc.
- Solution of ordinary differential equations (ODEs) via Laplace transforms; e.g. first-, second-, and third-order ODEs
- Transfer functions
- Poles (the term used by engineers; mathematicians use the term eigenvalue)
- Complex arithmetic
- Stability: all poles in the open left half plane; i.e. Re s < 0
- Step response, impulse response
- Initial condition response (zero-input response)
- Forced response and convolution
- Zeros
- Frequency response (magnitude and phase response)
- Bode plots
- Sinusoidal steady-state analysis: for a stable LTI system H, if u(t) = A sin(ω_o t + θ), then

      y_ss = A |H(jω_o)| sin(ω_o t + θ + ∠H(jω_o))    (1)

  This will be referred to as the method of the transfer function (MOTF). It is arguably the most important result in the study of dynamical systems. Why? Because it is the basis for all system testing.
- Block diagrams: series, parallel, and feedback interconnections
- Feedback system concepts
- Classical control concepts; e.g. root locus, Bode, Nyquist
- Using MATLAB, Simulink, the Control System Toolbox, and the Robust Control Toolbox

Modeling of Dynamical Systems
- Candidate dynamical systems: car, inverted pendulum, standard pendulum, spring-mass-dashpot, aircraft. NOTE: You must become very familiar with a few dynamical systems in order to adequately relate the theory to reality.
- Nonlinear state space models

      ẋ = f(x, u)    (2)
      y = g(x, u)    (3)

  with controls u = [u_1 ... u_m]^T, states x = [x_1 ... x_n]^T, and outputs y = [y_1 ... y_p]^T. The above is shorthand for the following m-input, p-output multiple-input multiple-output (MIMO) system:

      ẋ_1 = f_1(x_1, ..., x_n, u_1, ..., u_m)    (4)
        ⋮
      ẋ_n = f_n(x_1, ..., x_n, u_1, ..., u_m)    (5)
      y_1 = g_1(x_1, ..., x_n, u_1, ..., u_m)    (6)
        ⋮
      y_p = g_p(x_1, ..., x_n, u_1, ..., u_m)    (7)
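The nonlinear state-space form (2)-(3) can be sketched in code for one of the candidate systems above, the standard pendulum. This is a minimal illustrative sketch: the parameter values and function names below are assumptions for the example, not part of the outline.

```python
import numpy as np

# Nonlinear state-space model xdot = f(x, u), y = g(x, u) for a standard
# (hanging) pendulum. Illustrative parameters: mass m, length l, damping b.
m, l, b, g0 = 1.0, 1.0, 0.5, 9.81

def f(x, u):
    """State equations: x[0] = angle theta, x[1] = angular rate thetadot."""
    theta, thetadot = x
    return np.array([thetadot,
                     (-b * thetadot - m * g0 * l * np.sin(theta) + u) / (m * l**2)])

def g(x, u):
    """Output equation: measure the angle only (p = 1)."""
    return np.array([x[0]])

# The hanging rest position (theta = 0, u = 0) satisfies f(x_e, u_e) = 0,
# so it is an equilibrium pair in the sense of the next section.
x_e, u_e = np.array([0.0, 0.0]), 0.0
print(f(x_e, u_e))  # -> [0. 0.]
```

Here n = 2, m = 1, p = 1; becoming fluent with one such concrete model makes the abstract f and g above much easier to relate to reality.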
All systems are nonlinear. Most dynamical systems may be approximated by linear models, provided that signal/variable excursions are sufficiently small. This is the same idea we learn when we study Taylor series and the linearization of nonlinear functions.

Equilibria
System equilibria are found by solving

    f(x, u) = 0    (8)

This yields equilibrium pairs (x_e, u_e). In general, this must be done using some numerical method; e.g. Newton's method.

Linearization about Equilibria (x_e, u_e)
To linearize about the equilibrium (x_e, u_e), we let

    u = u_e + δu    (9)
    x = x_e + δx    (10)
    y = y_e + δy    (11)

We then proceed as described below.

LTI State Space Models
Linearization about the equilibrium (x_e, u_e) yields the small-signal LTI dynamical system model

    δẋ = A δx + B δu    (12)
    δy = C δx + D δu    (13)

with m controls δu, n states δx, p outputs δy (all representing small perturbations from equilibrium) and

    a_ij = ∂f_i/∂x_j |_(x_e,u_e),  b_ij = ∂f_i/∂u_j |_(x_e,u_e)    (14)
    c_ij = ∂g_i/∂x_j |_(x_e,u_e),  d_ij = ∂g_i/∂u_j |_(x_e,u_e)    (15)

It should be noted that the estimate

    x̂ = x_e + δx    (16)

will be a good approximation to the nonlinear system state x when the linear system initial condition satisfies δx(0) ≈ x(0) − x_e (this ensures that the estimate x̂ starts close to the nonlinear system's initial condition x(0)) and u ≈ u_e (i.e. u is sufficiently close to u_e).

Standard Convention
It is standard convention to drop the small-signal δ notation and just write

    ẋ = Ax + Bu    (17)
    y = Cx + Du    (18)

where it is implicitly understood that u, x, and y represent small perturbations from equilibrium.

Transfer Function Matrix

    H(s) = C(sI − A)^{-1} B + D    (19)

This is a p × m matrix of transfer functions. When p = m, we say that the system is square.

Impulse Response Matrix

    h(t) = C e^{At} B + D δ(t)    (20)
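The Jacobian entries in (14)-(15) can be approximated numerically by finite differences, which is how linearization is usually done in practice when f is complicated. A minimal sketch, reusing a hypothetical pendulum f as the nonlinear system (the parameters and the `linearize` helper are assumptions for the example):

```python
import numpy as np

# Finite-difference linearization of xdot = f(x, u) about an equilibrium
# (x_e, u_e), producing the small-signal matrices A and B.
m, l, b, g0 = 1.0, 1.0, 0.5, 9.81

def f(x, u):
    theta, thetadot = x
    return np.array([thetadot,
                     (-b * thetadot - m * g0 * l * np.sin(theta) + u[0]) / (m * l**2)])

def linearize(f, x_e, u_e, eps=1e-6):
    n, mm = len(x_e), len(u_e)
    A = np.zeros((n, n)); B = np.zeros((n, mm))
    for j in range(n):        # a_ij = partial f_i / partial x_j at (x_e, u_e)
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x_e + dx, u_e) - f(x_e - dx, u_e)) / (2 * eps)
    for j in range(mm):       # b_ij = partial f_i / partial u_j at (x_e, u_e)
        du = np.zeros(mm); du[j] = eps
        B[:, j] = (f(x_e, u_e + du) - f(x_e, u_e - du)) / (2 * eps)
    return A, B

A, B = linearize(f, np.zeros(2), np.zeros(1))
# Analytic Jacobians at the hanging equilibrium:
#   A = [[0, 1], [-g0/l, -b/(m l^2)]],  B = [[0], [1/(m l^2)]]
print(A)   # close to [[0, 1], [-9.81, -0.5]]
print(B)   # close to [[0], [1]]
```

Central differences agree with the analytic partial derivatives to several digits here, which is a useful sanity check whenever a hand-derived (A, B) is suspect.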
DC Gain Matrix

    H(0) = C(−A)^{-1} B + D    (21)

Useful for determining the steady-state output to constant inputs. It is common to perform a singular value decomposition (see below) of this matrix to understand its input-output directionality properties.

Response to Inputs (forcing functions) and Initial Conditions
One can show that

    x(t) = e^{At} x_o + ∫_0^t e^{A(t−τ)} B u(τ) dτ    (22)
    y(t) = C e^{At} x_o + ∫_0^t C e^{A(t−τ)} B u(τ) dτ + D u(t)    (23)

Taking Laplace transforms yields

    X(s) = (sI − A)^{-1} x_o + (sI − A)^{-1} B U(s)    (24)
    Y(s) = C(sI − A)^{-1} x_o + H(s) U(s)    (25)

Note: each of the above contains
1. a term due to initial conditions (the zero-input response), and
2. a term due to the forcing function passing through H.
This simple additive structure is a consequence of linearity.

State Space Arithmetic
series, parallel, and feedback interconnections

Linear Algebra

Square Matrices
Determinants; singular (non-invertible) matrices; non-singular (invertible) matrices

Non-square Matrices

Systems of Linear Algebraic Equations

    Ax = b    (26)

where A ∈ C^{m×n}. Key issues: existence and uniqueness.

Gaussian Elimination
Solving Ax = b via elementary row operations. This is critical for understanding existence and uniqueness issues.

Vector Spaces and Subspaces
Generalizes the notions of a plane and a line.

Spanning Set of Vectors (Addresses Existence)
A set of vectors {e_i} is said to span a space S if for every x ∈ S there exist constants c_i such that x = Σ_i c_i e_i.

Linearly Independent Set of Vectors (Addresses Uniqueness)
A set of vectors {e_i} is said to be linearly independent if the relationship Σ_i c_i e_i = 0 necessarily implies that all of the c_i are zero. Equivalently, no vector is a non-trivial linear combination of the other vectors.

Basis (Addresses Existence and Uniqueness)
A set of vectors {e_i} is said to be a basis for a space S if the vectors are linearly independent and they span S.

Four Fundamental Subspaces Associated with a Matrix A
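The steady-state role of the DC gain matrix (21) can be checked numerically: for a stable system driven by a constant input u with zero initial state, x(t) settles to −A^{-1}Bu and y(t) to H(0)u. A small sketch with made-up stable matrices (the example data is an assumption, not from the outline):

```python
import numpy as np
from scipy.linalg import expm

# DC gain check: with constant u and stable A, the output approaches
# y_ss = H(0) u = (C(-A)^{-1} B + D) u, per eq. (21).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1, -2: stable
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

u = np.array([1.0])
H0 = C @ np.linalg.inv(-A) @ B + D         # DC gain matrix

# For constant u and x_o = 0, eq. (22) integrates in closed form:
#   x(t) = A^{-1} (e^{At} - I) B u
t = 20.0
x_t = np.linalg.inv(A) @ (expm(A * t) - np.eye(2)) @ B @ u
print(C @ x_t + D @ u)   # approaches H0 @ u = [0.5]
```

At t = 20 the transient e^{At}x_o terms have decayed, so the simulated output matches H(0)u essentially exactly; this is also the kind of check the SVD of H(0) builds on for MIMO systems.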
1. Range or Column Space of A

    R(A) := { Ax : x ∈ C^n }    (27)

This represents the set of all possible vectors b ∈ C^m that can be generated by A; i.e. the set of all possible combinations of the columns. Its dimension,

    dim R(A) = rank(A),    (28)

is the number of linearly independent columns of A.

2. Row Space of A, or Column Space of A^H

    R(A^H) := { A^H v : v ∈ C^m };  A^H := Ā^T    (29)

This is essentially (modulo conjugate transposition) the set of all possible row vectors that can be generated by A. Its dimension,

    dim R(A^H) = rank(A),    (30)

is the number of linearly independent rows of A.

3. Right Null Space of A

    N(A) := { x : Ax = 0 }    (31)

This is the set of all column vectors that annihilate A from the right; hence the name right null space. Its dimension,

    dim N(A) = n − rank(A),    (32)

represents the number of free variables in solving Ax = b.

4. Left Null Space of A, or Right Null Space of A^H

    N(A^H) := { v : A^H v = 0 }    (33)

This is essentially (modulo conjugate transposition) the set of row vectors that annihilate A from the left; hence the name left null space of A. Its dimension,

    dim N(A^H) = m − rank(A),    (34)

is the number of constraints that b must satisfy for Ax = b to have a solution.

General Solution
Formed from Gaussian elimination; Gaussian elimination yields a basis for each of the 4 fundamental subspaces.

Equivalent Conditions for Guaranteed Existence
- A has full row rank; i.e. A has m linearly independent rows
- AA^H is invertible
- A is right invertible, with A^R = A^H (AA^H)^{-1} being one right inverse; i.e. A A^R = I_{m×m}

Equivalent Conditions for Guaranteed Uniqueness
- A has full column rank; i.e. A has n linearly independent columns
- A^H A is invertible
- A is left invertible, with A^L = (A^H A)^{-1} A^H being one left inverse; i.e. A^L A = I_{n×n}

Equivalent Conditions for Guaranteed Existence and Uniqueness
- A is invertible (right and left invertible; nonsingular)
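The subspace dimensions (28), (30), (32), (34) and the right inverse can all be exercised on a tiny example. A sketch with a made-up 2 × 3 full-row-rank matrix (the example matrix is an assumption):

```python
import numpy as np

# Dimensions of the four fundamental subspaces for a small real matrix,
# plus the right inverse A^R = A^H (A A^H)^{-1}.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])           # A in C^{m x n} with m = 2, n = 3
m, n = A.shape
r = np.linalg.matrix_rank(A)
print(r)        # rank(A) = 2 = dim R(A) = dim R(A^H)
print(n - r)    # dim N(A)   = 1: one free variable in Ax = b
print(m - r)    # dim N(A^H) = 0: no constraints on b, so Ax = b always solvable

# Full row rank => a right inverse exists; one choice is A^H (A A^H)^{-1}.
AR = A.T @ np.linalg.inv(A @ A.T)
print(A @ AR)   # -> 2 x 2 identity, confirming A A^R = I_{m x m}
```

Because dim N(A) = 1, solutions of Ax = b form a one-parameter family; the right inverse picks out one of them for every b.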
Least Squares Problems
Consider the problem

    min_x ‖b − Ax‖    (35)

where ‖z‖ := √(z^H z). This is used (primarily) when Ax = b does not possess a solution. The solution is given by the so-called normal equations:

    A^H A x = A^H b    (36)

Here, x need not be unique, but the vector Ax will be unique: Ax is the projection of b onto the range (column space) of A.

Minimum Norm Problems
Assume that Ax = b has a solution. Consider the problem

    min { ‖x‖ : Ax = b }.    (37)

Here, we seek the minimum (smallest) norm solution x. Application: minimizing control effort. It can be found by solving the system

    AA^H v = b    (38)

for any such v and letting

    x = A^H v.    (39)

While v need not be unique, x will be unique.

Eigenvalues and Eigenvectors

    Ax = λx,  x ≠ 0    (40)

These will help us understand the natural modes of a dynamical system. Eigenvalues are the roots of

    det(sI − A) = 0.    (41)

Eigenvectors are nonzero vectors v satisfying, for an eigenvalue λ,

    (λI − A)v = 0.    (42)

Matrix Exponential
Assume that A has n linearly independent eigenvectors (we say that A is diagonalizable). We then have

    e^{At} = Σ_i e^{λ_i t} v_i w_i^H    (43)

where V = [v_1 ... v_n], W = V^{-1}, and the i-th row of W is designated w_i^H. The column vectors v_i are called right eigenvectors of A since A v_i = λ_i v_i. The row vectors w_i^H are called left eigenvectors of A since w_i^H A = λ_i w_i^H.

Singular Value Decomposition
Will help us understand the input/output directionality properties of dynamical systems. Has the form

    M = Σ_i σ_i u_i v_i^H    (44)

The σ_i are the singular values of M. The column vectors v_i are called right singular vectors of M and satisfy

    M^H M v_i = σ_i² v_i.    (45)
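Both optimization problems above reduce to solving a small linear system, which a short sketch makes concrete. The example matrices are made up for illustration:

```python
import numpy as np

# 1) Least squares, eq. (35)-(36): min_x ||b - Ax|| via the normal equations
#    A^H A x = A^H b. This A is tall with full column rank, so x is unique.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 1.0])
x_ls = np.linalg.solve(A.T @ A, A.T @ b)
# A x_ls is the projection of b onto R(A), so the residual b - A x_ls
# is orthogonal to every column of A:
print(A.T @ (b - A @ x_ls))               # -> [0, 0]

# 2) Minimum norm, eq. (37)-(39): among all solutions of Ax = b, the smallest
#    one is x = A^H v where A A^H v = b. This A is wide with full row rank.
A2 = np.array([[1.0, 1.0, 0.0]])
b2 = np.array([2.0])
v = np.linalg.solve(A2 @ A2.T, b2)
x_mn = A2.T @ v
print(x_mn)                               # -> [1, 1, 0]
```

In the second problem, [2, 0, 0] and [0, 2, 0] also solve A2 x = b2, but [1, 1, 0] has the smallest norm; this is exactly the "minimize control effort" idea mentioned above.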
The row vectors u_i^H are called left singular vectors of M and satisfy

    M M^H u_i = σ_i² u_i.    (46)

Also,

    M v_i = σ_i u_i    (47)

where v_i^H v_j = 1 if i = j and zero if i ≠ j, and u_i^H u_j = 1 if i = j and zero if i ≠ j. The v_i form a unitary matrix; i.e. V^{-1} = V^H. The u_i form a unitary matrix; i.e. U^{-1} = U^H. Moreover,

    σ_i(M) = √(λ_i(M^H M))    (48)

    σ_max(M) = max_{v≠0} ‖Mv‖ / ‖v‖    (49)
    σ_min(M) = min_{v≠0} ‖Mv‖ / ‖v‖    (50)

The above shows how the minimum and maximum singular values can be used to quantify the maximum and minimum amplification properties of a matrix. Note the ordering of my language! Here is what is meant: if σ_min(M) is large, then we say that M amplifies vectors greatly; if σ_max(M) is small, then we say that M attenuates vectors greatly.

Note: a 2 × 2 (nonsingular) real matrix maps the unit circle onto an ellipse; 2σ_max is the length of the major axis and 2σ_min is the length of the minor axis.

Modal Analysis
Consider the unforced (zero-input) dynamical system

    ẋ = Ax,  x(0) = x_o    (51)
    x(t) = e^{At} x_o    (52)

Assume that A has n linearly independent eigenvectors. In such a case,

    x(t) = e^{At} x_o = Σ_i (w_i^H x_o) e^{λ_i t} v_i    (53)

Moreover, if x_o = v_i then

    x(t) = e^{λ_i t} v_i.    (54)

This gives us a physical interpretation of poles (eigenvalues, natural modes).

Transmission Zeros
This is an energy absorption concept. To see this, consider the application of u(t) = sin(ω_o t + θ) to the stable LTI system

    H(s) = (s² + ω_o²) / ((s+1)(s+2)(s+3)).

Doing so yields a steady-state output y_ss = 0 because H(jω_o) = 0. See the method of the transfer function (MOTF) above. For MIMO systems, there are directionality issues as well. A dynamical system has a transmission zero at z_o if there exist vectors u_o and x_o (not both zero) such that if u = u_o e^{z_o t} and x(0) = x_o, then x = x_o e^{z_o t} and y = 0 for all t > 0.
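The modal expansion (53) can be verified directly against the matrix exponential. A small sketch with a made-up diagonalizable stable A (the example matrix and time are assumptions):

```python
import numpy as np
from scipy.linalg import expm

# Modal expansion x(t) = sum_i (w_i^H x_o) e^{lambda_i t} v_i, eq. (53),
# checked against x(t) = e^{At} x_o computed with the matrix exponential.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
lam, V = np.linalg.eig(A)                  # right eigenvectors v_i: columns of V
W = np.linalg.inv(V)                       # left eigenvectors w_i^H: rows of W

x0 = np.array([1.0, 0.0])
t = 0.7
x_modal = sum((W[i] @ x0) * np.exp(lam[i] * t) * V[:, i] for i in range(2))
x_expm = expm(A * t) @ x0
print(np.allclose(x_modal, x_expm))        # -> True
```

Each term (w_i^H x_o) e^{λ_i t} v_i is one natural mode: w_i^H x_o says how strongly the initial condition excites the mode, λ_i sets its decay/growth rate, and v_i sets its direction in state space.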
Controllability (Existence Concept)
Given arbitrary n-dimensional vectors x_1 and x_f and an initial time t_o, does there exist a control u and a finite time t_f > t_o that transfer the state from the initial condition x(t_o) = x_1 to the final condition x(t_f) = x_f? When can we alter (move) all of the modes of a system via the control?
- Controllability matrix, rank test
- Controllability Gramian, rank test
- PBH eigenvalue-eigenvector tests - give insight into loss of controllability
- Construction of a minimum-energy state-transferring control using the controllability Gramian
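The controllability matrix rank test listed above can be sketched in a few lines; the pair (A, B) is controllable iff [B, AB, ..., A^{n−1}B] has rank n. The matrices below are made-up examples:

```python
import numpy as np

# Controllability matrix rank test for the pair (A, B).
def ctrb(A, B):
    """Stack [B, AB, ..., A^{n-1} B] column-wise."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
print(np.linalg.matrix_rank(ctrb(A, B)))    # -> 2 = n: controllable

# An uncontrollable pair: the second state is decoupled from the input,
# so its mode (eigenvalue -2) cannot be moved by any control.
A2 = np.diag([-1.0, -2.0])
B2 = np.array([[1.0], [0.0]])
print(np.linalg.matrix_rank(ctrb(A2, B2)))  # -> 1 < n: uncontrollable
```

The second example is exactly the situation the PBH tests diagnose: the left eigenvector of A2 for the eigenvalue −2 is orthogonal to B2, so that mode is unreachable from the input.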
Stabilizability
When does there exist a stabilizing control law? When can we alter (move) the unstable modes of a system via the control?
- PBH eigenvalue-eigenvector tests

Observability (Uniqueness Concept)
Given knowledge of y and u, does there exist a finite time t_f > t_o such that x(t_o) can be determined uniquely? When can all of the modes of a system be observed through y?
- Observability matrix, rank test
- Observability Gramian, rank test
- PBH eigenvalue-eigenvector tests - give insight into loss of observability
- Construction of the initial state using the observability Gramian

Detectability
What goes here??? You tell me. When can we observe the unstable modes of a system via the output?
- PBH eigenvalue-eigenvector tests

Pole-Zero Cancellations
Can be used to explain loss of controllability and/or observability. Consider the LTI systems

    H_1 = 1/(s − p_1),  H_2 = (s − p_1)/(s − p_2)

where p_1 ≠ p_2. The cascade system H_2 H_1 is unobservable since the mode p_1 is unobservable from the output; it is controllable from the input. The cascade system H_1 H_2 is uncontrollable since the mode p_1 is uncontrollable from the input; it is observable from the output.

Full State Feedback
Let

    u = −Gx + v    (55)

in ẋ = Ax + Bu, so that

    ẋ = (A − BG)x + Bv    (56)

Typically, we wish to select the control gain matrix G ∈ R^{m×n} so that A − BG is stable. See the LQR method below.

Pole Placement Concepts
Uncontrollable modes cannot be moved via full state feedback. Controllability properties are invariant under full state feedback. Observability properties are NOT invariant under full state feedback.

Model-Based State Observers/Estimators

    x̂̇ = Ax̂ + Bu + H(y − ŷ)    (57)
    ŷ = Cx̂ + Du    (58)

The matrix H ∈ R^{n×p} is referred to as the filter gain matrix. The above structure is often referred to as an output injection structure. Let x̃ := x − x̂ denote the state estimation error. Combining the above with ẋ = Ax + Bu, y = Cx + Du yields the following state estimation error dynamics:

    x̃̇ = (A − HC) x̃    (59)
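Choosing G so that A − BG is stable, and H so that A − HC is stable, are both pole placement problems; by duality the observer gain can be computed with the same routine applied to (A^T, C^T). A sketch using scipy's pole placement on a made-up double-integrator example (the matrices and pole locations are assumptions):

```python
import numpy as np
from scipy.signal import place_poles

# Full state feedback u = -G x: pick G so A - BG has the desired eigenvalues.
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator: unstable
B = np.array([[0.0], [1.0]])
G = place_poles(A, B, [-1.0, -2.0]).gain_matrix
print(sorted(np.linalg.eigvals(A - B @ G).real))   # -> [-2.0, -1.0]

# Observer gain by duality: H = place_poles(A^T, C^T, poles).gain_matrix^T
# makes A - HC stable; the observer poles are usually chosen faster than
# the state feedback poles.
C = np.array([[1.0, 0.0]])
H = place_poles(A.T, C.T, [-4.0, -5.0]).gain_matrix.T
print(sorted(np.linalg.eigvals(A - H @ C).real))   # -> [-5.0, -4.0]
```

Both placements succeed here because the double integrator is controllable from B and observable from C; an uncontrollable (or unobservable) mode could not be moved this way.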
The matrix H ∈ R^{n×p} is generally selected so that A − HC is stable. Unobservable modes cannot be moved via output injection. Observability properties are invariant under output injection. Controllability properties are NOT invariant under output injection.

Model-Based Compensators
Combining the full state feedback and model-based observer/estimation concepts yields the model-based compensator

    K(s) = G(sI − A + BG + H(C − DG))^{-1} H    (60)

Separation Principle
When K is inserted into a feedback loop with the design plant P = [A, B, C, D], the closed-loop poles are precisely the roots of the following polynomials:

    det(sI − A + BG) = 0    (61)
    det(sI − A + HC) = 0    (62)

One is associated with the full state feedback design; the other is associated with the observer design. Hence the name separation principle.

Internal Model Principle
Examples of the internal model principle are as follows: we need an integrator 1/s within the feedback loop to follow step reference commands r; we need an integrator within the compensator to follow step input disturbances d_i; we need 1/s² for ramps, 1/(s² + ω_o²) for sinusoids, etc.

Linear Quadratic Regulator (LQR) Design Method
Assuming that (A, B, M) is stabilizable and detectable, minimize the quadratic cost functional

    J(u) := ∫_0^∞ ( z^T(τ) z(τ) + u^T(τ) R u(τ) ) dτ    (63)

where R = R^T > 0 (positive definite), subject to the dynamical constraints

    ẋ = Ax + Bu,  x(0) = x_o    (64)
    z = Mx    (65)

Here, R is called the control weighting matrix and Q = M^T M is called the state weighting matrix.

Solution:

    u(t) = −G x(t)    (66)

where

    G = R^{-1} B^T K    (67)

and K is the unique symmetric, at least positive semi-definite solution of the control algebraic Riccati equation (CARE):

    0 = KA + A^T K + M^T M − K B R^{-1} B^T K    (68)

Given the above, A − BG will be stable! LQR can yield other nice properties, e.g. robustness properties. To learn more, take EEE588 on Multivariable Control Design.
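The LQR recipe (66)-(68) can be sketched numerically: solve the CARE, form G, and check that A − BG is stable. A minimal example on a made-up double-integrator plant (the matrices and weights are assumptions for illustration):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQR: solve 0 = KA + A^T K + M^T M - K B R^{-1} B^T K for K, then
# G = R^{-1} B^T K stabilizes the plant: A - BG is stable.
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])
M = np.array([[1.0, 0.0]])               # z = M x, so Q = M^T M weights position
R = np.array([[1.0]])

K = solve_continuous_are(A, B, M.T @ M, R)   # symmetric PSD CARE solution
G = np.linalg.solve(R, B.T @ K)              # G = R^{-1} B^T K
print(K)                                     # -> [[sqrt(2), 1], [1, sqrt(2)]]
print(np.linalg.eigvals(A - B @ G))          # all in the open left half plane
```

For this example the CARE solution is K = [[√2, 1], [1, √2]], so G = [1, √2] and the closed-loop characteristic polynomial is s² + √2 s + 1, a stable Butterworth pattern; this is the kind of robustness-friendly behavior alluded to above.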