Linear Control Theory


Linear Control Theory (Lecture notes)
Version 0.7
Dmitry Gromov
April 21, 2017

Contents

Preface

Part I. CONTROL SYSTEMS: ANALYSIS

1 Introduction
  1.1 General notions
  1.2 Typical problems solved by control theory
  1.3 Linearization
    1.3.1 * Hartman-Grobman theorem
2 Solutions of an LTV system
  2.1 Fundamental matrix
  2.2 State transition matrix
  2.3 Time-invariant case
  2.4 Controlled systems: variation of constants formula
3 Controllability and observability
  3.1 Controllability of an LTV system
    3.1.1 * Optimality property of ū
  3.2 Observability of an LTV system
  3.3 Duality principle
  3.4 Controllability of an LTI system
    3.4.1 Kalman's controllability criterion
    3.4.2 Decomposition of a non-controllable LTI system
    3.4.3 Hautus controllability criterion
  3.5 Observability of an LTI system
    3.5.1 Decomposition of a non-observable LTI system
  3.6 Canonical decomposition of an LTI control system
4 Stability of LTI systems
  4.1 Matrix norm and related inequalities
  4.2 Stability of an LTI system
    4.2.1 Basic notions
    4.2.2 * Some more about stability
    4.2.3 Lyapunov's criterion of asymptotic stability
    4.2.4 Algebraic Lyapunov matrix equation
  4.3 Hurwitz stable polynomials
    4.3.1 Stodola's necessary condition
    4.3.2 Hurwitz stability criterion
  4.4 Frequency domain stability criteria
5 Linear systems in frequency domain
  5.1 Laplace transform
  5.2 Transfer matrices
  5.3 Properties of a transfer matrix
  5.4 Transfer functions
  5.5 Physical interpretation of a transfer function
  5.6 Bode plot
  5.7 BIBO stability

Part II. CONTROL SYSTEMS: SYNTHESIS

6 Feedback control
  6.1 Introduction
  6.2 Reference tracking control
  6.3 Feedback transformation
  6.4 Pole placement procedure
7 Linear-quadratic regulator (LQR)
  7.1 Optimal control basics
  7.2 Dynamic programming
  7.3 Linear-quadratic optimal control problem
8 State observers
  8.1 Full state observer
  8.2 Reduced state observer

APPENDIX

A Canonical forms of a matrix
  A.1 Similarity transformation
  A.2 Frobenius companion matrix
    A.2.1 Transformation of A to A_F
    A.2.2 Transformation of A to Ā_F
  A.3 Jordan form

Bibliography

Preface

These lecture notes are intended to provide a supplement to the one-semester course "Linear control systems" taught to third-year bachelor students at the Faculty of Applied Mathematics and Control Processes, Saint Petersburg State University. The course was developed in order to familiarize students with the basic concepts of linear control theory and to provide them with a set of basic tools that can be used in subsequent courses on robust control, nonlinear control, control of time-delay systems, and so on.

The main emphasis is put on understanding the internal logic of the theory; many particular results are omitted, and some parts of the proofs are left to the students as exercises. On the other hand, there are certain topics, marked with asterisks, that are not taught in the course but are included in the lecture notes in the belief that they can help interested students get deeper into the matter. Some of these topics are included in the course "Modern control theory" taught to first-year master students of the specialization "Operations research and systems analysis".

The lecture notes do not include homework exercises. These are given by tutors and elaborated during the weekly seminars. All exercises and examples included in the lecture notes are intended to introduce certain concepts that will be used later on in the course.

Part I. CONTROL SYSTEMS: ANALYSIS

Chapter 1. Introduction

1.1 General notions

We begin by considering the following system of first-order nonlinear differential equations:

    ẋ(t) = f(x(t), u(t), t),   x(t_0) = x_0,
    y(t) = g(x(t), u(t), t),                                            (1.1)

where x(t) ∈ R^n, u(t) ∈ R^m, and y(t) ∈ R^k for all t ∈ I, I ∈ {[t_0, T], [t_0, ∞)};¹ f(x, u, t) and g(x, u, t) are continuously differentiable w.r.t. all their arguments with uniformly bounded first derivatives, and u(t) is a measurable function. With these assumptions, the system (1.1) has a unique solution for any pair (t_0, x_0) and any u(t), which can be extended to the whole interval I.

In the following we will say that x(t) is the state, u(t) is the input (or the control), and y(t) is the output. Below we consider these notions in more detail.

State. The state is defined as a quantity that uniquely determines the system's future evolution for any (admissible) control u(t). We consider systems with x(t) being an element of a vector space R^n, n ∈ {1, 2, ...}. NB: other cases are possible! For instance, the state of a time-delay system is an element of a functional space.

Control. The control u(·) is an element of the functional space of admissible controls: u(·) ∈ 𝒰, where 𝒰 can be defined, e.g., as a set of measurable, L_2 or L_∞, piecewise continuous or piecewise constant functions from I to U ⊆ R^m, where U is referred to as the set of admissible control values. In this course we will assume that U = R^m and 𝒰 is the set of piecewise continuous functions.

Definition. Given (t_0, x_0) and u(t), t ∈ I, x̄(t) is said to be the solution of (1.1) if x̄(t_0) = x_0 and if (d/dt)x̄(t) = f(x̄(t), u(t), t) almost everywhere.

We will often distinguish the following special cases:

¹ Whether we consider a closed and finite or a half-open and infinite interval depends on the studied problem. For instance, the time-dependent controllability problem is considered on a closed interval, while feedback stabilization requires infinite time.

Uncontrolled dynamics. If we set u(t) = 0 for all t ∈ [t_0, ∞), the system (1.1) turns into

    ẋ(t) = f_0(x(t), t),   x(t_0) = x_0,
    y(t) = g_0(x(t), t),                                                (1.2)

where f_0(x, t) = f(x, 0, t) and, resp., g_0(x, t) = g(x, 0, t). The dynamics of (1.2) depend only on the initial value x(t_0) = x_0.

Time-invariant dynamics. Let f and g not depend explicitly on t. Then (1.1) turns into

    ẋ(t) = f(x(t), u(t)),   x(t_0) = x_0,                               (1.3)
    y(t) = g(x(t), u(t)).

The system (1.3) is invariant under time shifts and hence we can set t_0 = 0.

1.2 Typical problems solved by control theory

Below we list some problems which are addressed by control theory.

1. How to steer the system from point A (i.e., x(t_0) = x_A) to point B (x(t_1) = x_B)? Open-loop control.
2. Does the above problem always possess a solution? Controllability analysis.
3. How to counteract possible deviations from the precomputed trajectory? Feedback control.
4. How to get the necessary information about the system's state? Observer design.
5. Is the above problem always solvable? Observability analysis.
6. How to drive the system to the zero steady state from any initial position? Stabilization.
7. And so on and so forth... many problems are beyond the scope of our course.

1.3 Linearization

Typically, there are two ways to study a nonlinear system: a global and a local one. The global analysis is done using methods from nonlinear control theory, while the local analysis can be performed using linear control theory. The reason is that locally the behavior of most nonlinear systems can be well captured by a linear model. The procedure of substituting a linear model for a nonlinear one is referred to as linearization.

Linearization in the neighborhood of an equilibrium point. The state x* is said to be an equilibrium (or fixed) point of (1.1) if f(x*, 0, t) = 0 for all t. One can also consider controlled equilibria, i.e., pairs (x*, u*) s.t. f(x*, u*, t) = 0 for all t.

Let x* be an equilibrium point of (1.1). Consider the dynamics of (1.1) in a sufficiently small neighborhood of x*, denoted by U(x*). Let Δx(t) = x(t) − x* be the deviation from the equilibrium point x*. We write the DE for Δx(t), expanding the r.h.s. into a Taylor series:

    (d/dt)Δx(t) = f(x*, 0, t) + ∂f/∂x|_{x=x*, u=0} Δx(t) + ∂f/∂u|_{x=x*, u=0} u(t) + H.O.T.²

Introducing the notation A(t) = ∂f/∂x|_{x=x*, u=0} and B(t) = ∂f/∂u|_{x=x*, u=0}, recalling that f(x*, 0, t) = 0 and, finally, dropping the higher-order terms, we get

    (d/dt)Δx(t) = A(t)Δx(t) + B(t)u(t).                                 (1.4)

The equation (1.4) is said to be Linear Time-Variant (LTV). If the initial nonlinear equation was time-invariant, we get a Linear Time-Invariant (LTI) equation:

    (d/dt)Δx(t) = AΔx(t) + Bu(t).                                       (1.5)

Note that the linearization procedure can be applied to the second equation in (1.1) as well, thus yielding y(t) = C(t)Δx(t) + D(t)u(t) in the LTV case, or y(t) = CΔx(t) + Du(t) in the LTI case (there could also be a constant term, which can easily be eliminated by passing to ỹ(t) = y(t) − g(x*, 0, t)).

Linearization in the neighborhood of a system's trajectory. Consider the time-invariant nonlinear system (1.3). Let (x*(t), u*(t)) be a system's trajectory and the corresponding control. Denote δx(t) = x(t) − x*(t) and δu(t) = u(t) − u*(t). The DE for δx(t) is

    (d/dt)δx(t) = ẋ(t) − ẋ*(t) = f(x(t), u(t)) − f(x*(t), u*(t))
                = ∂f/∂x|_{x=x*(t), u=u*(t)} δx(t) + ∂f/∂u|_{x=x*(t), u=u*(t)} δu(t) + H.O.T.   (1.6)

Denoting A(t) = ∂f/∂x|_{x=x*(t), u=u*(t)} and B(t) = ∂f/∂u|_{x=x*(t), u=u*(t)} and dropping the higher-order terms, we get the LTV system (1.4). Note that even though the initial nonlinear system was time-invariant, its linearization around the system's trajectory (x*(t), u*(t)) is time-variant!

² H.O.T. = higher-order terms.
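The Jacobian matrices in (1.4) can also be obtained numerically. The sketch below (an illustration, not part of the notes; the pendulum model and all function names are assumptions) approximates ∂f/∂x and ∂f/∂u by central finite differences for a pendulum with torque input, linearized at the lower equilibrium x* = 0, u* = 0:

```python
import numpy as np

# Illustrative model (an assumption): pendulum with torque input,
# f(x, u) = (x2, -sin(x1) + u), equilibrium at x* = 0, u* = 0.
def pendulum(x, u):
    return np.array([x[1], -np.sin(x[0]) + u[0]])

def jacobians(f, x_star, u_star, eps=1e-6):
    """A = df/dx and B = df/du at (x_star, u_star), by central differences."""
    n, m = len(x_star), len(u_star)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x_star + dx, u_star) - f(x_star - dx, u_star)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x_star, u_star + du) - f(x_star, u_star - du)) / (2 * eps)
    return A, B

A, B = jacobians(pendulum, np.zeros(2), np.zeros(1))
# Linearization: A = [[0, 1], [-1, 0]], B = [[0], [1]]
```

Here the linearized pendulum is a harmonic oscillator; note that its eigenvalues are purely imaginary, so this equilibrium is not hyperbolic in the sense of the next subsection.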

1.3.1 * Hartman-Grobman theorem

A justification for using linearized models is given by the Hartman-Grobman theorem, which is based on the notion of a hyperbolic fixed point.

Definition. The equilibrium (fixed) point x* is said to be hyperbolic if all eigenvalues of the linearization matrix A have non-zero real parts.

Theorem (Hartman-Grobman). The set of solutions of (1.1) in the neighborhood of a hyperbolic equilibrium point x* is homeomorphic to that of the linearized system (1.4) in the neighborhood of the origin.

Quoting Wikipedia: "The Hartman-Grobman theorem ... asserts that linearization (our first resort in applications) is unreasonably effective in predicting qualitative patterns of behavior."

Chapter 2. Solutions of an LTV system

2.1 Fundamental matrix

Consider the set of homogeneous (i.e., uncontrolled) LTV differential equations:

    ẋ(t) = A(t)x(t),   x(t_0) = x_0,                                    (2.1)

where x(t) ∈ R^n, t ∈ [t_0, T], and A(t) is component-wise continuous and bounded.

Proposition. The set of all solutions of (2.1) forms an n-dimensional vector space over R.

Definition. A fundamental set of solutions of (2.1) is any set {x_i(·)}_{i=1}^n such that for some t ∈ [t_0, T] the vectors {x_i(t)}_{i=1}^n form a basis of R^n. An [n × n] matrix function of t, Ψ(·), is said to be a fundamental matrix for (2.1) if the n columns of Ψ(·) consist of n linearly independent solutions of (2.1), i.e.,

    Ψ̇(t) = A(t)Ψ(t),

where Ψ(t) = [ψ_1(t) ... ψ_n(t)].

Note that there are many possible fundamental matrices. For instance, an [n × n] matrix Ψ(t) satisfying Ψ̇(t) = A(t)Ψ(t) with Ψ(t_0) = I_{n×n} is a fundamental matrix.

Example. Consider the system

    ẋ(t) = [ 0  0 ] x(t).                                               (2.2)
           [ t  0 ]

That is, ẋ_1(t) = 0, ẋ_2(t) = t x_1(t). The solution is: x_1(t) = x_1(t_0), and x_2(t) = (t²/2)x_1(t_0) − (t_0²/2)x_1(t_0) + x_2(t_0).

Set t_0 = 0 and let ψ_1(0) = [0; 1]. Then ψ_1(t) = [0; 1]. Now let ψ_2(0) = [1; 0]; we have ψ_2(t) = [1; t²/2]. Thus a fundamental matrix for the system is given by:

    Ψ(t) = [ 0  1    ]
           [ 1  t²/2 ]
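One can verify numerically that this Ψ(t) is indeed a fundamental matrix (a sketch; the matrices are the reconstruction given in the example above):

```python
import numpy as np

# Example data: A(t) = [[0, 0], [t, 0]] and Psi(t) = [[0, 1], [1, t**2/2]].
def A(t):
    return np.array([[0.0, 0.0], [t, 0.0]])

def Psi(t):
    return np.array([[0.0, 1.0], [1.0, t**2 / 2.0]])

# Psi'(t) = A(t) Psi(t), verified by a central finite difference.
h = 1e-6
for t in (0.5, 1.0, 2.0):
    dPsi = (Psi(t + h) - Psi(t - h)) / (2 * h)
    assert np.allclose(dPsi, A(t) @ Psi(t), atol=1e-6)

# The columns stay linearly independent: det Psi(t) = -1 for all t.
assert abs(np.linalg.det(Psi(1.7)) + 1.0) < 1e-12
```

The constant non-zero determinant is no accident: by Liouville's formula (property 5 of the next section) and tr A(t) = 0 here, det Ψ(t) cannot change with t.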

Proposition. The null space of a fundamental matrix is invariant for all t ∈ [t_0, T] and is equal to {0}.

Corollary. Given a fundamental matrix Ψ(t), its inverse Ψ⁻¹(t) exists for all t ∈ [t_0, T].

2.2 State transition matrix

Definition. The state transition matrix Φ(t, t_0) associated with the system (2.1) is the matrix-valued function of t and t_0 which:

1. solves the matrix differential equation Φ̇(t, t_0) = A(t)Φ(t, t_0), t ∈ [t_0, T];
2. satisfies Φ(t, t) = I_{n×n} for any t ∈ [t_0, T].

Proposition. Let Ψ(t) be any fundamental matrix of (2.1). Then Φ(t, τ) = Ψ(t)Ψ⁻¹(τ), t, τ ∈ [t_0, T].

Proof. We have Φ(t_0, t_0) = Ψ(t_0)Ψ⁻¹(t_0) = I. Moreover, Φ̇(t, t_0) = Ψ̇(t)Ψ⁻¹(t_0) = A(t)Ψ(t)Ψ⁻¹(t_0) = A(t)Φ(t, t_0).

Proposition. The solution of (2.1) is given by x(t) = Φ(t, t_0)x_0.

Proof. The initial state is x(t_0) = Φ(t_0, t_0)x_0 = x_0. Next, we show that x(t) = Φ(t, t_0)x_0 satisfies the differential equation: ẋ(t) = Φ̇(t, t_0)x_0 = A(t)Φ(t, t_0)x_0 = A(t)x(t).

Lemma. Properties of the state transition matrix:

1. Φ(t_2, t_1)Φ(t_1, t_0) = Φ(t_2, t_0) (semi-group property).
2. Φ⁻¹(t, t_0) = [Ψ(t)Ψ⁻¹(t_0)]⁻¹ = Ψ(t_0)Ψ⁻¹(t) = Φ(t_0, t).
3. (∂/∂t)Φ(t_0, t) = −Φ(t_0, t)A(t) (hint: differentiate Φ(t_0, t)Φ(t, t_0) = I).
4. If Φ(t, t_0) is the state transition matrix of ẋ(t) = A(t)x(t), then Φ^T(t_0, t) is the state transition matrix of the adjoint equation ż(t) = −A^T(t)z(t).
5. det(Φ(t, t_0)) = e^{∫_{t_0}^t tr(A(s))ds} ≠ 0, where tr(A(t)) denotes the trace of the matrix A(t).
6. If A(t) is a scalar, we have Φ(t, t_0) = e^{∫_{t_0}^t A(s)ds} (NB: this does not hold in general!).

Example. The state transition matrix corresponding to the fundamental matrix found in the preceding example has the following form:

    Φ(t, τ) = [ 1            0 ]
              [ (t² − τ²)/2  1 ]
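Properties 1, 2 and 5 of the lemma can be checked numerically for this Φ(t, τ) (a sketch; the matrix is taken from the example above, for which tr A(t) = 0):

```python
import numpy as np

# Phi(t, tau) = [[1, 0], [(t**2 - tau**2)/2, 1]] for A(t) = [[0, 0], [t, 0]].
def Phi(t, tau):
    return np.array([[1.0, 0.0], [(t**2 - tau**2) / 2.0, 1.0]])

t0, t1, t2 = 0.3, 1.1, 2.4
# 1. semi-group property
assert np.allclose(Phi(t2, t1) @ Phi(t1, t0), Phi(t2, t0))
# 2. Phi(t, t0)^{-1} = Phi(t0, t)
assert np.allclose(np.linalg.inv(Phi(t1, t0)), Phi(t0, t1))
# 5. det Phi(t, t0) = exp(integral of tr A(s) ds) = exp(0) = 1 here
assert abs(np.linalg.det(Phi(t1, t0)) - 1.0) < 1e-12
```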

Exercise. Check that the obtained state transition matrix defines solutions to (2.2).

2.3 Time-invariant case

Consider the time-invariant differential equation:

    ẋ(t) = Ax(t),   x(t_0) = x_0.                                       (2.3)

In this case, Φ(t, t_0) = Φ(t − t_0, 0) = Φ(t − t_0) and Φ̇(t − t_0) = AΦ(t − t_0), Φ(0) = I. We can set t_0 = 0 and consider Φ(t).

Matrix exponential. If A ∈ R^{n×n}, the state transition matrix is (note that 0! = 1):

    Φ(t) = I + At + A²t²/2! + ... = Σ_{i=0}^∞ (t^i/i!)A^i = e^{At},

where the series converges uniformly and absolutely for any finite t. Henceforth, e^{At} will be referred to as the matrix exponential.

Lemma. Properties of the matrix exponential:

1. Ae^{At} = e^{At}A, that is, A commutes with its matrix exponential.
2. (e^{At})⁻¹ = e^{−At}.
3. If P is a nonsingular [n × n] matrix, then e^{P⁻¹AP} = P⁻¹e^{A}P (a similarity transformation = a change of basis).
4. If A is a diagonal matrix, A = diag(a_1, ..., a_n), then e^A = diag(e^{a_1}, ..., e^{a_n}).
5. If A and B commute, i.e., AB = BA, we have e^{A+B} = e^A e^B.

Example (Harmonic motion). Consider the equation

    [ẋ_1(t)]   [ 0  ω ] [x_1(t)]
    [ẋ_2(t)] = [−ω  0 ] [x_2(t)] = Ax(t).

The matrix exponential is thus:

    e^{At} = [1 0] + ωt [ 0 1] − (ω²t²/2!) [1 0] − (ω³t³/3!) [ 0 1] + (ω⁴t⁴/4!) [1 0] + ...
             [0 1]      [−1 0]             [0 1]             [−1 0]             [0 1]

Taking into account that sin(x) = x − x³/3! + x⁵/5! − x⁷/7! + ... and cos(x) = 1 − x²/2! + x⁴/4! − x⁶/6! + ..., we readily obtain:

    e^{At} = [ cos(ωt)  sin(ωt) ]
             [−sin(ωt)  cos(ωt) ],

which is the rotation matrix that rotates the points of the Cartesian plane clockwise.
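A quick numerical check of the harmonic-motion example: a truncated power series for e^{At} (an illustrative stand-in; in practice a library routine such as scipy.linalg.expm is preferable) reproduces the clockwise rotation matrix:

```python
import numpy as np

def expm_series(M, terms=60):
    """Matrix exponential via the truncated power series (illustration only)."""
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for i in range(1, terms):
        term = term @ M / i   # term = M^i / i!
        E = E + term
    return E

w, t = 2.0, 0.7
A = np.array([[0.0, w], [-w, 0.0]])
E = expm_series(A * t)
R = np.array([[np.cos(w * t), np.sin(w * t)],
              [-np.sin(w * t), np.cos(w * t)]])
assert np.allclose(E, R)                                # the rotation matrix
assert np.allclose(E @ expm_series(-A * t), np.eye(2))  # property 2 of the lemma
```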

Exercise. Using the result of the preceding example and the properties of the matrix exponential, determine the matrix exponential e^{At} for the matrix

    A = [ r  φ ]
        [−φ  r ].

Example (Matrix exponential of a Jordan block). Let the [m × m] matrix J be of the form

    J = [ s  1  0  ...  0 ]
        [ 0  s  1  ...  0 ]
        [ .  .  .  ...  . ]
        [ 0  0  ...  s  1 ]
        [ 0  0  ...  0  s ],

where s ∈ C. The matrix J can be written as J = sI + U, where U is the upper shift matrix. First, we observe that I and U commute (as the identity matrix commutes with any square matrix). Thus we can write e^{Jt} = e^{sIt}e^{Ut}. Next, note that U is nilpotent, i.e., U^m = 0. (NB: any triangular matrix with zero main diagonal is nilpotent.) Finally, we have

    e^{Jt} = e^{st} Σ_{i=0}^{m−1} (t^i/i!)U^i.

2.4 Controlled systems: variation of constants formula

Consider the LTV system

    ẋ(t) = A(t)x(t) + B(t)u(t),   x(t_0) = x_0,                         (2.4)

whose homogeneous (uncontrolled) solution is x(t) = Φ(t, t_0)x_0.

Theorem. If Φ(t, t_0) is the state transition matrix for ẋ(t) = A(t)x(t), then the unique solution of (2.4) is given by

    x(t) = Φ(t, t_0)x_0 + ∫_{t_0}^t Φ(t, s)B(s)u(s)ds.                  (2.5)

Proof. Define the new variable z(t) = Φ(t_0, t)x(t). Differentiating z(t) w.r.t. t we get

    ż(t) = Φ̇(t_0, t)x(t) + Φ(t_0, t)ẋ(t) = −Φ(t_0, t)A(t)x(t) + Φ(t_0, t)A(t)x(t) + Φ(t_0, t)B(t)u(t),

where the first two terms cancel. The resulting expression does not contain z(t) in the r.h.s., thus we can integrate it to get the solution:

    z(t) = z(t_0) + ∫_{t_0}^t Φ(t_0, s)B(s)u(s)ds,

whence follows

    x(t) = Φ⁻¹(t_0, t)[x_0 + ∫_{t_0}^t Φ(t_0, s)B(s)u(s)ds] = Φ(t, t_0)x_0 + ∫_{t_0}^t Φ(t, s)B(s)u(s)ds.

Corollary. The solution of a linear time-invariant equation is given by

    x(t) = e^{A(t−t_0)}x_0 + ∫_{t_0}^t e^{A(t−s)}Bu(s)ds.               (2.6)

Example (Exponential input). Consider a (complex-valued)¹ LTI system with zero initial conditions and a scalar exponential input signal e^{σt}:

    ẋ(t) = Ax(t) + be^{σt},   x(t) ∈ C^n,   x(0) = 0,                   (2.7)

where σ ∈ C. The solution of (2.7) is found using (2.6):

    x(t) = ∫_0^t e^{A(t−τ)}be^{στ}dτ,

which can be solved using integration by parts to get

    x(t) = (σI − A)⁻¹(Ie^{σt} − e^{At})b.                               (2.8)

Assume that the parameter σ is equal to an eigenvalue of A. This is referred to as resonance. At first sight it seems that there is a singularity in the solution. To inspect this case more closely we rewrite (2.8) as

    x(t) = e^{At}(σI − A)⁻¹(e^{(σI−A)t} − I)b                           (2.9)

and note that Z⁻¹(e^{Zt} − I) = t Σ_{k=0}^∞ (Zt)^k/(k+1)!, which converges everywhere. Hence we conclude that the solution x(t) is well defined for all σ ∈ C.

¹ We consider an LTI system in the complex domain as we wish to include also harmonic input signals, e.g., u(t) = e^{iωt}. This condition can be dropped if we assume that σ ∈ R.
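The closed form (2.8) can be checked against a direct quadrature of the convolution integral in (2.6). The sketch below uses an assumed two-dimensional example (the matrix A, the vector b, and the value of σ are illustrative choices, not from the notes):

```python
import numpy as np

def expm(M, terms=40):
    """Truncated-series matrix exponential (illustration only)."""
    E, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for i in range(1, terms):
        term = term @ M / i
        E = E + term
    return E

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2 (assumed data)
b = np.array([0.0, 1.0])
sigma, t1 = 0.5, 1.5                       # sigma is not an eigenvalue here

# Closed form (2.8): x(t1) = (sigma I - A)^{-1} (e^{sigma t1} I - e^{A t1}) b
x_closed = np.linalg.solve(sigma * np.eye(2) - A,
                           np.exp(sigma * t1) * b - expm(A * t1) @ b)

# Direct trapezoidal quadrature of (2.6) with x0 = 0:
# x(t1) = int_0^{t1} e^{A(t1 - s)} b e^{sigma s} ds
ts = np.linspace(0.0, t1, 2001)
vals = np.array([expm(A * (t1 - s)) @ (np.exp(sigma * s) * b) for s in ts])
dt = ts[1] - ts[0]
x_quad = ((vals[:-1] + vals[1:]) * 0.5 * dt).sum(axis=0)
assert np.allclose(x_closed, x_quad, atol=1e-5)
```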

Chapter 3. Controllability and observability

3.1 Controllability of an LTV system

When approaching a control system, a first step consists in determining whether the system can be controlled, and to which extent. This type of problem is referred to as the controllability problem. To make this more concrete we consider the following problem statement.

Two-point controllability. Consider the LTV system (2.4). Given the initial state x_0 at time t_0, find an admissible control u such that the system reaches the final state x_1 at time t_1.

Solving this problem amounts to determining an admissible control ū(t) ∈ 𝒰, t ∈ [t_0, t_1] (typically non-unique) that solves the following equation:

    x_1 = Φ(t_1, t_0)x_0 + ∫_{t_0}^{t_1} Φ(t_1, s)B(s)ū(s)ds.           (3.1)

Obviously, the two-point controllability problem is stated in a very limited way. We need a general formulation, as defined below.

Definition. The system (2.4) defined over [t_0, t_1] is said to be completely controllable (or just controllable) on [t_0, t_1] if, given any two states x_0 and x_1, there exists an admissible control that transfers (x_0, t_0) to (x_1, t_1). Otherwise the system is said to be uncontrollable.

Remark. Note that a system can be completely controllable on some interval [t_0, t_1] and uncontrollable on a subinterval [t_0′, t_1′] ⊂ [t_0, t_1]. However, it turns out that if a system is controllable on [t_0, t_1], it will be controllable on any [t_0′, t_1′] ⊇ [t_0, t_1].

Exercise. Prove that controllability on [t_0, t_1] implies controllability on any [t_0′, t_1′] ⊇ [t_0, t_1].

The LTV system is characterized by its structural elements, i.e., the matrices A(t) and B(t). In this sense we can speak about controllability of the pair (A(t), B(t)). Thus our goal will be to characterize the controllability properties of (2.4) in terms of (A(t), B(t)).

To do so we first transform (3.1) to a generic form. Denoting x̂_1 = x_1 − Φ(t_1, t_0)x_0, we rewrite (3.1) as

    x̂_1 = ∫_{t_0}^{t_1} Φ(t_1, s)B(s)ū(s)ds,                            (3.2)

which amounts to determining an admissible input ū(t) that transfers the zero state at t_0 to x̂_1 at t_1. This problem is typically referred to as the reachability problem. One can easily see that for a linear system the controllability and the reachability problems are equivalent.

Using (3.2) we can give the following characterization of two-point controllability.

Proposition. The pair (x_0, x_1) is controllable iff (x_1 − Φ(t_1, t_0)x_0) belongs to the range of the linear map L(u), where

    L(u) = ∫_{t_0}^{t_1} Φ(t_1, s)B(s)u(s)ds.                           (3.3)

The above condition is particularly difficult to check, as the map L is defined on the infinite-dimensional space of admissible controls 𝒰. We would prefer to have some finite-dimensional criterion. Such a criterion will be formulated below, but first we present the following formal result.

Lemma. Let G(t) be an [n × m] matrix whose elements are continuous functions of t, t ∈ [t_0, t_1]. A vector x ∈ R^n lies in the range space of L(u) = ∫_{t_0}^{t_1} G(s)u(s)ds if and only if it lies in the range space of the matrix

    W(t_0, t_1) = ∫_{t_0}^{t_1} G(s)G^T(s)ds.

Proof. (if) If x ∈ R(W(t_0, t_1)), then there exists η s.t. x = W(t_0, t_1)η. Take ū = G^T η; then L(ū) = W(t_0, t_1)η = x and so x ∈ R(L).

(only if) Let there be x_1 ∉ R(W(t_0, t_1)). Then there exists x_2 ∈ R^⊥(W(t_0, t_1)), i.e., x_2^T W(t_0, t_1) = 0, such that x_2^T x_1 ≠ 0 (since x_1 ∉ R(W(t_0, t_1)), its projection onto R^⊥(W(t_0, t_1)) is non-zero; take x_2 to be this projection). Suppose, ad absurdum, that there exists a control u_1 s.t. ∫_{t_0}^{t_1} G(s)u_1(s)ds = x_1. Then we have

    x_2^T ∫_{t_0}^{t_1} G(s)u_1(s)ds = x_2^T x_1 ≠ 0.                   (3.4)

But x_2^T W(t_0, t_1) = 0 and so

    x_2^T W(t_0, t_1)x_2 = ∫_{t_0}^{t_1} [x_2^T G(s)][G^T(s)x_2]ds = ∫_{t_0}^{t_1} ‖G^T(s)x_2‖²ds = 0.

This implies x_2^T G(t) = 0 for all t ∈ [t_0, t_1], whence a contradiction of (3.4) follows.

Now we can use the results of the proposition and lemma above to formulate the following fundamental theorem on controllability.

Theorem. The pair (x_0, x_1) is controllable if and only if (x_1 − Φ(t_1, t_0)x_0) belongs to the range space of

    W(t_0, t_1) = ∫_{t_0}^{t_1} Φ(t_1, s)B(s)B^T(s)Φ^T(t_1, s)ds.       (3.5)

Moreover, if η is a solution of W(t_0, t_1)η = (x_1 − Φ(t_1, t_0)x_0), then

    ū(t) = B^T(t)Φ^T(t_1, t)η

is one possible control that accomplishes the desired transfer.

The matrix W(t_0, t_1) is called the controllability Grammian. The result of the theorem can be readily seen to encompass the complete controllability case.

Corollary. The system (2.4) is completely controllable on [t_0, t_1] if and only if rank[W(t_0, t_1)] = n.

Properties of the controllability Grammian:

1. W(t_0, t_1) = W^T(t_0, t_1);
2. W(t_0, t_1) is positive semi-definite for t_1 ≥ t_0;
3. W(t_0, t) satisfies the linear matrix differential equation (note that we fix t_0 and vary the upper limit t):

       (d/dt)W(t_0, t) = A(t)W(t_0, t) + W(t_0, t)A^T(t) + B(t)B^T(t),   W(t_0, t_0) = 0;

4. W(t_0, t_1) satisfies the functional equation

       W(t_0, t_1) = Φ(t_1, t)W(t_0, t)Φ^T(t_1, t) + W(t, t_1).

Note that some authors define the controllability Grammian as

    W̃(t_0, t_1) = ∫_{t_0}^{t_1} Φ(t_0, s)B(s)B^T(s)Φ^T(t_0, s)ds.

These two forms of the Gram matrix are congruent, i.e., they are related by the following transformation:

    W(t_0, t_1) = Φ(t_1, t_0)W̃(t_0, t_1)Φ^T(t_1, t_0).

3.1.1 * Optimality property of ū

As the proposition above states, the pair of states x_0 and x_1 is controllable on [t_0, t_1] iff (x_1 − Φ(t_1, t_0)x_0) belongs to the range of L(u), (3.3). In general, there is a set of controls

    U_{x_0,x_1} = L⁻¹(x_1 − Φ(t_1, t_0)x_0) ⊆ 𝒰

which satisfy this condition. That is to say, there are potentially infinitely many controls that bring the system from x(t_0) = x_0 to x(t_1) = x_1. However, it turns out that the control ū defined in the theorem above enjoys a very particular property.

We assume for simplicity that the system (2.4) is completely controllable on [t_0, t_1] and hence W(t_0, t_1) is of full rank. Then we can write

    ū(t) = B^T(t)Φ^T(t_1, t)W⁻¹(t_0, t_1)(x_1 − Φ(t_1, t_0)x_0) ∈ U_{x_0,x_1}.     (3.6)

We have the following result.

Theorem. For any v ∈ U_{x_0,x_1}, v ≠ ū, it holds that

    ∫_{t_0}^{t_1} ‖ū(s)‖²ds < ∫_{t_0}^{t_1} ‖v(s)‖²ds.                  (3.7)

Proof. Since both ū and v belong to U_{x_0,x_1}, i.e., L(ū) = L(v) = (x_1 − Φ(t_1, t_0)x_0), and due to the linearity of L, we have

    L(ū − v) = ∫_{t_0}^{t_1} Φ(t_1, s)B(s)(ū(s) − v(s))ds = 0.

Premultiplying the preceding expression by (x_1 − Φ(t_1, t_0)x_0)^T [W⁻¹(t_0, t_1)]^T we get

    ∫_{t_0}^{t_1} (x_1 − Φ(t_1, t_0)x_0)^T [W⁻¹(t_0, t_1)]^T Φ(t_1, s)B(s)(ū(s) − v(s))ds = ∫_{t_0}^{t_1} ū^T(s)(ū(s) − v(s))ds = 0,

whence follows

    ∫_{t_0}^{t_1} ‖ū(s)‖²ds = ∫_{t_0}^{t_1} ⟨ū(s), v(s)⟩ds.

To complete the proof we consider

    0 < ∫_{t_0}^{t_1} ⟨ū(s) − v(s), ū(s) − v(s)⟩ds
      = ∫_{t_0}^{t_1} ⟨ū(s), ū(s)⟩ds − 2∫_{t_0}^{t_1} ⟨ū(s), v(s)⟩ds + ∫_{t_0}^{t_1} ⟨v(s), v(s)⟩ds
      = −∫_{t_0}^{t_1} ⟨ū(s), ū(s)⟩ds + ∫_{t_0}^{t_1} ⟨v(s), v(s)⟩ds,

which implies

    ∫_{t_0}^{t_1} ‖ū(s)‖²ds < ∫_{t_0}^{t_1} ‖v(s)‖²ds,

as required.

This result can be interpreted in the sense that the control ū(t) has the least energy among all controls steering the system (2.4) from x(t_0) = x_0 to x(t_1) = x_1. This result is a precursor of optimal control theory, which we will touch upon later.
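The formula (3.6) can be made concrete for the double integrator ẍ = u (an assumed example, not from the notes): with A = [[0, 1], [0, 0]] and B = [[0], [1]] one has e^{At} = [[1, t], [0, 1]], and the Grammian (3.5) evaluates in closed form to W(0, T) = [[T³/3, T²/2], [T²/2, T]]:

```python
import numpy as np

# Minimum-energy steering of the double integrator (assumed example).
T = 1.0
x1 = np.array([1.0, 0.0])            # move one unit and come to rest; x0 = 0

def phiB(tau):                       # e^{A tau} B = (tau, 1) for this system
    return np.array([tau, 1.0])

W = np.array([[T**3 / 3, T**2 / 2], [T**2 / 2, T]])
eta = np.linalg.solve(W, x1)         # W eta = x1 - Phi(T, 0) x0 = x1 here

def u_bar(t):                        # u(t) = B^T e^{A^T (T - t)} eta, cf. (3.6)
    return phiB(T - t) @ eta

# Verify the transfer: x(T) = int_0^T e^{A(T-s)} B u(s) ds = x1 (trapezoidal rule).
ts = np.linspace(0.0, T, 2001)
vals = np.array([phiB(T - s) * u_bar(s) for s in ts])
dt = ts[1] - ts[0]
xT = ((vals[:-1] + vals[1:]) * 0.5 * dt).sum(axis=0)
assert np.allclose(xT, x1, atol=1e-5)
```

The resulting control is the affine function ū(t) = 6 − 12t: a hard push at the start and a symmetric braking at the end, which is the least-energy profile among all admissible transfers.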

3.2 Observability of an LTV system

When using feedback control it is crucial to be able to determine the system's state based upon the observed system output. This is referred to as the observability problem.

Observability. Consider the LTV system

    ẋ(t) = A(t)x(t) + B(t)u(t),   x(t_0) = x_0,
    y(t) = C(t)x(t).                                                    (3.8)

Given an admissible input u(t) and the observed output function y(t), t ∈ [t_0, t_1], find the initial value x_0 at t = t_0.

First, we note that the output of the system (3.8) can be written as

    y(t) = C(t)Φ(t, t_0)x_0 + C(t)∫_{t_0}^t Φ(t, s)B(s)u(s)ds.          (3.9)

The last term in (3.9) depends only on the control u and can therefore be computed a priori for any u(t). To simplify the notation we set u(t) = 0 and consider the homogeneous system

    ẋ(t) = A(t)x(t),   x(t_0) = x_0,
    y(t) = C(t)x(t).                                                    (3.10)

The observability property can be reformulated as follows: given two states x_0′ and x_0″, under which conditions does there exist t ∈ [t_0, t_1] such that C(t)Φ(t, t_0)x_0′ ≠ C(t)Φ(t, t_0)x_0″? This question can be answered by analyzing the null space of the map x_0 ↦ C(t)Φ(t, t_0)x_0.

First, consider the following technical lemma:

Lemma. Let H(t) be an [m × n] matrix whose elements are continuous functions defined on the interval [t_0, t_1]. The null space of the mapping O: R^n → C([t_0, t_1], R^m) defined by O(x) = H(t)x coincides with the null space of the matrix

    M(t_0, t_1) = ∫_{t_0}^{t_1} H^T(s)H(s)ds.

Proof. If x ∈ N(M(t_0, t_1)), then

    x^T M(t_0, t_1)x = ∫_{t_0}^{t_1} x^T H^T(s)H(s)x ds = ∫_{t_0}^{t_1} ‖H(s)x‖²ds = 0,

whence H(t)x = 0 for all t ∈ [t_0, t_1]. Conversely, if H(t)x = 0 for all t ∈ [t_0, t_1], then M(t_0, t_1)x = ∫_{t_0}^{t_1} H^T(s)(H(s)x)ds = 0.

With this result, we can formulate a theorem which is closely related to the controllability theorem above.

Theorem. Consider the system (3.8). Let the matrix M(t_0, t_1) be defined by

    M(t_0, t_1) = ∫_{t_0}^{t_1} Φ^T(s, t_0)C^T(s)C(s)Φ(s, t_0)ds.

Then we have two cases:

1. M(t_0, t_1) is non-singular. Then any initial state x_0 can be determined uniquely from the observed output y(t).

2. M(t_0, t_1) has a non-zero null space. Then any two points x_0′ and x_0″ are indistinguishable if x_0′ − x_0″ ∈ N(M(t_0, t_1)).

Proof. We write the expression for y(t) in (3.10) and multiply it from the left by Φ^T(t, t_0)C^T(t) to get

    Φ^T(t, t_0)C^T(t)y(t) = Φ^T(t, t_0)C^T(t)C(t)Φ(t, t_0)x_0.

Integrating this from t_0 to t_1 yields

    ∫_{t_0}^{t_1} Φ^T(s, t_0)C^T(s)y(s)ds = M(t_0, t_1)x_0.

If M(t_0, t_1) is non-singular, we find x_0 as

    x_0 = M⁻¹(t_0, t_1) ∫_{t_0}^{t_1} Φ^T(s, t_0)C^T(s)y(s)ds.

To prove the second statement we suppose that x_0′ − x_0″ ∈ N(M(t_0, t_1)). Then

    ∫_{t_0}^{t_1} ‖y′(s) − y″(s)‖²ds = ∫_{t_0}^{t_1} ‖C(s)Φ(s, t_0)x_0′ − C(s)Φ(s, t_0)x_0″‖²ds
    = (x_0′ − x_0″)^T [∫_{t_0}^{t_1} Φ^T(s, t_0)C^T(s)C(s)Φ(s, t_0)ds](x_0′ − x_0″)
    = (x_0′ − x_0″)^T M(t_0, t_1)(x_0′ − x_0″) = 0,

whence y′(t) = y″(t) for all t ∈ [t_0, t_1].

The matrix M(t_0, t_1) is called the observability Grammian.

3.3 Duality principle

The duality principle formalizes our intuition about the similarity between the controllability and observability conditions for a linear system. Consider two systems:

    Σ:  ẋ(t) = A(t)x(t) + B(t)u(t),        Σ*:  ẋ(t) = −A^T(t)x(t) + C^T(t)u(t),
        y(t) = C(t)x(t),                         y(t) = B^T(t)x(t).

We have the following result:

Theorem (Duality). The system Σ is completely observable (controllable) iff the dual system Σ* is completely controllable (observable).

Proof. The proof is left to the reader as an exercise. (Hint: use the adjoint-equation property, item 4 of the lemma on the properties of the state transition matrix.)

3.4 Controllability of an LTI system

From now on we will consider LTI systems, that is, linear systems with constant coefficients:

    ẋ(t) = Ax(t) + Bu(t),   x(t_0) = x_0,
    y(t) = Cx(t),                                                       (3.11)

where A, B and C are matrices of appropriate dimensions. For the system (3.11) the controllability Grammian turns out to be

    W(t) = ∫_0^t e^{A(t−s)}BB^T e^{A^T(t−s)}ds.                         (3.12)

Exercise. Show that the transpose operation and the matrix exponentiation commute, i.e., (e^A)^T = e^{A^T}.

3.4.1 Kalman's controllability criterion

Theorem. The system (3.11) is completely controllable iff

    rank[B  AB  A²B  ...  A^{n−1}B] = n.                                (3.13)

Proof. (⇒): Let the system (3.11) be completely controllable. Assume that rank[B AB A²B ... A^{n−1}B] = m < n. Then there exists a non-zero vector c such that c^T[B AB A²B ... A^{n−1}B] = 0, which is equivalent to c^T B = 0, c^T AB = 0, ..., c^T A^{n−1}B = 0. By the Cayley-Hamilton theorem this implies that c^T A^k B = 0 for all k ≥ 0 and hence c^T e^{At}B = 0 for all t ≥ 0. This implies that c belongs to the null space of W(t) and so, since c ≠ 0, rank W(t) < n. Finally, this means that the system (3.11) is not completely controllable, which contradicts the assumption.

(⇐): Let rank[B AB A²B ... A^{n−1}B] = n. Assume that the system is not controllable, i.e., rank W(t) < n. Then there exists a non-zero vector c such that W(t)c = 0, which implies that c^T e^{At}B = 0. Expanding the matrix exponential we get c^T A^i B = 0 for all i ≥ 0. However, this implies that

    rank[B  AB  A²B  ...  A^{n−1}B] < n,

which contradicts the assumption.
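Kalman's criterion reduces controllability to a matrix-rank computation. A minimal sketch (the example pairs (A, B) are assumptions for illustration):

```python
import numpy as np

# Kalman controllability matrix K_C = [B, AB, ..., A^{n-1}B].
def controllability_matrix(A, B):
    n = A.shape[0]
    blocks, AkB = [], B
    for _ in range(n):
        blocks.append(AkB)
        AkB = A @ AkB
    return np.hstack(blocks)

# Double integrator with force input: controllable.
A1 = np.array([[0.0, 1.0], [0.0, 0.0]]); B1 = np.array([[0.0], [1.0]])
assert np.linalg.matrix_rank(controllability_matrix(A1, B1)) == 2

# Two decoupled stable modes with input entering only the first: uncontrollable.
A2 = np.diag([-1.0, -2.0]); B2 = np.array([[1.0], [0.0]])
assert np.linalg.matrix_rank(controllability_matrix(A2, B2)) == 1
```

In the second pair, span(B2) lies inside the A2-invariant subspace spanned by the first coordinate axis, which is exactly the geometric obstruction discussed next.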

The matrix K_C = [B AB A²B ... A^{n−1}B] is called the Kalman controllability matrix.

The following proposition provides some intuition for the controllability criterion formulated above. First, we recall that a linear subspace S ⊂ R^n is invariant under the action of A (A-invariant), where A is an [n × n] matrix, if x ∈ S ⇒ Ax ∈ S. We have the following:

Proposition. Let rank B = m < n. Then rank[B AB A²B ... A^{n−1}B] = n if and only if span(B) is not contained in any proper A-invariant subspace S ⊊ R^n.

Proof. The proof is left to the reader as an exercise.

Consider two examples.

Example. Let

    A = [ 0  0 ]   and   B = [ 1 ]
        [ 0  1 ]             [ 0 ].

The matrix exponential is e^{At} = diag(1, e^t) and the controllability Grammian is

    W(t) = ∫_0^t [ 1  0 ] dτ = [ t  0 ]
               [ 0  0 ]        [ 0  0 ].

Note that rank W(t) = 1 for any t > 0.

Example. Let

    A = [ 0  1 ]   and   B = [ 1 ]
        [−1  0 ]             [ 0 ].

The matrix exponential is

    e^{At} = [ cos(t)  sin(t) ]
             [−sin(t)  cos(t) ]

and the controllability Grammian is

    W(t) = ∫_0^t [ cos²(τ)        −cos(τ)sin(τ) ] dτ = (1/4) [ 2t + sin(2t)  −2sin²(t)    ]
                 [−cos(τ)sin(τ)   sin²(τ)       ]            [−2sin²(t)      2t − sin(2t) ].

One can easily check that det W(t) = (1/4)(t² − sin²(t)) > 0 for any t > 0.

The above illustrates the following important fact: for an LTI system, the property of being completely controllable does not depend on the considered time interval [0, T].

3.4.2 Decomposition of a non-controllable LTI system

This subsection continues developing the geometric intuition about the controllability property of an LTI system, which we briefly touched upon in the proposition above. Let us again consider an [n × n] matrix A and its invariant subspace S. Then there exists a complementary subspace S̄ such that S ⊕ S̄ = R^n. Let the system (3.11) fail to be completely controllable, i.e.,

    rank[B  AB  A²B  ...  A^{n−1}B] = m < n.

This means that the controllability matrix K_C has exactly m linearly independent columns. Consider the linear subspace generated by these columns: S = span(K_C). Then we have the following result.

Lemma. The subspace S = span(K_C) is A-invariant.

Proof. Consider AS = span(AK_C) = span[AB A²B A³B ... A^nB]. The columns of the first n − 1 blocks of the matrix AK_C belong to S. To show that the columns of A^nB also belong to S, we use the Cayley-Hamilton theorem and write A^nB as

    A^nB = −(a_n I + a_{n−1}A + ... + a_1 A^{n−1})B,

whence it follows that the columns of A^nB are linear combinations of vectors from S. Thus we have AS ⊆ S.

Let the vectors (v_1, v_2, ..., v_m) form a basis of S. Consider the matrix T = [v_1 v_2 ... v_m T̄], where T̄ is chosen such that T is non-singular.

Lemma. The matrix T defines a transformation such that the matrices A and B take the following form:

    A_C = T⁻¹AT = [ A_11  A_12 ],   B_C = T⁻¹B = [ B_1 ].               (3.14)
                  [ 0     A_22 ]                 [ 0   ]

Proof. First, we show that AT = TA_C. Let us rewrite this as follows:

    A[v_1 v_2 ... v_m  T̄] = [v_1 v_2 ... v_m  T̄][ A_11  A_12 ]
                                                 [ A_21  A_22 ].

The subspace spanned by the vectors v_i is A-invariant; thus each product Av_i can be written as a linear combination of the same vectors v_i. This implies that the matrix A_21 = 0. For B_C, we need to check that B = TB_C. This follows from the fact that span(B) ⊆ S.

3.4.3 Hautus controllability criterion

An alternative way to check the controllability of an LTI system is given by the Hautus controllability condition, as detailed below.

Theorem. The system (3.11) is completely controllable if and only if

    rank(sI − A, B) = n for all s ∈ C.                                  (3.15)

Proof. (⇒): Assume that there exists an s_0 ∈ C such that rank(s_0I − A, B) < n. Then there exists a non-zero vector c for which

    c^T(s_0I − A, B) = 0  ⇔  c^T A = s_0 c^T and c^T B = 0.

From the last equalities we have c^T A^k B = s_0^k c^T B = 0 for all k ≥ 1, which implies that c^T[B AB ... A^{n−1}B] = 0. Hence, (3.11) is not completely controllable.

(⇐): Assume that the system (3.11) is not completely controllable. This implies that the controllability subspace S = span(K_C) has dimension less than n, i.e., dim S < n. Let S^⊥ be the orthogonal complement of S, i.e., for any v ∈ S and w ∈ S^⊥ we have v^T w = 0. We can observe that S^⊥ is A^T-invariant, i.e., A^T S^⊥ ⊆ S^⊥, as follows from v^T(A^T w) = (Av)^T w = 0. The latter equality follows from the fact that S is A-invariant, whence Av ∈ S. It is known that every invariant subspace contains at least one eigenvector, i.e., there exist λ ∈ C and s ∈ S^⊥ such that A^T s = λs or, equivalently, s^T(A − λI) = 0. However, we also have s^T B = 0 for any s ∈ S^⊥. Thus (3.15) does not hold.

Remark. Note that the Hautus condition can be simplified even further by requiring that (3.15) hold only for all s ∈ Λ(A), as for s ∉ Λ(A) we have rank(sI − A) = n and (3.15) is satisfied trivially.

3.5 Observability of an LTI system

The observability conditions can be derived from the controllability conditions by using the duality principle. In particular, we have the following counterparts of the Kalman and Hautus controllability criteria discussed in the previous section.

Theorem (Kalman's observability criterion). The system (3.11) is completely observable iff

    rank [ C        ]
         [ CA       ] = n.                                              (3.16)
         [ ...      ]
         [ CA^{n−1} ]

Proof. We recall that the observability of (3.11) is equivalent to the controllability of the dual system

    ẋ(t) = −A^T x(t) + C^T u(t),
    y(t) = B^T x(t).

Thus we write the controllability condition

    rank[C^T  −A^T C^T  ...  (−A^T)^{n−1}C^T] = rank[C; CA; ...; CA^{n−1}] = n.

Note that the rank of a matrix is invariant under transposition and elementary row operations (row switching, multiplication of a row by a non-zero scalar, row addition).

The matrix

    K_O = [ C        ]
          [ CA       ]
          [ ...      ]
          [ CA^{n−1} ]

will be referred to as the Kalman observability matrix.
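By duality, observability is likewise a rank test. The sketch below builds K_O and also runs the Hautus test at the eigenvalues of A only, as justified by the remark above (the double-integrator example and the helper names are assumptions):

```python
import numpy as np

# K_O stacks C, CA, ..., CA^{n-1}.
def observability_matrix(A, C):
    n = A.shape[0]
    rows, CAk = [], C
    for _ in range(n):
        rows.append(CAk)
        CAk = CAk @ A
    return np.vstack(rows)

# Hautus test, checked only at s in Lambda(A) (sufficient by the remark).
def hautus_observable(A, C, tol=1e-9):
    n = A.shape[0]
    for s in np.linalg.eigvals(A):
        M = np.vstack([s * np.eye(n) - A, C])
        if np.linalg.matrix_rank(M, tol) < n:
            return False
    return True

A = np.array([[0.0, 1.0], [0.0, 0.0]])
C_pos = np.array([[1.0, 0.0]])   # measuring position: observable
C_vel = np.array([[0.0, 1.0]])   # measuring velocity only: not observable
assert np.linalg.matrix_rank(observability_matrix(A, C_pos)) == 2
assert np.linalg.matrix_rank(observability_matrix(A, C_vel)) == 1
assert hautus_observable(A, C_pos) and not hautus_observable(A, C_vel)
```

Measuring only the velocity loses the constant of integration (the initial position), which is exactly the one-dimensional null space detected by both tests.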

Theorem (Hautus observability criterion). The system (3.11) is completely observable if and only if

rank [sI - A; C] = n for all s ∈ ℂ.   (3.17)

Proof. Using the duality principle and the Hautus controllability criterion for the dual system we write

rank(sI - A^T, C^T) = rank [sI - A; C] = n for all s ∈ ℂ,   (3.18)

where in the first equality we transposed the matrix (which does not change the rank) and used the fact that s ranges over all of ℂ.

3.5.1 Decomposition of a non-observable LTI system

Similarly to the decomposition into controllable and uncontrollable subsystems, a non-observable linear time-invariant system can be decomposed into an observable and an unobservable subsystem, as stated below.

Consider a non-observable LTI system, i.e., the system (3.11) such that rank K_O = m < n. Let the row vectors w_1, w_2, ..., w_m form a basis of the row space of K_O. Consider a matrix T = [w_1; w_2; ...; w_m; T'],¹ where the block T' is chosen such that T is non-singular.

Lemma. The matrix T defines a transformation such that the matrices A and C take the following form:

A_O = T A T^{-1} = [A_11, 0; A_21, A_22],   C_O = C T^{-1} = [C_1  0].

Proof. To prove this lemma we first show that span(K_O^T) is A^T-invariant. Next, we show that A^T T^T = T^T (A_O)^T, which is equivalent to

A^T [w_1^T  w_2^T ... w_m^T  (T')^T] = [w_1^T  w_2^T ... w_m^T  (T')^T] [A_11^T, A_21^T; 0, A_22^T].

The rest is left to the reader as an exercise.

3.6 Canonical decomposition of an LTI control system

Consider a general linear control system (3.11) such that rank K_C = k_1 ≤ n and rank K_O = k_2 ≤ n. The vector space ℝ^n can be decomposed in two ways: as a direct sum of the controllable and uncontrollable subspaces, ℝ^n = C ⊕ C', where C = span(K_C), dim C = k_1, or as a direct sum of the observable and unobservable
¹ This slightly messy notation is the price that we have to pay for staying with columns and column spaces.

26 subspaces, R n = O O, where O = span(k O ), dim O = k 2. Note that some of these subspaces can be trivial. Without going into too much detail we formulate the following result. Theorem There exists a non-singular [n n]-matrix T such that A 11 0 A 13 0  = T 1 AT = A 21 A 22 A 23 A A 33 0, ˆB = T 1 B = 0 0 A 43 A 44 Ĉ = CT = [ C 1 0 C 3 0 ]. B 1 B 2 0 0, (3.19) The transformed system is composed of four subsystems: (A 11, B 1, C 1 ) controllable, observable; (A 22, B 2, 0) controllable, unobservable; (A 33, 0, C 3 ) uncontrollable, observable; (A 44, 0, 0) uncontrollable and unobservable. Note that these subsystems are not fully decoupled as the off-diagonal matrices A 13, A 21 etc. can be non-zero. However, when studying structural properties of the system we can neglect these coupling erms as they do not influence the controllability/observability properties. Note that the second block-column of  is all zero except A 22; the third row is all zero except A 33. Exercise Explain why the subsystem (A 44, 0, 0) is uncontrollable and unobservable despite A 24 0 and A

Chapter 4. Stability of LTI systems

4.1 Matrix norm and related inequalities

A vector space X is said to be normed if for any x ∈ X there exists a nonnegative number ||x||, which is called a norm and which satisfies the following conditions:
1. ||x|| ≥ 0, and ||x|| = 0 if and only if x = 0;
2. ||ax|| = |a| ||x|| for any x ∈ X and a ∈ ℝ;
3. ||x + y|| ≤ ||x|| + ||y|| (triangle inequality).

In the following we will consider X = ℝ^n and will use the Euclidean norm

||x|| = sqrt( x_1^2 + ... + x_n^2 ).

Note that all presented results remain valid if we consider X = ℂ^n, with the Euclidean norm replaced by the Hermitian one.

For any Q ∈ ℝ^{m×n}, the Euclidean norm induces a matrix norm (also known as the spectral norm)

||Q|| = max_{x ≠ 0} ||Qx|| / ||x|| = max_{||x|| = 1} ||Qx|| = max_{||x|| = 1} sqrt( x^T Q^T Q x ).

Consider the matrix Q^T Q. This matrix is symmetric and positive semi-definite, thus all its eigenvalues are real and nonnegative. The square roots of the respective eigenvalues are called the singular values of the matrix Q.

Lemma. If Q ∈ ℝ^{m×n}, then ||Q|| = ||Q^T|| = sqrt(λ_n), where λ_n is the largest eigenvalue of Q^T Q. Also, λ_n is equal to the largest eigenvalue of Q Q^T.

Proof. First, we consider the following optimization problem:

μ^2 = max_{x^T x = 1} x^T Q^T Q x.   (4.1)

The corresponding Lagrange function is L(x, λ) = x^T Q^T Q x + λ(1 - x^T x), and the necessary optimality conditions are

∂L/∂x = 2 Q^T Q x - 2λx = 0,   ∂L/∂λ = 1 - x^T x = 0.

The second equation shows that any extremal point x* satisfies x* ≠ 0. The first equation then implies that x* is an eigenvector of Q^T Q and λ the corresponding eigenvalue. Substituting x* into (4.1) we get

μ^2 = max_{x^T x = 1} x^T Q^T Q x = max_{(x*)^T x* = 1} λ (x*)^T x* = λ_n,

whence μ = sqrt(λ_n). Next, we observe that if λ ≠ 0 is an eigenvalue of Q^T Q then it is also an eigenvalue of Q Q^T: if Q^T Q x = λx, then Q Q^T (Qx) = λ(Qx). Thus we conclude that ||Q|| = ||Q^T||.

Lemma. If the matrix Q is block-diagonal, i.e., Q = diag(Q_1, ..., Q_k), then ||Q|| = max_{i=1,...,k} ||Q_i||.

Proof. We note that Q^T Q = diag(Q_1^T Q_1, ..., Q_k^T Q_k), and so

μ = max_{λ ∈ Λ(Q^T Q)} sqrt(λ) = max_{i=1,...,k} max_{λ_i ∈ Λ(Q_i^T Q_i)} sqrt(λ_i) = max_{i=1,...,k} ||Q_i||.

Basic properties of the induced matrix norm. If A and B are real matrices, then
1. ||A|| ≥ 0, and ||A|| = 0 if and only if A = 0;
2. ||aA|| = |a| ||A|| for any a ∈ ℝ;
3. ||A + B|| ≤ ||A|| + ||B||;
4. ||Ax|| ≤ ||A|| ||x||;
5. ||AB|| ≤ ||A|| ||B||;
6. ||A|| = ||A^T||.

To prove the 4th and the 5th properties we use the definition of the spectral norm:

||A|| = max_{x ≠ 0} ||Ax|| / ||x||   ⟹   ||Ax|| ≤ ||A|| ||x||,

||AB|| = max_{x ≠ 0} ||ABx|| / ||x|| = max_{x ≠ 0} (||ABx|| / ||Bx||) (||Bx|| / ||x||) ≤ max_{y ≠ 0} (||Ay|| / ||y||) max_{x ≠ 0} (||Bx|| / ||x||) = ||A|| ||B||,

where in the last step we denoted y = Bx (the first ratio is taken over those x with Bx ≠ 0).
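These identities are easy to verify numerically. A sketch, assuming NumPy is available (the matrix Q is a random illustrative example):

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 5))

# The spectral norm ||Q|| equals the largest singular value of Q,
# i.e. the square root of the largest eigenvalue of Q^T Q.
spec_norm = np.linalg.norm(Q, 2)                     # induced 2-norm
sigma_max = np.linalg.svd(Q, compute_uv=False)[0]    # largest singular value
lam_max = np.max(np.linalg.eigvalsh(Q.T @ Q))        # largest eigenvalue of Q^T Q

print(np.isclose(spec_norm, sigma_max))              # True
print(np.isclose(spec_norm, np.sqrt(lam_max)))       # True
print(np.isclose(spec_norm, np.linalg.norm(Q.T, 2))) # True: ||Q|| = ||Q^T||
```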

Matrix exponential and its estimates. Let A ∈ ℝ^{n×n} and consider the matrix exponential e^{At}. We have the following results.

Lemma. ||e^{At}|| ≤ e^{||A|| t} for all t ≥ 0.

Proof. Using the properties of the induced matrix norm we have

||e^{At}|| = || Σ_{i=0}^∞ (t^i / i!) A^i || ≤ Σ_{i=0}^∞ (t^i / i!) ||A||^i = e^{||A|| t}.

Theorem. Given an [n × n] real matrix A, let α = max_{λ ∈ Λ(A)} ℜ(λ). Then for any ε > 0 there exists γ(ε) ≥ 1 such that for any t ≥ 0 there holds

||e^{At}|| ≤ γ(ε) e^{(α+ε)t}.

Proof. The proof is based on the fact that any square matrix A can be transformed to a Jordan (block-diagonal) form by a similarity transformation: A = P J P^{-1}. It follows that e^{At} = P e^{Jt} P^{-1} and hence ||e^{At}|| ≤ ||P|| ||e^{Jt}|| ||P^{-1}||. The proof is concluded by estimating the norms ||e^{J_i t}||, where J_i are the elementary blocks of the Jordan matrix.

4.2 Stability of an LTI system

4.2.1 Basic notions

In this and the subsequent sections we consider the stability property of an uncontrolled LTI system. That is to say, we set u = 0 and consider the homogeneous system

ẋ(t) = A x(t),   x(0) = x_0.   (4.2)

Definition. The LTI system (4.2) is said to be uniformly stable if there exists a constant γ ≥ 1 such that for any x_0 the respective solution x(t, x_0) of (4.2) satisfies ||x(t, x_0)|| ≤ γ ||x_0|| for all t ≥ 0.

Definition. The LTI system (4.2) is said to be exponentially stable if there exist constants γ ≥ 1 and σ > 0 such that for any x_0 the respective solution x(t, x_0) of (4.2) satisfies ||x(t, x_0)|| ≤ γ e^{-σt} ||x_0|| for all t ≥ 0.

Theorem. System (4.2) is exponentially stable iff all eigenvalues of A have strictly negative real parts.
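The two norm estimates above, and the decay of e^{At} for a matrix with negative spectral abscissa, can be illustrated numerically. A sketch, assuming NumPy/SciPy are available; the matrix A is an illustrative example:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 10.0], [0.0, -2.0]])
alpha = max(np.linalg.eigvals(A).real)   # spectral abscissa; here -1 < 0

for t in [0.0, 0.5, 2.0, 10.0]:
    nrm = np.linalg.norm(expm(A * t), 2)
    # crude bound from the lemma: ||e^{At}|| <= e^{||A|| t}
    assert nrm <= np.exp(np.linalg.norm(A, 2) * t) + 1e-9

print(alpha < 0)                                  # True
print(np.linalg.norm(expm(A * 30.0), 2) < 1e-10)  # True: e^{At} decays
```

Note that the lemma's bound e^{||A|| t} grows even though e^{At} itself decays; the sharper estimate of the theorem, with exponent α + ε < 0, is what actually captures the decay.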

Proof. (⇒): Suppose that there exists an eigenvalue λ ∈ Λ(A) such that ℜ(λ) ≥ 0. Let v be a corresponding eigenvector; then the system has the particular solution x(t) = e^{λt} v. The norm of x(t) does not decrease to 0 with time, thus the system is not asymptotically stable.

(⇐): Suppose that all eigenvalues of (4.2) lie in the open left half-plane of the complex plane and let α < 0 be the real part of the rightmost eigenvalue. Let ε > 0 be a sufficiently small positive constant such that α + ε < 0. Then, according to the preceding theorem, we have ||e^{At}|| ≤ γ e^{(α+ε)t}, and so

||x(t, x_0)|| ≤ ||e^{At}|| ||x_0|| ≤ γ e^{(α+ε)t} ||x_0||.

Corollary. System (4.2) is exponentially stable iff all roots of the characteristic polynomial det(sI - A) = s^n + a_1 s^{n-1} + ... + a_{n-1} s + a_n have strictly negative real parts. Such polynomials are said to be Hurwitz stable.

4.2.2 * Some more about stability

Multiple eigenvalues. To warm up for this little excursion, the reader is asked to prove the following standard result.

Exercise. Show that if, for the matrix A,
1. λ is a real eigenvalue with eigenvector v, then there is a solution to the system (4.2) of the form x(t) = v e^{λt};
2. λ = a ± ib is a complex conjugate pair of eigenvalues with eigenvectors v = u ± iw (where u, w ∈ ℝ^n), then x_1(t) = e^{at}(u cos bt - w sin bt) and x_2(t) = e^{at}(u sin bt + w cos bt) are two linearly independent solutions.

We can thus observe that the real part of λ is the key ingredient in determining stability (as we already mentioned in the preceding section). However, there are some exceptions. When multiple eigenvalues exist and there are not enough linearly independent eigenvectors to span ℝ^n, the solutions behave like ||x(t)|| ~ t^k e^{ℜ(λ)t}, where k + 1 does not exceed the algebraic multiplicity of the root λ. This implies that a solution can have a large overshoot, but it will eventually decay if ℜ(λ) < 0. Note that an overshoot may occur even for simple eigenvalues.

Example. Consider the system (4.2) with the matrix

A = [-2, α; 0, -1].

Its eigenvalues are -1 and -2, and the respective eigenvectors are v_2 = (α, 1)^T and v_1 = (1, 0)^T. The eigenvalues are real and negative, hence the system is stable. However, taking x(0) = v_2 - α v_1 = (0, 1)^T, the first coordinate x_1(t) = α(e^{-t} - e^{-2t}) initially grows from zero to a maximum value of α/4 at t = ln 2. For sufficiently large α, the trajectory first moves far away from the fixed point before approaching it as t → ∞.

Finally, consider a matrix A that has one eigenvalue λ̄ with ℜ(λ̄) = 0 of multiplicity greater than 1. Two cases are possible:

1. The geometric multiplicity of λ̄ is less than its algebraic multiplicity, i.e., the eigenvectors associated with λ̄ span a linear subspace of dimension less than the algebraic multiplicity of λ̄. Then there exists a vector ṽ (called a generalized eigenvector) such that the respective solution contains a term of the form ṽ t e^{λ̄t} (depending on the difference between the algebraic and the geometric multiplicities there can be terms involving t^k with k > 1).

2. The geometric and the algebraic multiplicities of λ̄ coincide, that is, λ̄ is a semi-simple eigenvalue. Then the matrix A can be transformed to a diagonal one, and hence all corresponding solutions either are constant or oscillate with constant amplitude, depending on the imaginary part of λ̄.

The former case yields unbounded, hence unstable, solutions, while the latter one is uniformly stable (but obviously not exponentially stable).

Stability of time-variant systems. In this course we do not consider stability of time-variant systems. In general, LTV systems behave similarly to LTI ones, but some more caution is needed when defining the exponential stability property, as the following example demonstrates.

Example. Consider the system ẋ = -x e^{-t}. It satisfies the condition of the uniform stability definition with γ = 1. However, x(t) does not converge to 0 as t → ∞: the right-hand side decreases faster than exponentially, and hence x(t) converges to some finite non-zero value. To overcome this difficulty we may extend the definition of exponential stability by requiring that there exist constants γ_l < γ and σ_l > σ such that

γ_l e^{-σ_l t} ||x_0|| ≤ ||x(t; x_0)|| ≤ γ e^{-σt} ||x_0||.

The left inequality effectively prevents ||x(t; x_0)|| from decreasing faster than exponentially.

4.2.3 Lyapunov's criterion of asymptotic stability

Definition. A quadratic form v(x) = x^T V x, V = V^T, is said to be positive definite (positive semi-definite) if v(x) > 0 (respectively, v(x) ≥ 0) for any x ∈ ℝ^n \ {0}.

Since V = V^T, all eigenvalues of V are real. Furthermore, a quadratic form is positive definite if and only if all eigenvalues of V are strictly positive. This follows, for instance, from the following estimate:

λ_min(V) ||x||^2 ≤ v(x) ≤ λ_max(V) ||x||^2.

We have the following result.

Theorem. The system (4.2) is exponentially stable if and only if there exist two positive definite quadratic forms v(x) = x^T V x and w(x) = x^T W x such that the following identity holds along the solutions of (4.2):

(d/dt) v(x(t)) = -w(x(t)).   (4.3)

The quadratic form v(x) satisfying (4.3) is called a quadratic Lyapunov function.

Proof. Necessity: Let the system (4.2) be exponentially stable. We choose a positive definite quadratic form w(x) = x^T W x and show that there exists a positive definite quadratic form v(x) such that (4.3) holds. Integrating (4.3) from t = 0 to t = T we get

v(x(T, x_0)) - v(x(0, x_0)) = - ∫_0^T w(x(t, x_0)) dt.

Since the system is exponentially stable we have lim_{T→∞} x(T, x_0) = 0, and so lim_{T→∞} v(x(T, x_0)) = 0. Thus,

v(x_0) = ∫_0^∞ w(x(t, x_0)) dt = ∫_0^∞ x_0^T e^{A^T t} W e^{At} x_0 dt.

The last integral converges, as w(x(t, x_0)) ≤ λ_max(W) γ^2 e^{-2σt} ||x_0||^2 for t ≥ 0. Furthermore, this integral is a quadratic form in x_0 with

V = ∫_0^∞ e^{A^T t} W e^{At} dt.

Positive definiteness of V follows from the estimate x^T(t, x_0) W x(t, x_0) ≥ λ_min(W) ||x(t, x_0)||^2, which implies that

v(x_0) = ∫_0^∞ x^T(t, x_0) W x(t, x_0) dt ≥ λ_min(W) ∫_0^∞ ||x(t, x_0)||^2 dt > 0 for x_0 ≠ 0.

Sufficiency: Let σ > 0 be such that for any x ∈ ℝ^n we have 2σ v(x) ≤ w(x). In particular, we can choose σ = λ_min(W) / (2 λ_max(V)). Then we can write

(d/dt) v(x(t)) = -w(x(t)) ≤ -2σ v(x(t)),   t ≥ 0.

This inequality implies that v(x(t)) ≤ e^{-2σt} v(x_0), and hence

λ_min(V) ||x(t, x_0)||^2 ≤ v(x(t, x_0)) ≤ e^{-2σt} v(x_0) ≤ λ_max(V) e^{-2σt} ||x_0||^2.

The above can be rewritten to yield an exponential estimate on the solution of (4.2):

||x(t, x_0)|| ≤ γ e^{-σt} ||x_0||,

where γ = sqrt(λ_max(V) / λ_min(V)) and σ is as defined above.
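Along the solutions of (4.2) one has (d/dt)(x^T V x) = x^T (A^T V + V A) x, so condition (4.3) amounts to the linear matrix equation A^T V + V A = -W (this is equation (4.4) of the next subsection). A numerical sketch, assuming SciPy is available; A and W are illustrative:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # Hurwitz: eigenvalues -1 and -2
W = np.eye(2)

# scipy solves a X + X a^H = q; with a = A^T this reads A^T V + V A = -W.
V = solve_continuous_lyapunov(A.T, -W)

print(np.allclose(A.T @ V + V @ A, -W))    # True: (4.3) holds
print(np.all(np.linalg.eigvalsh(V) > 0))   # True: V is positive definite
```

Since W > 0 and A is Hurwitz, the computed V is positive definite, in agreement with the necessity part of the theorem.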

4.2.4 Algebraic Lyapunov matrix equation

Let v(x(t)) = x^T(t) V x(t) be a quadratic form. We write explicitly the derivative of v(x(t)) taken in virtue of (4.2):

(d/dt)(x^T V x) = ẋ^T V x + x^T V ẋ = x^T A^T V x + x^T V A x = x^T (A^T V + V A) x.

Checking condition (4.3) thus boils down to solving the following algebraic Lyapunov matrix equation:

A^T V + V A = -W.   (4.4)

Theorem. The algebraic Lyapunov matrix equation (4.4) has a unique solution iff the spectrum of the matrix A does not contain an eigenvalue s_0 such that -s_0 also belongs to the spectrum of A.

Proof. Necessity: Let there exist s_0 ∈ Λ(A) such that -s_0 ∈ Λ(A). Then there exist two non-zero row vectors b and c such that b A = s_0 b and c A = -s_0 c. This implies that the homogeneous equation A^T V + V A = 0 has a non-zero solution given by V = c^T b: indeed, A^T c^T b + c^T b A = (cA)^T b + c^T (bA) = -s_0 c^T b + s_0 c^T b = 0. But then the solution of (4.4), if one exists, cannot be unique: any solution can be shifted by a multiple of c^T b.

We will refer to the condition above (there is no s_0 ∈ ℂ such that both s_0 ∈ Λ(A) and -s_0 ∈ Λ(A)) as the Lyapunov condition.

Corollary. Let the matrix A satisfy the Lyapunov condition. Then for any symmetric W the unique solution of (4.4) is also a symmetric matrix.

Corollary. If the matrix W is positive definite and all eigenvalues of A lie in the open left half-plane of ℂ, then the corresponding solution V is also positive definite.

Lemma. Let the system (3.11) be observable (respectively, controllable). The matrix A has all its eigenvalues in the open left half-plane of ℂ if and only if the Lyapunov matrix equation

A^T V + V A = -C^T C (for the observable case), respectively A V + V A^T = -B B^T (for the controllable case),

has a unique positive definite solution V.

4.3 Hurwitz stable polynomials

In this section we will study characteristic polynomials

p(s) = det(sI - A) = s^n + a_1 s^{n-1} + ... + a_{n-1} s + a_n,   (4.5)

where s ∈ ℂ and a_i ∈ ℝ, i = 1, ..., n.

Definition. The polynomial p(s) is said to be Hurwitz stable if all roots of p(s) have strictly negative real parts.
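Hurwitz stability of a concrete polynomial can be tested numerically via the minor criterion of Sec. 4.3.2. The following is a sketch, assuming NumPy is available; the two test polynomials are illustrative:

```python
import numpy as np

def hurwitz_matrix(coeffs):
    """Hurwitz matrix of p(s) = a0*s^n + a1*s^(n-1) + ... + an,
    with coeffs = [a0, a1, ..., an]; entry H[i][j] = a_{2(j+1)-(i+1)}."""
    n = len(coeffs) - 1
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k = 2 * (j + 1) - (i + 1)
            if 0 <= k <= n:
                H[i, j] = coeffs[k]
    return H

def is_hurwitz(coeffs):
    """Check that all leading principal minors of H are positive."""
    H = hurwitz_matrix(coeffs)
    return all(np.linalg.det(H[:k, :k]) > 0 for k in range(1, H.shape[0] + 1))

# p(s) = s^3 + 3s^2 + 3s + 1 = (s+1)^3 is stable:
print(is_hurwitz([1.0, 3.0, 3.0, 1.0]))   # True
# p(s) = s^3 + s^2 + s + 10 has roots in the right half-plane:
print(is_hurwitz([1.0, 1.0, 1.0, 10.0]))  # False
```

The second polynomial illustrates that Stodola's condition (all coefficients positive) is necessary but not sufficient.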

4.3.1 Stodola's necessary condition

Theorem. Let the polynomial p(s) be Hurwitz stable. Then all its coefficients a_i are strictly positive.

4.3.2 Hurwitz stability criterion

Consider the polynomial (4.5). The [n × n] matrix

H = [a_1, a_3, a_5, ..., 0;
     a_0, a_2, a_4, ..., 0;
     0,   a_1, a_3, ..., 0;
     0,   a_0, a_2, ..., 0;
     ...
     0,   0,  ..., a_{n-2}, a_n],

with entries H_ij = a_{2j-i} (where a_0 = 1 and a_k = 0 for k < 0 or k > n), is called the Hurwitz matrix corresponding to the polynomial p.

Theorem. A real polynomial p is Hurwitz stable if and only if all the leading principal minors of the matrix H(p) are positive.

Example. Consider a polynomial of degree 3: p(s) = a_0 s^3 + a_1 s^2 + a_2 s + a_3. This polynomial is Hurwitz stable if

Δ_1(p) = a_1 > 0,
Δ_2(p) = det[a_1, a_3; a_0, a_2] = a_1 a_2 - a_0 a_3 > 0,
Δ_3(p) = det[a_1, a_3, a_5; a_0, a_2, a_4; 0, a_1, a_3] = a_3 Δ_2(p) - a_1 (a_1 a_4 - a_0 a_5) = a_3 Δ_2(p) > 0,

since a_4 = a_5 = 0 for a cubic.

4.4 Frequency domain stability criteria

Frequency-based stability analysis rests upon the use of the argument principle. Before we proceed to the main results, we note that, in contrast to classical complex analysis, where one uses the principal value, i.e., a multivalued function Arg(s) with values in (-π, π], we will employ a more general notion of argument. Namely, we will assume that along any parametrized curve a(ω) ∈ ℂ, ω ∈ (-∞, ∞), the argument arg(a(ω)) is a single-valued and continuous function of ω. Note also that in the following we will use the terms argument and phase (respectively, change of argument and phase shift) interchangeably.

Theorem (Cauchy). If f(s) is a meromorphic function defined on the closure of a simply connected open set S, and f(s) has no zeros or poles on the boundary ∂S, then

N - P = (1 / 2πi) ∮_{∂S} df / f = (1 / 2π) Δ_{∂S} arg[f(s)],

where N and P denote the number of zeros and poles of f(s) in the set S, with each zero and pole counted taking into account its multiplicity and order, respectively, and Δ_{∂S} arg[f(s)] is the net change of the argument of f(s) as s traverses the contour ∂S in the counterclockwise direction.

Since we wish to distinguish between the left and the right roots of the characteristic polynomial p(s), we will concentrate on the case where f(s) = p(s) and S is chosen to be the open left half-plane of the complex plane. The latter implies that the boundary ∂S coincides with the imaginary axis. For this case, the Cauchy theorem can be used to prove the following result.

Corollary. Let the polynomial p(s) be of degree n and have no zeros on the imaginary axis. Then the net change of the argument along the imaginary axis is

Δ_{-j∞}^{j∞} arg[p(s)] = (2ν - n)π,

where ν is the number of zeros of p(s) in the open left half-plane of ℂ.

Figure 4.1: Contour C encircling the left open half-plane of the complex plane as R → ∞.

Proof. Let ∂S be chosen as shown in Fig. 4.1. It consists of the imaginary axis and the arc of infinite radius connecting j∞ and -j∞. The total change of the argument along the closed curve ∂S = C is equal to 2πν, where ν is the number of zeros within C, i.e., the number of left roots of p(s). On the other hand, we have

Δ_{∂S} arg[p(s)] = Δ_{-j∞}^{j∞} arg[p(s)] + Δ_arc arg[p(s)],

where the first summand corresponds to the change of the argument along the imaginary axis, while the second one is computed along the infinite-radius arc. This implies

Δ_{-j∞}^{j∞} arg[p(s)] = 2πν - Δ_arc arg[p(s)].

When going along the arc from j∞ to -j∞, the change of the argument of p(s) is determined by the behavior of the leading term, which dominates the expression since the magnitude of the complex vector p(s) is infinite. Writing s = r e^{jφ}, we have s^n = r^n e^{jnφ}, and so a turn of s by π corresponds to a change of the argument of s^n by nπ. Substituting Δ_arc arg[p(s)] = nπ into the preceding expression we get the required result.
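The phase count can be checked numerically by evaluating p(jω) along a (truncated) imaginary axis and unwrapping the phase. A sketch, assuming NumPy is available; the root set is illustrative:

```python
import numpy as np

# Net phase change of p(j*omega) over omega in [0, inf) equals
# (2*nu - n) * pi/2, where nu is the number of left-half-plane roots.
roots = np.array([-1.0, -2.0, 3.0])     # nu = 2 left roots, n = 3
p = np.poly(roots)                      # p(s) = (s+1)(s+2)(s-3)

omega = np.linspace(0.0, 100.0, 200001) # large omega approximates infinity
phase = np.unwrap(np.angle(np.polyval(p, 1j * omega)))
delta = phase[-1] - phase[0]

nu, n = 2, 3
print(np.isclose(delta, (2 * nu - n) * np.pi / 2, atol=1e-3))  # True
```

Here the half-axis version of the statement (the lemma below, with increment (2ν - n)π/2) is used, since only ω ≥ 0 is sampled.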

Note that the argument principle can alternatively be formulated without resorting to the Cauchy theorem. We formulate it as a lemma and give a different proof.

Lemma. Let the polynomial p(s) be of degree n and have no zeros on the imaginary axis. Then the net change of the argument along the upper half of the imaginary axis is

Δ_0^{j∞} arg[p(s)] = (2ν - n) π/2,

where ν is the number of zeros of p(s) in the open left half-plane of ℂ.

Proof. Note that p(s) can be represented as a product of elementary polynomials:

p(s) = Π_{i=1}^{k} (s + λ_i) · Π_{j=k+1}^{k+l} (s + λ_j)(s + λ̄_j),

where k + 2l = n and λ̄_j denotes the complex conjugate of λ_j. The argument of p(s) is the sum of the arguments of the elementary factors. Thus it suffices to consider these factors as s changes from 0 to j∞ (note that we consider only the upper half of the imaginary axis). One can easily observe that each root with a negative real part contributes π/2 to the total change of the argument, while the contribution of any root with a positive real part equals -π/2 (the details are left to the reader as an exercise). Thus we have

Δ_0^{j∞} arg[p(s)] = ν · π/2 - (n - ν) · π/2 = (2ν - n) π/2.

Figure 4.2 illustrates the behavior of a real polynomial as s changes along the upper half of the imaginary axis. One can observe the different phase shifts introduced by left and right roots. Also, Fig. 4.2d illustrates the situation where there is a pair of complex conjugate roots on the imaginary axis.

Consider now the complex plane ℂ and an open set S ⊂ ℂ. The set S, its boundary ∂S, and the complement S^C = ℂ \ (S ∪ ∂S) form a partition of the complex plane, i.e., these three sets are pairwise disjoint and S ∪ ∂S ∪ S^C = ℂ.

Consider a family of polynomials p(λ, s) such that
1. p(λ, s) has a fixed degree n for any λ ∈ [0, 1];
2. p(λ, s) depends continuously on λ.

This implies that we can represent this family as a monic polynomial (note that p(λ, s) is of fixed degree)

p(λ, s) = s^n + a_1(λ) s^{n-1} + ... + a_{n-1}(λ) s + a_n(λ),

where a_i(λ), i = 1, ..., n, are continuous functions of λ on [0, 1].

Theorem (Boundary crossing theorem). Let p(0, s) have all its roots in S and let p(1, s) have at least one root in S^C. Then there exists at least one λ̄ ∈ (0, 1) such that

(a) Λ = {-1, -5, -10, -20}  (b) Λ = {1, 3, 5, 15}  (c) Λ = {-1, -2, 7, 10}  (d) Λ = {-1, -5, 7j, -7j}

Figure 4.2: Hodographs of a fifth degree polynomial with different sets of roots Λ. The magnitude of the complex vectors is plotted in logarithmic scale: r e^{jφ} ↦ log_10(r) e^{jφ}.

1. p(λ̄, s) has all its roots in S ∪ ∂S, and
2. p(λ̄, s) has at least one root on ∂S.

We return to the polynomial (4.5) and observe that it can be represented as a sum of two polynomials, referred to as its even and odd parts (in this section it is convenient to renumber the coefficients so that p(s) = a_n s^n + a_{n-1} s^{n-1} + ... + a_1 s + a_0):

p_even(s) = a_0 + a_2 s^2 + a_4 s^4 + ...,
p_odd(s) = a_1 s + a_3 s^3 + a_5 s^5 + ....   (4.6)

We substitute s with jω and define

p_e(ω) = p_even(jω) = a_0 - a_2 ω^2 + a_4 ω^4 - ...,
p_o(ω) = p_odd(jω) / (jω) = a_1 - a_3 ω^2 + a_5 ω^4 - ....   (4.7)

With this, we can evaluate (4.5) along the curve that coincides with the imaginary axis and is parametrized by ω:

p(ω) = a_n (jω)^n + a_{n-1} (jω)^{n-1} + ... + a_1 (jω) + a_0 = p_e(ω) + jω p_o(ω).   (4.8)

Note that p_e(ω) and p_o(ω) are both polynomials in ω^2, and thus their roots are symmetric with respect to the origin of ℂ. This implies that one can consider the roots corresponding to either the negative or the positive values of the parameter ω; we will choose the latter. Note also that the degree of p_e is either n-1 or n, depending on whether n is odd or even; respectively, the degree of p_o is equal to either n-1 or n-2.

Definition. A polynomial p(ω) of the form (4.8) satisfies the interlacing property if
1. the leading coefficients of p_e(ω) and p_o(ω) are of the same sign, and
2. all the roots of p_e(ω) and p_o(ω) are real and distinct, and the positive roots interlace in the following manner: 0 < ω_{e,1} < ω_{o,1} < ω_{e,2} < ω_{o,2} < ...

Remark. The definition can be alternatively formulated for p_even(s) and p_odd(s). In particular, the second condition would read: all the roots of p_even(s) and p_odd(s) are purely imaginary and distinct, and the roots lying on the upper half of the imaginary axis interlace in the following manner: j0 < jω_{e,1} < jω_{o,1} < jω_{e,2} < jω_{o,2} < ... Note that j0 is now a root of p_odd(s).

Theorem (Hermite-Biehler). A real polynomial p(s) of the form (4.5) is Hurwitz stable if and only if it satisfies the interlacing property.

Proof. We prove only necessity. Consider a real monic Hurwitz polynomial of degree n:

p(s) = s^n + a_{n-1} s^{n-1} + ... + a_1 s + a_0.

According to the Stodola necessary condition, all coefficients a_i of p(s) are positive; thus the first part of the definition is satisfied. Furthermore, we know from the lemma of Sec. 4.4 that the phase of a stable polynomial p(jω) strictly increases from 0 to nπ/2 as ω goes from 0 to ∞. Thus the vector p(jω) starts at (a_0, j0), a_0 > 0, for ω = 0 and rotates counterclockwise around the origin through nπ/2 radians while its amplitude grows with ω. In doing so, the vector draws a curve which crosses alternately the imaginary and the real axes, as shown, e.g., in Fig. 4.2a. An intersection with the imaginary axis corresponds to a root of p_e(ω), whereas an intersection with the real axis corresponds to a root of p_o(ω). Therefore, the values of ω at which p(jω) crosses either the imaginary or the real axis form the sequence 0 < ω_{e,1} < ω_{o,1} < ω_{e,2} < ω_{o,2} < ... This concludes the first part of the proof. For more details on the Hermite-Biehler theorem see [8] or [9].

Consider p_e(ω) and p_o(ω). These polynomials are expressed as functions of even powers of ω. Thus we can make the substitution ω^2 → σ, which results in polynomials p_e(σ) and p_o(σ) of half the degree. We can now easily observe that the polynomials p_e(ω) and p_o(ω) satisfy the interlacing property if and only if

1. their leading coefficients are of the same sign, and
2. all the roots of p_e(σ) and p_o(σ) are real and distinct, and interlace in the following manner: 0 < σ_{e,1} < σ_{o,1} < σ_{e,2} < σ_{o,2} < ...

Example. Consider the polynomial

p(s) = s^9 + 11 s^8 + 52 s^7 + 145 s^6 + 266 s^5 + 331 s^4 + 280 s^3 + 155 s^2 + 49 s + 6.

The respective even and odd parts are thus

p_even(s) = 11 s^8 + 145 s^6 + 331 s^4 + 155 s^2 + 6,   p_odd(s) = s^9 + 52 s^7 + 266 s^5 + 280 s^3 + 49 s.

Changing to ω we have

p_e(ω) = 11 ω^8 - 145 ω^6 + 331 ω^4 - 155 ω^2 + 6,   p_o(ω) = ω^8 - 52 ω^6 + 266 ω^4 - 280 ω^2 + 49,

and, finally,

p_e(σ) = 11 σ^4 - 145 σ^3 + 331 σ^2 - 155 σ + 6,   p_o(σ) = σ^4 - 52 σ^3 + 266 σ^2 - 280 σ + 49.

We can relatively easily determine the roots of p_e(σ) and p_o(σ), which are¹

σ_e = {10.42, 2.14, 0.58, 0.04} and σ_o = {46.40, 4.25, 1.14, 0.22}.

The roots are interlacing, and hence the polynomial p(s) is Hurwitz stable.

¹ Rounded off to the second decimal digit.
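The interlacing test of this example can be reproduced numerically. A sketch, assuming NumPy is available; the coefficient lists are the σ-polynomials of the example:

```python
import numpy as np

# sigma-substituted even and odd parts of p(s) from the example
pe = [11.0, -145.0, 331.0, -155.0, 6.0]   # p_e(sigma)
po = [1.0, -52.0, 266.0, -280.0, 49.0]    # p_o(sigma)

sig_e = np.sort(np.roots(pe).real)        # ascending positive roots
sig_o = np.sort(np.roots(po).real)

# check 0 < sig_e1 < sig_o1 < sig_e2 < sig_o2 < ...
interlaced = (np.all(sig_e > 0)
              and all(sig_e[k] < sig_o[k] for k in range(4))
              and all(sig_o[k] < sig_e[k + 1] for k in range(3)))
print(interlaced)  # True: p(s) is Hurwitz stable by Hermite-Biehler
```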

Chapter 5. Linear systems in frequency domain

5.1 Laplace transform

Consider a piecewise continuous function x(t), defined for all t ∈ [0, ∞) and such that x(t) = 0 for all t < 0. Let, furthermore, there exist a positive constant M and a real number a such that |x(t)| < M e^{at} for 0 ≤ t < ∞. The Laplace transform of x(t), t ∈ [0, ∞), is an analytic function X(s) defined by

X(s) = ∫_0^∞ x(t) e^{-st} dt,   (5.1)

provided the integral converges absolutely; here s is a complex (frequency) variable, s = σ + iω, with σ ∈ ℝ and ω ∈ ℝ. In the following, we will write X(s) = L{x(t)}.

The set of values of s for which the integral (5.1) converges absolutely is a half-plane G_ROC = {s ∈ ℂ | ℜ(s) > a} (respectively, ℜ(s) ≥ a), which is referred to as the region of convergence (ROC). In particular, if x(t) is integrable on [0, ∞), the region of convergence contains the closed right half-plane G_0 = {s ∈ ℂ | ℜ(s) ≥ 0}.

The inverse Laplace transform is given by the Fourier-Mellin integral:

x(t) = L^{-1}{X}(t) = (1 / 2πi) lim_{T→∞} ∫_{γ-iT}^{γ+iT} e^{st} X(s) ds,

where γ > a is a real number chosen so that the contour path of integration lies in the region of convergence of X(s).

The Laplace transform is very similar to the Fourier transform (it can be thought of as an analytic extension of the latter). While the Fourier transform of a function is a complex function of a real variable (frequency), the Laplace transform of a function is

a complex function of a complex variable. Laplace transforms are usually restricted to functions of t with t > 0. A consequence of this restriction is that the Laplace transform of a function is a holomorphic function of the variable s.

Lemma 5.1.1. The Laplace transform has the following properties:
1. L{αf(t) + βg(t)} = α L{f(t)} + β L{g(t)} (linearity);
2. L{(d/dt) f(t)} = s L{f(t)} - f(0) for any piecewise differentiable f(t);
3. L{∫_0^t f(τ) dτ} = (1/s) L{f(t)} = (1/s) F(s);
4. L{f(t - τ)} = e^{-τs} L{f(t)} = e^{-τs} F(s) for τ ≥ 0 (time shift);
5. F(s) G(s) = L{∫_0^t f(τ) g(t - τ) dτ} (convolution);
6. lim_{t→0+} f(t) = lim_{s→∞} s F(s) (initial value theorem);
7. lim_{t→∞} f(t) = lim_{s→0} s F(s) if all poles of s F(s) are in the open left half-plane (final value theorem).

Proof. We will prove only items 2 and 7; the remaining properties are proved similarly. For item 2, integration by parts gives

∫_0^∞ (d/dτ) f(τ) e^{-sτ} dτ = [f(τ) e^{-sτ}]_0^∞ + s ∫_0^∞ f(τ) e^{-sτ} dτ = -f(0) + s F(s).

To prove the final value theorem we take the limit of the above as s → 0:

lim_{s→0} [s F(s) - f(0)] = lim_{s→0} ∫_0^∞ (df(t)/dt) e^{-st} dt = ∫_0^∞ df(t) = f(∞) - f(0).

By canceling f(0) on both sides we arrive at f(∞) = lim_{s→0} [s F(s)].

5.2 Transfer matrices

The Laplace transform is particularly useful for analyzing linear dynamical systems, as will be illustrated below. Consider the LTI system (3.11). Assume that there is a pair (M, a) such that
1. the real parts of all eigenvalues of the system matrix A are less than a;
2. the admissible input signals u(t) satisfy |u(t)| ≤ M e^{at} for all t ≥ 0.

Then we can apply the Laplace transform to (3.11) to get (taking into account property 2 of Lemma 5.1.1):

s X(s) = x(0) + A X(s) + B U(s),
Y(s) = C X(s).

Eliminating X(s) we arrive at

Y(s) = C (sI - A)^{-1} B U(s) + C (sI - A)^{-1} x(0).

The first term is the Laplace transform of the system's output due to the input signal, and the second one describes the system's reaction to the initial conditions x(0). When considering the frequency domain representation of linear control systems, it is common to assume that the initial conditions are equal to zero, i.e., x(0) = 0. In this case we can restrict ourselves to considering the input/output behavior of the control system. This approach has certain advantages as well as some drawbacks, as we shall see below.

The matrix W(s) = C (sI - A)^{-1} B relating the Laplace transforms of the input and the output signals is referred to as the transfer matrix. We can easily see that if u(t) and y(t) are scalar signals, the transfer matrix reduces to a function, which is called the transfer function. A transfer function can be alternatively defined as the ratio of the Laplace transforms of the output and the input functions, W(s) = Y(s) / U(s).

5.2.1 Properties of a transfer matrix

Any entry of the transfer matrix W(s) is a rational function, given by a fraction of two polynomials in s: W_ij(s) = p_ij(s) / q_ij(s). We will say that W_ij(s) is the transfer function from the j-th component of the input signal to the i-th component of the output signal.

A transfer function W(s) = p(s) / q(s) is said to be proper (or causal) if deg(p(s)) ≤ deg(q(s)), and strictly proper (strictly causal) if this inequality is strict. Informally, a system is causal if the output depends only on present and past, but not future, inputs; it is strictly causal if the output depends only on past inputs.

Physically realizable transfer functions.

Lemma 5.2.1. A transfer matrix is invariant with respect to a linear change of variables.

Proof. Consider the system (3.11) and introduce the new variable z = T x, where T ∈ ℝ^{n×n} is non-singular. Then (3.11) turns into

ż = T A T^{-1} z + T B u,   y = C T^{-1} z.

The respective transfer matrix is thus

W̃(s) = C T^{-1} (sI - T A T^{-1})^{-1} T B = C (T^{-1} (sI - T A T^{-1}) T)^{-1} B = C (sI - A)^{-1} B = W(s).
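Both the pointwise evaluation of W(s) and its invariance under a change of variables are easy to verify numerically. A sketch, assuming NumPy is available; the system matrices are illustrative, with W(s) = 1/(s^2 + 3s + 2):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

def W(s):
    """Transfer matrix W(s) = C (sI - A)^(-1) B evaluated at one point s."""
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B)

# W(1) = 1 / (1 + 3 + 2) = 1/6:
print(np.isclose(W(1.0)[0, 0], 1.0 / 6.0))   # True

# Invariance under z = T x (Lemma 5.2.1):
T = np.array([[1.0, 1.0], [0.0, 1.0]])
At, Bt, Ct = T @ A @ np.linalg.inv(T), T @ B, C @ np.linalg.inv(T)
Wt = Ct @ np.linalg.solve(1.0 * np.eye(2) - At, Bt)
print(np.isclose(Wt[0, 0], W(1.0)[0, 0]))    # True
```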

We recall the result of the theorem of Sec. 3.6, which states that any controlled LTI system can be decomposed into four subsystems. We are interested in the first one, the controllable and observable subsystem (A_11, B_1, C_1).

Theorem. The transfer matrix W(s) of the LTI system (3.11) coincides with the transfer matrix of the controllable and observable subsystem of (3.11).

Proof. According to Lemma 5.2.1, the transfer matrix of the original system (3.11) and that of its canonical decomposition (3.19) coincide. We compute the transfer matrix of (3.19) explicitly (using the inverse of a block matrix):

W(s) = [C_1  0  C_3  0] [sI - A_11, 0, -A_13, 0; -A_21, sI - A_22, -A_23, -A_24; 0, 0, sI - A_33, 0; 0, 0, -A_43, sI - A_44]^{-1} [B_1; B_2; 0; 0] = C_1 (sI - A_11)^{-1} B_1.

5.3 Transfer functions

In this section we assume that both u(t) and y(t) are scalar signals. This allows us to concentrate on studying a single transfer function W(s) = Y(s) / U(s). However, all results can be easily extended to the case of vector-valued input/output functions. We begin with giving a physical interpretation of a transfer function.

5.3.1 Physical interpretation of a transfer function

5.3.2 Bode plot

5.4 BIBO stability

Part II. CONTROL SYSTEMS: SYNTHESIS

Chapter 6. Feedback control

6.1 Introduction

In the first part of these lecture notes we studied structural properties of linear control systems, many of which are related to control. However, we have not yet addressed the problem of control design (except for a little digression in Sec. 3.1). Now we will close this gap. To start with, we define two fundamental classes of controls.

Definition. A control u(t) is said to be open-loop if it depends only on the current time t, and closed-loop (feedback) if it depends both on the time t and on the current state x(t), i.e., u(t) = u(t, x(t)).

Open-loop controls are computed off-line and stored in a look-up table. As time goes on, the current value of the control is computed by interpolating between the stored values. The control ū(t) discussed in Sec. 3.1 is an example of an open-loop control. Closed-loop controls, in turn, are computed on-line according to a certain rule, which is referred to as the control law.

It may seem that feedback control is just a special class of open-loop control, as it reduces to a function of t alone when x(t) is considered as a function of t. However, there is a fundamental difference: in contrast to open-loop control, feedback control can alter the system's dynamics (for instance, an unstable system can be made stable using feedback, a result that cannot be achieved using open-loop control). Furthermore, feedback control systems are capable of attenuating the disturbances acting on the system.

A general feedback control system is shown in Fig. 6.1. It works as follows: the measured output y is transformed by the feedback controller Σ_F. The difference between the reference (or external input) signal v and the resulting signal γ is the discrepancy e = v - γ. The latter is transformed by the feedforward controller Σ_C to yield the control u, which is applied to the plant Σ, thus effectively closing the loop.

Two remarks are in order:

Figure 6.1: General structure of a feedback control system

1. Both the feedback and the feedforward controller can be either dynamic or static. In the second case, the respective controller reduces to a matrix of appropriate dimensions. In particular, any of these controllers can be omitted by setting it to an identity matrix.

2. The minus sign in the expression for the error signal reflects the common convention of considering negative feedback. However, the coefficients (gains) of the feedback controller can be either positive or negative, depending on the specific form of the control system Σ.

Note that there can be many variations of the scheme described above, depending on the problem. For instance, a feedback control system can be designed to serve as a part of another, more complex system. Thus, it is in general impossible to present a uniform characterization of the whole class of such systems. Instead, we shall concentrate on two particular cases.

6.1.1 Reference tracking control

Consider the scheme in Fig. 6.2. Here we directly prescribe how the system should behave by specifying the desired system output ŷ(t). The difference between the desired and the measured outputs, e(t) = ŷ(t) − y(t), is called the error. The error signal serves as the input for the controller Σ_C, which returns the control signal u. The control influences the system in order to minimize the absolute value of e(t), thus ensuring convergence of the actual output to the desired one. The reference signal ŷ(t) can be a constant. In this case we speak of set-point control.

Figure 6.2: A reference tracking control system
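Before moving on, the claim that feedback can alter the system's dynamics while open-loop control cannot is easy to illustrate numerically. The sketch below uses an assumed scalar example (not taken from the notes): the unstable plant ẋ = x + u is driven by an open-loop control precomputed for x(0) = 1 and, alternatively, by the static feedback u = −3x; only the feedback copes with a perturbed initial state.

```python
import math

# Unstable scalar plant x' = x + u (assumed illustrative example).
dt, T = 1e-3, 8.0
n = int(T / dt)

def simulate(x0, control):
    """Forward-Euler integration of x' = x + u(t, x)."""
    x = x0
    for k in range(n):
        x += dt * (x + control(k * dt, x))
    return x

# Open-loop control that drives x(0) = 1 along e^{-t}: u(t) = -2 e^{-t}.
u_open = lambda t, x: -2.0 * math.exp(-t)
# Static feedback u = -3x, giving the closed-loop dynamics x' = -2x.
u_fb = lambda t, x: -3.0 * x

x_open = simulate(1.1, u_open)  # initial state off by 10 percent
x_fb = simulate(1.1, u_fb)
print(abs(x_open) > 10.0)   # True: the open-loop error grows like e^t
print(abs(x_fb) < 1e-3)     # True: the feedback absorbs the perturbation
```

The open-loop signal is correct only for the nominal initial state; any deviation is amplified by the unchanged unstable dynamics, whereas the feedback shifts the dynamics themselves.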

6.1.2 Feedback transformation

The scheme shown in Fig. 6.3 demonstrates another approach to the use of feedback. The idea consists in regarding the control u as a sum of two components, u(t) = v(t) − Σ_F(y(·)), where v(t) is the new control and Σ_F(y(·)) is a correction term that is used to modify the dynamics of the closed-loop system. In general, Σ_F(·) can be a dynamical system, that is, Σ_F(y(·)) can depend on the previous values of the system's output y(t). However, in these notes we shall restrict ourselves to the static case.

Figure 6.3: A feedback transformation scheme

Consider our good old linear system

ẋ(t) = Ax(t) + Bu(t),
y(t) = Cx(t),    (6.1)

Let the control u(t) be defined as u(t) = v(t) − Fy(t), where F is the feedback (or feedback gain) matrix of appropriate dimensions. With this control, (6.1) turns into

ẋ(t) = (A − BFC)x(t) + Bv(t),
y(t) = Cx(t).    (6.2)

Now the system's dynamics is determined by a new matrix Ã = (A − BFC). Note that this has become possible because we (indirectly, through y(t)) fed the state back to the system's input. We already know that the matrix A (precisely, its eigenvalues) determines the stability of the system. Thus one may ask:

(Q1). Can one choose the matrix F to render an unstable system (6.1) stable?

It turns out that this is possible under certain conditions, as will be shown in the next section. One might go further and ask:

(Q2). Can one choose the matrix F to render an uncontrollable (unobservable) system (6.1) controllable (resp. observable)?

The answer to this question turns out to be negative, as the following example demonstrates.

Example. To check controllability of the closed-loop system (6.2) we employ the Hautus criterion. First we note that

(sI − A + BFC, B) · [I, 0; −FC, I] = (sI − A, B).

The last matrix is obviously non-singular, i.e., its rank is equal to (n + m). Using the inequality rank(AB) ≤ min(rank(A), rank(B)) in both directions (the factor is invertible), we conclude that for all s

rank(sI − A + BFC, B) = rank(sI − A, B) ≤ n.

Hence, an uncontrollable system cannot be made controllable by a static feedback. The same result can be obtained for the observability property. Note that the preceding analysis was performed independently of the choice of the new control v(t), which can be chosen using any available method.

We conclude this section by presenting a way to relate the two seemingly different approaches described above. Consider the scheme shown in Fig. 6.3. Let v(t) = 0, that is, we set the new control v(t) to zero. Instead of this, we introduce the reference signal ŷ(t) by subtracting it from the measured output y(t) (see Fig. 6.4). That is to say, we trick the feedback controller by feeding it with a biased signal y(t) − ŷ(t) instead of y(t).

Figure 6.4: A modification of the feedback transformation scheme

The control signal is thus u(t) = −F(y(t) − ŷ(t))¹ and the closed-loop system is

ẋ(t) = (A − BFC)x(t) + BFŷ(t),
y(t) = Cx(t),    (6.3)

(sI − A + BFC, BF) = (sI − A, B) · [I, 0; FC, F].    (6.4)

For any feedback matrix F, the last factor in (6.4) has rank at least n, and rank(AB) ≤ min(rank(A), rank(B)) yields rank(sI − A + BFC, BF) ≤ rank(sI − A, B) ≤ n. We conclude that the controllability property of the modified system (6.3) depends on the controllability of the original system (6.1), while the dynamics of the new system can be modified using a proper choice of F. Finally, by rearranging the components of the scheme in Fig. 6.4 we see that it turns into the reference tracking feedback control system as in Fig. 6.2.

¹ Note the sign!
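Both observations of this subsection, that a static output feedback moves the eigenvalues of A − BFC and that it cannot create controllability, can be checked numerically. A small sketch with assumed matrices (not taken from the notes):

```python
import numpy as np

# (Q1): feedback moves eigenvalues. Double integrator with position output.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
F = np.array([[2.0]])
print(np.linalg.eigvals(A))              # both eigenvalues at 0
print(np.linalg.eigvals(A - B @ F @ C))  # moved to +/- i*sqrt(2)

# (Q2): feedback cannot create controllability. An uncontrollable pair:
A2 = np.diag([-1.0, -2.0])
B2 = np.array([[1.0], [0.0]])
C2 = np.eye(2)

def ctrb_rank(A, B):
    """Rank of the controllability matrix [B, AB, ..., A^{n-1}B]."""
    n = A.shape[0]
    return np.linalg.matrix_rank(
        np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    )

rng = np.random.default_rng(0)
for _ in range(5):  # several random static output-feedback gains
    F2 = rng.normal(size=(1, 2))
    assert ctrb_rank(A2 - B2 @ F2 @ C2, B2) == ctrb_rank(A2, B2) == 1
print("controllability rank is preserved under static output feedback")
```

Note that in the first example the eigenvalues move but remain on the imaginary axis: feedback of the position alone cannot also inject damping here, which hints that output feedback is weaker than full state feedback.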

6.2 Pole placement procedure

Consider the following basic problem: is it always possible to choose a full-state negative feedback u = −Fx such that the closed-loop system ẋ = (A − BF)x possesses some desired properties (i.e., is stable, has a particular root locus, etc.)? The following theorem gives a (partially) positive answer to this question.

Theorem. For a pair (A, B), a matrix F ∈ R^{m×n} can be chosen such that the eigenvalues of (A − BF) are arbitrarily assigned if and only if the pair (A, B) is completely controllable.

Proof. (Necessity): Let the pair (A, B) be not completely controllable. This implies that there exists a similarity transformation T such that the system's matrices take the form (3.14). Applying T to (A − BF) and writing FT = (F̄₁, F̄₂), we get

T⁻¹(A − BF)T = [A₁₁ − B₁F̄₁, A₁₂ − B₁F̄₂; 0, A₂₂].

The eigenvalues of the matrix A₂₂ cannot be influenced by any choice of F and hence cannot be assigned.

(Sufficiency): TBA...

Let us now state the pole placement problem formally.

Problem. Given a pair (A, B), let Λ_d = {λ₁, ..., λₙ} be the desired spectrum of the closed-loop system such that λᵢ ∈ Λ_d implies λ̄ᵢ ∈ Λ_d. Determine F ∈ R^{m×n} such that Λ(A − BF) = Λ_d.

Note that the condition on λᵢ implies that complex eigenvalues enter the desired spectrum only in conjugate pairs. This follows from the fact that (A − BF) is a real-valued matrix.

The chosen set of eigenvalues uniquely determines the respective characteristic polynomial p_d(s) = ∏_{i=1}^n (s − λᵢ). Thus the first, direct approach would be to write the characteristic polynomial of (A − BF), equate it to p_d(s), and use the method of undetermined coefficients to compute the components of the feedback matrix F = {f_ij}, i = 1, ..., m, j = 1, ..., n. In general, such an approach requires solving a system of n algebraic equations w.r.t. mn variables f_ij. This is not only a tedious task, but may also lead to an ill-posed numerical problem. Therefore, it is in general not advised to use this approach except for the low-dimensional cases (n, m ≤ 3).
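For a low-dimensional case the direct method of undetermined coefficients is easy to carry out by hand. A hypothetical 2×2 single-input example (all values assumed for illustration):

```python
import numpy as np

# Assumed single-input pair with an unstable characteristic polynomial.
A = np.array([[0.0, 1.0], [2.0, 3.0]])   # char. poly: s^2 - 3s - 2
B = np.array([[0.0], [1.0]])

# Desired spectrum {-1, -2}, i.e. p_d(s) = s^2 + 3s + 2.
# For F = [f1, f2], A - B F = [[0, 1], [2 - f1, 3 - f2]] has characteristic
# polynomial s^2 - (3 - f2) s - (2 - f1); matching coefficients gives:
f2 = 3.0 + 3.0   # from -(3 - f2) = 3
f1 = 2.0 + 2.0   # from -(2 - f1) = 2
F = np.array([[f1, f2]])

print(np.sort(np.linalg.eigvals(A - B @ F).real))  # [-2. -1.]
```

Two equations in two unknowns here; for general m inputs the same matching produces n equations in mn unknowns, which is why the approach does not scale.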
An alternative approach is based upon transforming the matrices (A, B) to the Frobenius normal form, as described in Sec. A.2.1. We present this approach in the form of an algorithm.
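As a preview, this transformation-based procedure can be sketched in code. The example pair and the desired spectrum below are assumed for illustration; the transformation matrix P is built in the standard way from the last row of the inverse controllability matrix.

```python
import numpy as np

# Assumed controllable single-input pair and desired closed-loop spectrum.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, -2.0, 0.5]])
B = np.array([[0.0], [1.0], [0.5]])
desired = [-1.0, -2.0, -3.0]

n = A.shape[0]
Ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
q = np.linalg.inv(Ctrb)[-1, :]      # last row of the inverse ctrb matrix
P = np.vstack([q @ np.linalg.matrix_power(A, k) for k in range(n)])
# P satisfies P A P^{-1} = A_F (companion form) and P B = [0 ... 0 1]^T.

a = np.poly(A)           # [1, a_1, ..., a_n], char. polynomial of A
a_d = np.poly(desired)   # [1, a_1^d, ..., a_n^d], desired polynomial
F_tilde = (a_d[1:] - a[1:])[::-1]   # f~_{1+i} = a^d_{n-i} - a_{n-i}
F = (F_tilde @ P).reshape(1, -1)    # back to x-coordinates: F = F~ P

print(np.sort(np.linalg.eigvals(A - B @ F).real))  # approx [-3, -2, -1]
```

The computation in the companion coordinates reduces to a coefficient-wise subtraction of polynomials, which is the whole point of Steps 1 to 4.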

Step 1. Compute the transformation matrix P such that PAP⁻¹ = A_F and PB = [0 ... 0 1]ᵀ.

Step 2. Perform the linear transformation of coordinates z = Px. In the new coordinates the closed-loop system ẋ = (A − BF)x takes the form

ż = (A_F − B_F F̃)z, F̃ = [f̃₁ f̃₂ ... f̃ₙ],

where A_F has ones on the superdiagonal and the last row [−aₙ, −a_{n−1}, ..., −a₁], while B_F F̃ is zero except for its last row [f̃₁, f̃₂, ..., f̃ₙ]. Hence the last row of A_F − B_F F̃ is

[−aₙ − f̃₁, −a_{n−1} − f̃₂, ..., −a₁ − f̃ₙ].

Step 3. Let a^d_i, i = 1, ..., n, denote the coefficients of the desired polynomial p_d(s). Then we choose the coefficients of F̃ such that the following relations hold: a_{n−i} + f̃_{1+i} = a^d_{n−i} for i = 0, ..., n−1. Thus, we have

f̃_{1+i} = a^d_{n−i} − a_{n−i}, i = 0, ..., n−1.

Step 4. The coefficients f̃ᵢ were computed for z; now we need to perform the inverse transformation to return to our initial state x: F = F̃P.

6.3 Linear-quadratic regulator (LQR)

Before proceeding to the LQR, we will make a short excursus into optimal control theory.

6.3.1 Optimal control basics

When determining the feedback control law it is often desirable to find a control law that is optimal in some sense. To make the idea of optimality more concrete, we need to define a criterion that measures the performance (or quality, productivity, etc.) of the control system. The control law which minimizes this criterion while satisfying some additional constraints is said to be optimal. A typical control problem consists of the following ingredients:

1. Time interval [t₀, T], where the terminal time T > t₀ can be either finite or infinite. One can also consider T to be a decision variable. In this case we speak of a free-time optimal control problem.

2. Cost functional, defined as an integral

J(x, u) = ∫_{t₀}^{T} f₀(x(τ), u(τ), τ) dτ + F(x(T)),    (6.5)

where f₀ is a sufficiently regular function that may be required to satisfy some additional restrictions (convexity, non-negativity, etc.) and F(x(T)) is the terminal cost, which is to be evaluated at the final point x(T). The terminal cost can be used to describe soft restrictions on the terminal point. Say, we can set F(x) = (x − x̄)², thus requiring the terminal state x(T) to be as close as possible to x̄. If we considered a free-time optimal control problem, J in (6.5) would also depend on T, i.e., we would have J(x, u, T).

3. State and control constraints. For instance, one may require that the control signal does not exceed u_max in absolute value, i.e., −u_max ≤ u ≤ u_max. Similar constraints can be imposed on the state x. In general, this can be written as (u, x) ∈ Ω(x, u), where the set of admissible control and state values may depend on x and u (mixed constraints).

4. Initial and final conditions. It is common to consider the initial state x(t₀) (as well as the initial time t₀) to be fixed, while imposing different constraints on the terminal state x(T). It can be left unspecified (free end point), be restricted to a certain terminal manifold, x(T) ∈ T (variable end point), or be fixed (fixed end point).

Furthermore, there are two types of optimal control:

1. Open loop. In this case, the optimal control u° depends only on time and the initial position x₀², and is denoted by u°(t, x₀). Given an initial state x₀, the open-loop optimal control is computed off-line and stored in the form of a look-up table. One obvious drawback of open-loop optimal control is that it does not take into account any deviations from the prescribed (optimal) trajectory that may result from disturbances acting on the system, model uncertainties, etc.

2. Closed loop (feedback). A closed-loop optimal control is a function of the current state and (possibly) of the time, denoted by u*(t, x).
The feedback optimal control is computed on-line according to the optimal feedback law³. Note that the open-loop optimal control can be seen as a particular case of feedback optimal control, computed along the trajectory starting at t = 0 from x₀:

u°(t, x₀) = u*(t, x*(0, x₀; t)),

where x*(0, x₀; t) is the trajectory corresponding to the optimal control u* and starting from x(0) = x₀.

There are two specific techniques that can be used when computing optimal controls: Pontryagin's Maximum Principle for computing open-loop optimal control and Bellman's Dynamic Programming for computing feedback optimal control. In these notes we will consider the latter. For more details on the Pontryagin Maximum Principle see, e.g., [6, 4], as well as the original treatment of the subject by Pontryagin, Boltyanski, Gamkrelidze and Mischenko in [7].

² The latter dependence is often dropped as x₀ is assumed to be fixed. One thus writes u°(t).
³ Which is obtained as a solution to some partial differential equation, as will be elaborated below. Note that this solution may be either analytic or numerical. In the latter case, one would need to store the parameters of the optimal control law in the form of a look-up table.

6.3.2 Dynamic programming

We start off by considering the following (sufficiently general) optimal control problem. Let the system's dynamics be given by

ẋ(t) = f(x(t), u(t), t), x(t₀) = x₀,    (6.6)

where f : X × U × [t₀, ∞) → Rⁿ is a sufficiently smooth vector-valued function, X ⊆ Rⁿ is the set of admissible states, and U is the set of admissible control values. The cost functional to be minimized⁴ is

J(x, u) = ∫_{t₀}^{T} f₀(x(τ), u(τ)) dτ + F(x(T)).    (6.7)

Furthermore, we drop the hard constraint on the terminal state by letting x(T) be freely chosen and retain only the soft one expressed by the terminal cost function F(x(T)).

The solution of the described optimal control problem relies on the following general principle that was formulated by R. Bellman and is thus referred to as Bellman's Optimality Principle, [2]: "An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision."

Let W(s, x_s) be the minimal value of the cost functional (6.7) when starting at time s from the state x(s) = x_s:

W(s, x_s) = min_{u(τ), τ∈[s,T)} [ ∫_s^T f₀(x(τ), u(τ)) dτ + F(x(T)) ].    (6.8)

Note that W(s, x_s) is defined for any admissible pair (s, x_s). Obviously, the solution x(τ) in (6.8) is understood to satisfy (6.6) with x(s) = x_s. Let σ > 0 be a sufficiently small parameter. Using the additivity property of the integral we can rewrite (6.8) as

W(s, x_s) = min_{u(t), t∈[s,s+σ)} [ ∫_s^{s+σ} f₀(x(τ), u(τ)) dτ + min_{u(t), t∈[s+σ,T)} [ ∫_{s+σ}^T f₀(x(τ), u(τ)) dτ + F(x(T)) ] ].    (6.9)

⁴ The maximization problem can be addressed in the same way with a couple of obvious modifications.

We observe that the last two terms in (6.9) are W(s + σ, x_{s+σ}):

W(s, x_s) = min_{u(t), t∈[s,s+σ)} [ ∫_s^{s+σ} f₀(x(τ), u(τ)) dτ + W(s + σ, x_{s+σ}) ].    (6.10)

We note that W(s + σ, x_{s+σ}) depends on u(τ), τ ∈ [s, s+σ), as the control u(t) over this small interval influences the value of x(t) at t = s + σ. But W(s, x_s), in turn, does not depend on u, as it was already minimized over the whole interval [s, T). This is why we can bring W(s + σ, x_{s+σ}) into the square brackets. Next, dividing the resulting expression by σ and passing to the limit (σ → 0) we get:

0 = min_{u(s)∈U} [ f₀(x(s), u(s)) + d/dt W(t, x_t)|_{t=s} ],    (6.11)

which is referred to as the Hamilton-Jacobi-Bellman (HJB) equation.

Theorem. Let W(t, x) be a continuously differentiable function such that

0 = min_{u∈U} [ f₀(x, u) + ⟨∇ₓW(t, x), f(x, u)⟩ + ∂W(t, x)/∂t ]    (6.12)

holds for any admissible pair (t, x), and W(t, x) satisfies the boundary condition W(T, x) = F(x), x ∈ X. Let there also be a function u*(t, x), piecewise continuous in t, that minimizes the expression in square brackets in (6.12) for all t and x. Then W(t, x) coincides with the Bellman function (6.8) and the function u*(t, x) is the optimal feedback control law.

Hence, the algorithm for determining the optimal feedback control can be described as follows:

Step 1. Write the HJB equation (6.12).

Step 2. Determine the minimizing control u as a function of t, x, and ∇ₓW(t, x): u = ū(t, x, ∇ₓW(t, x)).

Step 3. Substitute ū(t, x, ∇ₓW(t, x)) into (6.12) to get a first-order PDE:

0 = f₀(x, ū(t, x, ∇ₓW)) + ⟨∇ₓW, f(x, ū(t, x, ∇ₓW))⟩ + ∂W(t, x)/∂t.    (6.13)

Step 4. Find a continuously differentiable solution to (6.13) satisfying the boundary condition W(T, x) = F(x) (or W(T, x) = 0 if there is no terminal cost).

Step 5. Compute ∇ₓW(t, x) and substitute the result into ū(t, x, ∇ₓW(t, x)) to get the feedback optimal control u*(t, x) = ū(t, x, ∇ₓW(t, x)).
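Steps 1 to 5 can be traced on a simple example. Take the assumed scalar problem ẋ = u with running cost f₀ = x² + u² and no terminal cost on an infinite horizon, so that W does not depend on t. The ansatz W(x) = px² turns the HJB equation into an algebraic condition on p:

```python
# Steps 1-2: for W(x) = p x^2 the bracket in (6.12) is x^2 + u^2 + 2 p x u,
# which is minimized at u_bar = -p x.
# Steps 3-4: substituting back gives 0 = x^2 (1 - p^2), so p = 1
# (the positive root).  Step 5: the optimal feedback law is u*(x) = -x.
p = 1.0
u_star = lambda x: -p * x

for x in (-2.0, 0.5, 3.0):
    bracket = lambda u: x * x + u * u + 2.0 * p * x * u
    # The HJB residual vanishes at u*(x) and is positive elsewhere:
    assert abs(bracket(u_star(x))) < 1e-12
    assert bracket(u_star(x) + 0.3) > 0.0
print("W(x) = x^2 and u*(x) = -x satisfy the HJB equation")
```

Here the PDE degenerates into a quadratic equation for p, which is precisely the simplification exploited in the linear-quadratic case below.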

Note that the main pitfall is that the PDE (6.13) does not always possess a continuously differentiable solution. In most cases it is impossible to decide whether such a solution exists. It is therefore common to drop the requirement that W(t, x) be continuously differentiable and to consider solutions of (6.13) in some weak sense. This leads to the so-called viscosity and minimax solutions. An alternative approach is to discretize (6.12) and solve it numerically. This results in a sufficiently smooth approximation of the Bellman function along with its first-order derivatives, which are then used to compute the feedback control law. However, we will not dwell upon these topics and pass on to the linear case.

6.3.3 Linear-quadratic optimal control problem

Finally, we have arrived at our LQR problem. It is defined as follows: Find u*(t, x) such that

J(t, x, u*) = min ∫_t^T [ xᵀ(τ)Qx(τ) + uᵀ(τ)Ru(τ) ] dτ,    (6.14)

where Q and R are positive-definite symmetric matrices of appropriate dimensions and x(t) evolves according to

ẋ = Ax + Bu.    (6.15)

The HJB equation for the problem (6.14), (6.15) is

0 = min_{u∈U} [ xᵀQx + uᵀRu + ⟨∇ₓW(t, x), Ax + Bu⟩ + ∂W(t, x)/∂t ].    (6.16)

Differentiating the expression in the square brackets w.r.t. u we get⁵

2uᵀR + ∇ₓW(t, x)B,

whence, using the first-order extremality condition,

u* = −(1/2) R⁻¹Bᵀ∇ₓᵀW(t, x).    (6.17)

One can easily check that u* does indeed minimize that expression, as the respective Hessian is equal to 2R, which is positive definite. Thus the expression to be minimized is convex and the minimum is achieved.

Remark. Note, however, that a little caution is necessary, because the obtained optimal control can happen to be outside the set of admissible control values U!

Substituting u* into (6.16) we get

∂W(t, x)/∂t = −xᵀQx + (1/4) ∇ₓW(t, x) BR⁻¹Bᵀ ∇ₓᵀW(t, x) − ∇ₓW(t, x) Ax.    (6.18)

The PDE (6.18) is still rather difficult to deal with.
However, it is known that the Bellman function W(t, x) for linear-quadratic optimization problems has a particularly simple structure.

⁵ Here and later on we assume that the gradient is a row vector.

Theorem. The Bellman function in the linear-quadratic optimization problem (6.14), (6.15) is a quadratic form W(t, x) = xᵀP(t)x, where P(t) is a time-dependent, positive-definite symmetric matrix such that P(T) = 0.

Now we can rewrite (6.18) taking into account that ∇ₓW(t, x) = 2xᵀP(t):

xᵀṖ(t)x = −xᵀQx + xᵀP(t)BR⁻¹BᵀP(t)x − xᵀP(t)Ax − xᵀAᵀP(t)x,    (6.19)

which yields the following matrix differential equation, called the differential Riccati equation (DRE):

Ṗ(t) = P(t)BR⁻¹BᵀP(t) − P(t)A − AᵀP(t) − Q, P(T) = 0.    (6.20)

Note that this equation is to be solved backwards in time, which is in accordance with the Bellman optimality principle. The result described above can be further simplified using the following intuitive result.

Theorem. If the LQ optimization problem (6.14), (6.15) is defined on a semi-infinite interval, with T = ∞, the Bellman function does not depend on t and can be written as W(x) = xᵀPx, where P is a positive-definite symmetric matrix. That is to say, if we aim at minimizing the cost functional

J(t, x, u*) = min ∫_t^∞ [ xᵀ(τ)Qx(τ) + uᵀ(τ)Ru(τ) ] dτ,    (6.21)

the problem reduces to solving an algebraic Riccati equation (ARE):

0 = PBR⁻¹BᵀP − PA − AᵀP − Q.    (6.22)

The optimal feedback control is thus a linear function of the system's state:

u*(x) = −R⁻¹BᵀPx    (6.23)

and K = R⁻¹BᵀP is said to be the optimal feedback gain matrix.

Remark. Note that an ARE has two solutions: a positive-definite and a negative-definite one. More discussion needed...
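Both Riccati equations are straightforward to handle numerically. The sketch below uses assumed example data (a library routine such as scipy.linalg.solve_continuous_are would do the same job): it first integrates a scalar DRE (6.20) backward and checks that P(0) approaches the positive root of the ARE as the horizon grows, then solves a matrix ARE by the standard Hamiltonian invariant-subspace method and forms the gain K = R⁻¹BᵀP.

```python
import numpy as np

# --- Scalar DRE (6.20): P' = P^2 b^2/r - 2aP - q, P(T) = 0 (assumed data).
a, b, q, r = 1.0, 1.0, 1.0, 1.0
T, N = 10.0, 100_000
dt = T / N
P = 0.0
for _ in range(N):                    # march backward from t = T to t = 0
    P -= dt * (P * P * b * b / r - 2.0 * a * P - q)
# For a long horizon, P(0) approaches the positive ARE root
# a + sqrt(a^2 + q b^2 / r) = 1 + sqrt(2):
assert abs(P - (1.0 + 2.0 ** 0.5)) < 1e-3

# --- Matrix ARE (6.22) via the Hamiltonian matrix (double integrator).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
Rinv = np.linalg.inv(R)
H = np.block([[A, -B @ Rinv @ B.T], [-Q, -A.T]])
w, V = np.linalg.eig(H)
X = V[:, w.real < 0]                  # basis of the stable invariant subspace
Pm = np.real(X[2:] @ np.linalg.inv(X[:2]))

K = Rinv @ B.T @ Pm                   # optimal gain, u*(x) = -K x
print(K)                              # approx [[1, sqrt(3)]]
print(np.linalg.eigvals(A - B @ K))   # closed loop is Hurwitz
```

The stable invariant subspace of H yields the stabilizing (positive-definite) solution of the ARE, which is the one singled out in the remark above.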

Chapter 7

State observers

7.1 Full state observer

For many practical problems, the full state vector of the system cannot be measured directly, even though the system is observable: only the output is available. However, it is often desirable to know the state of the system. Such information can be used to design a linear-quadratic regulator (Sec. 6.3) or to apply the pole placement procedure (Sec. 6.2), just to mention a few applications. It turns out that if a system is observable, it is possible to fully reconstruct the system's state from its output measurements using an auxiliary system, which is called the state observer.

The idea, first proposed by David Luenberger in his doctoral thesis in 1962, is quite simple. Since the structure of the system is known, we can easily implement a numerical model of the system. Applying to the model the same input signal that is applied to the physical system, we get the output ŷ(t), which differs from the actual output y(t), the deviation being determined by the difference in the initial states of the system and the model. The difference y − ŷ can be used to compute a correction to the input signal applied to the model, as shown in Fig. 7.1.

Figure 7.1: A state observer.
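A minimal simulation of the scheme in Fig. 7.1 (all numbers assumed for illustration): the observer x̂' = Ax̂ + Bu + L(y − Cx̂) receives the same input as the plant plus the output-mismatch correction, and its estimation error decays whenever A − LC is Hurwitz.

```python
import numpy as np

# Assumed plant and an ad hoc observer gain L making A - LC Hurwitz.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[3.0], [1.0]])

assert all(np.linalg.eigvals(A - L @ C).real < 0)

dt, steps = 1e-3, 8000
x = np.array([[1.0], [-1.0]])    # true state, unknown to the observer
xh = np.zeros((2, 1))            # observer starts from a wrong guess
for k in range(steps):
    u = np.array([[np.sin(1e-3 * k)]])  # same input drives plant and model
    y = C @ x                           # measured output
    x = x + dt * (A @ x + B @ u)                        # plant step
    xh = xh + dt * (A @ xh + B @ u + L @ (y - C @ xh))  # observer step

print(np.linalg.norm(x - xh))    # estimation error has decayed to ~0
```

The error e = x − x̂ obeys ė = (A − LC)e regardless of the input u, so the choice of L is again a pole placement problem, now for the pair (Aᵀ, Cᵀ), by the duality principle.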


More information

Solution for Homework 5

Solution for Homework 5 Solution for Homework 5 ME243A/ECE23A Fall 27 Exercise 1 The computation of the reachable subspace in continuous time can be handled easily introducing the concepts of inner product, orthogonal complement

More information

The goal of this chapter is to study linear systems of ordinary differential equations: dt,..., dx ) T

The goal of this chapter is to study linear systems of ordinary differential equations: dt,..., dx ) T 1 1 Linear Systems The goal of this chapter is to study linear systems of ordinary differential equations: ẋ = Ax, x(0) = x 0, (1) where x R n, A is an n n matrix and ẋ = dx ( dt = dx1 dt,..., dx ) T n.

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors /88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix

More information

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes Repeated Eigenvalues and Symmetric Matrices. Introduction In this Section we further develop the theory of eigenvalues and eigenvectors in two distinct directions. Firstly we look at matrices where one

More information

Autonomous system = system without inputs

Autonomous system = system without inputs Autonomous system = system without inputs State space representation B(A,C) = {y there is x, such that σx = Ax, y = Cx } x is the state, n := dim(x) is the state dimension, y is the output Polynomial representation

More information

Stat 159/259: Linear Algebra Notes

Stat 159/259: Linear Algebra Notes Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the

More information

Nonlinear Control Lecture 5: Stability Analysis II

Nonlinear Control Lecture 5: Stability Analysis II Nonlinear Control Lecture 5: Stability Analysis II Farzaneh Abdollahi Department of Electrical Engineering Amirkabir University of Technology Fall 2010 Farzaneh Abdollahi Nonlinear Control Lecture 5 1/41

More information

Chapter 7. Canonical Forms. 7.1 Eigenvalues and Eigenvectors

Chapter 7. Canonical Forms. 7.1 Eigenvalues and Eigenvectors Chapter 7 Canonical Forms 7.1 Eigenvalues and Eigenvectors Definition 7.1.1. Let V be a vector space over the field F and let T be a linear operator on V. An eigenvalue of T is a scalar λ F such that there

More information

CDS Solutions to Final Exam

CDS Solutions to Final Exam CDS 22 - Solutions to Final Exam Instructor: Danielle C Tarraf Fall 27 Problem (a) We will compute the H 2 norm of G using state-space methods (see Section 26 in DFT) We begin by finding a minimal state-space

More information

ECEN 605 LINEAR SYSTEMS. Lecture 7 Solution of State Equations 1/77

ECEN 605 LINEAR SYSTEMS. Lecture 7 Solution of State Equations 1/77 1/77 ECEN 605 LINEAR SYSTEMS Lecture 7 Solution of State Equations Solution of State Space Equations Recall from the previous Lecture note, for a system: ẋ(t) = A x(t) + B u(t) y(t) = C x(t) + D u(t),

More information

SYSTEMTEORI - ÖVNING 1. In this exercise, we will learn how to solve the following linear differential equation:

SYSTEMTEORI - ÖVNING 1. In this exercise, we will learn how to solve the following linear differential equation: SYSTEMTEORI - ÖVNING 1 GIANANTONIO BORTOLIN AND RYOZO NAGAMUNE In this exercise, we will learn how to solve the following linear differential equation: 01 ẋt Atxt, xt 0 x 0, xt R n, At R n n The equation

More information

Lecture Notes for Math 524

Lecture Notes for Math 524 Lecture Notes for Math 524 Dr Michael Y Li October 19, 2009 These notes are based on the lecture notes of Professor James S Muldowney, the books of Hale, Copple, Coddington and Levinson, and Perko They

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

Chapter Two Elements of Linear Algebra

Chapter Two Elements of Linear Algebra Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to

More information

CDS Solutions to the Midterm Exam

CDS Solutions to the Midterm Exam CDS 22 - Solutions to the Midterm Exam Instructor: Danielle C. Tarraf November 6, 27 Problem (a) Recall that the H norm of a transfer function is time-delay invariant. Hence: ( ) Ĝ(s) = s + a = sup /2

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

ECE504: Lecture 8. D. Richard Brown III. Worcester Polytechnic Institute. 28-Oct-2008

ECE504: Lecture 8. D. Richard Brown III. Worcester Polytechnic Institute. 28-Oct-2008 ECE504: Lecture 8 D. Richard Brown III Worcester Polytechnic Institute 28-Oct-2008 Worcester Polytechnic Institute D. Richard Brown III 28-Oct-2008 1 / 30 Lecture 8 Major Topics ECE504: Lecture 8 We are

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science : Dynamic Systems Spring 2011

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science : Dynamic Systems Spring 2011 MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science 6.4: Dynamic Systems Spring Homework Solutions Exercise 3. a) We are given the single input LTI system: [

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

Nonlinear Observers. Jaime A. Moreno. Eléctrica y Computación Instituto de Ingeniería Universidad Nacional Autónoma de México

Nonlinear Observers. Jaime A. Moreno. Eléctrica y Computación Instituto de Ingeniería Universidad Nacional Autónoma de México Nonlinear Observers Jaime A. Moreno JMorenoP@ii.unam.mx Eléctrica y Computación Instituto de Ingeniería Universidad Nacional Autónoma de México XVI Congreso Latinoamericano de Control Automático October

More information

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Throughout these notes we assume V, W are finite dimensional inner product spaces over C. Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

More information

1 Continuous-time Systems

1 Continuous-time Systems Observability Completely controllable systems can be restructured by means of state feedback to have many desirable properties. But what if the state is not available for feedback? What if only the output

More information

Module 09 From s-domain to time-domain From ODEs, TFs to State-Space Modern Control

Module 09 From s-domain to time-domain From ODEs, TFs to State-Space Modern Control Module 09 From s-domain to time-domain From ODEs, TFs to State-Space Modern Control Ahmad F. Taha EE 3413: Analysis and Desgin of Control Systems Email: ahmad.taha@utsa.edu Webpage: http://engineering.utsa.edu/

More information

CONTROL DESIGN FOR SET POINT TRACKING

CONTROL DESIGN FOR SET POINT TRACKING Chapter 5 CONTROL DESIGN FOR SET POINT TRACKING In this chapter, we extend the pole placement, observer-based output feedback design to solve tracking problems. By tracking we mean that the output is commanded

More information

MIT Final Exam Solutions, Spring 2017

MIT Final Exam Solutions, Spring 2017 MIT 8.6 Final Exam Solutions, Spring 7 Problem : For some real matrix A, the following vectors form a basis for its column space and null space: C(A) = span,, N(A) = span,,. (a) What is the size m n of

More information

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2 Contents Preface for the Instructor xi Preface for the Student xv Acknowledgments xvii 1 Vector Spaces 1 1.A R n and C n 2 Complex Numbers 2 Lists 5 F n 6 Digression on Fields 10 Exercises 1.A 11 1.B Definition

More information

In these chapter 2A notes write vectors in boldface to reduce the ambiguity of the notation.

In these chapter 2A notes write vectors in boldface to reduce the ambiguity of the notation. 1 2 Linear Systems In these chapter 2A notes write vectors in boldface to reduce the ambiguity of the notation 21 Matrix ODEs Let and is a scalar A linear function satisfies Linear superposition ) Linear

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

Chap 3. Linear Algebra

Chap 3. Linear Algebra Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions

More information

Linear ODEs. Existence of solutions to linear IVPs. Resolvent matrix. Autonomous linear systems

Linear ODEs. Existence of solutions to linear IVPs. Resolvent matrix. Autonomous linear systems Linear ODEs p. 1 Linear ODEs Existence of solutions to linear IVPs Resolvent matrix Autonomous linear systems Linear ODEs Definition (Linear ODE) A linear ODE is a differential equation taking the form

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

Linear Algebra- Final Exam Review

Linear Algebra- Final Exam Review Linear Algebra- Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.

More information

Grammians. Matthew M. Peet. Lecture 20: Grammians. Illinois Institute of Technology

Grammians. Matthew M. Peet. Lecture 20: Grammians. Illinois Institute of Technology Grammians Matthew M. Peet Illinois Institute of Technology Lecture 2: Grammians Lyapunov Equations Proposition 1. Suppose A is Hurwitz and Q is a square matrix. Then X = e AT s Qe As ds is the unique solution

More information

Numerical Linear Algebra Homework Assignment - Week 2

Numerical Linear Algebra Homework Assignment - Week 2 Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.

More information

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to 1.1. Introduction Linear algebra is a specific branch of mathematics dealing with the study of vectors, vector spaces with functions that

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Module 07 Controllability and Controller Design of Dynamical LTI Systems

Module 07 Controllability and Controller Design of Dynamical LTI Systems Module 07 Controllability and Controller Design of Dynamical LTI Systems Ahmad F. Taha EE 5143: Linear Systems and Control Email: ahmad.taha@utsa.edu Webpage: http://engineering.utsa.edu/ataha October

More information

Repeated Eigenvalues and Symmetric Matrices

Repeated Eigenvalues and Symmetric Matrices Repeated Eigenvalues and Symmetric Matrices. Introduction In this Section we further develop the theory of eigenvalues and eigenvectors in two distinct directions. Firstly we look at matrices where one

More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date April 29, 23 2 Contents Motivation for the course 5 2 Euclidean n dimensional Space 7 2. Definition of n Dimensional Euclidean Space...........

More information

Control Systems. Internal Stability - LTI systems. L. Lanari

Control Systems. Internal Stability - LTI systems. L. Lanari Control Systems Internal Stability - LTI systems L. Lanari outline LTI systems: definitions conditions South stability criterion equilibrium points Nonlinear systems: equilibrium points examples stable

More information

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

More information

Lecture 1: Review of linear algebra

Lecture 1: Review of linear algebra Lecture 1: Review of linear algebra Linear functions and linearization Inverse matrix, least-squares and least-norm solutions Subspaces, basis, and dimension Change of basis and similarity transformations

More information

Control Systems Design

Control Systems Design ELEC4410 Control Systems Design Lecture 18: State Feedback Tracking and State Estimation Julio H. Braslavsky julio@ee.newcastle.edu.au School of Electrical Engineering and Computer Science Lecture 18:

More information

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND

More information

GQE ALGEBRA PROBLEMS

GQE ALGEBRA PROBLEMS GQE ALGEBRA PROBLEMS JAKOB STREIPEL Contents. Eigenthings 2. Norms, Inner Products, Orthogonality, and Such 6 3. Determinants, Inverses, and Linear (In)dependence 4. (Invariant) Subspaces 3 Throughout

More information

Bohl exponent for time-varying linear differential-algebraic equations

Bohl exponent for time-varying linear differential-algebraic equations Bohl exponent for time-varying linear differential-algebraic equations Thomas Berger Institute of Mathematics, Ilmenau University of Technology, Weimarer Straße 25, 98693 Ilmenau, Germany, thomas.berger@tu-ilmenau.de.

More information

Linear State Feedback Controller Design

Linear State Feedback Controller Design Assignment For EE5101 - Linear Systems Sem I AY2010/2011 Linear State Feedback Controller Design Phang Swee King A0033585A Email: king@nus.edu.sg NGS/ECE Dept. Faculty of Engineering National University

More information

Matrix Theory. A.Holst, V.Ufnarovski

Matrix Theory. A.Holst, V.Ufnarovski Matrix Theory AHolst, VUfnarovski 55 HINTS AND ANSWERS 9 55 Hints and answers There are two different approaches In the first one write A as a block of rows and note that in B = E ij A all rows different

More information

NOTES ON LINEAR ODES

NOTES ON LINEAR ODES NOTES ON LINEAR ODES JONATHAN LUK We can now use all the discussions we had on linear algebra to study linear ODEs Most of this material appears in the textbook in 21, 22, 23, 26 As always, this is a preliminary

More information

18.06 Problem Set 8 - Solutions Due Wednesday, 14 November 2007 at 4 pm in

18.06 Problem Set 8 - Solutions Due Wednesday, 14 November 2007 at 4 pm in 806 Problem Set 8 - Solutions Due Wednesday, 4 November 2007 at 4 pm in 2-06 08 03 Problem : 205+5+5+5 Consider the matrix A 02 07 a Check that A is a positive Markov matrix, and find its steady state

More information

MAT Linear Algebra Collection of sample exams

MAT Linear Algebra Collection of sample exams MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system

More information

Algebra II. Paulius Drungilas and Jonas Jankauskas

Algebra II. Paulius Drungilas and Jonas Jankauskas Algebra II Paulius Drungilas and Jonas Jankauskas Contents 1. Quadratic forms 3 What is quadratic form? 3 Change of variables. 3 Equivalence of quadratic forms. 4 Canonical form. 4 Normal form. 7 Positive

More information

A brief introduction to ordinary differential equations

A brief introduction to ordinary differential equations Chapter 1 A brief introduction to ordinary differential equations 1.1 Introduction An ordinary differential equation (ode) is an equation that relates a function of one variable, y(t), with its derivative(s)

More information

Intro. Computer Control Systems: F8

Intro. Computer Control Systems: F8 Intro. Computer Control Systems: F8 Properties of state-space descriptions and feedback Dave Zachariah Dept. Information Technology, Div. Systems and Control 1 / 22 dave.zachariah@it.uu.se F7: Quiz! 2

More information

João P. Hespanha. January 16, 2009

João P. Hespanha. January 16, 2009 LINEAR SYSTEMS THEORY João P. Hespanha January 16, 2009 Disclaimer: This is a draft and probably contains a few typos. Comments and information about typos are welcome. Please contact the author at hespanha@ece.ucsb.edu.

More information

Bare-bones outline of eigenvalue theory and the Jordan canonical form

Bare-bones outline of eigenvalue theory and the Jordan canonical form Bare-bones outline of eigenvalue theory and the Jordan canonical form April 3, 2007 N.B.: You should also consult the text/class notes for worked examples. Let F be a field, let V be a finite-dimensional

More information

Supplementary chapters

Supplementary chapters The Essentials of Linear State-Space Systems Supplementary chapters J. Dwight Aplevich This document is copyright 26 2 J. D. Aplevich, and supplements the book The Ussentials of Linear State-Space Systems,

More information

MAT 2037 LINEAR ALGEBRA I web:

MAT 2037 LINEAR ALGEBRA I web: MAT 237 LINEAR ALGEBRA I 2625 Dokuz Eylül University, Faculty of Science, Department of Mathematics web: Instructor: Engin Mermut http://kisideuedutr/enginmermut/ HOMEWORK 2 MATRIX ALGEBRA Textbook: Linear

More information

w T 1 w T 2. w T n 0 if i j 1 if i = j

w T 1 w T 2. w T n 0 if i j 1 if i = j Lyapunov Operator Let A F n n be given, and define a linear operator L A : C n n C n n as L A (X) := A X + XA Suppose A is diagonalizable (what follows can be generalized even if this is not possible -

More information