ECEN 605 LINEAR SYSTEMS. Lecture 7 Solution of State Equations 1/77


2 Solution of State Space Equations

Recall from the previous lecture note that, for a system

$$\dot{x}(t) = A\,x(t) + B\,u(t), \tag{1a}$$
$$y(t) = C\,x(t) + D\,u(t), \tag{1b}$$

the solutions $x(t)$ and $y(t)$ of the state equation are

$$x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau \tag{2}$$

and

$$y(t) = C\,e^{At}x(0) + C\int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau + D\,u(t).$$

2/77
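As a numerical illustration of eq. (2) (a sketch added here, not part of the original notes), the following Python snippet evaluates the zero-input and forced responses using `scipy.linalg.expm` for the matrix exponential. The system matrices are an arbitrary example, and the convolution integral is approximated by a trapezoidal rule.

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary example system (not from the lecture): eigenvalues -1, -2
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])

def zero_input_response(t):
    """x(t) = e^{At} x(0), the u = 0 case of eq. (2)."""
    return expm(A * t) @ x0

def forced_response(t, u=lambda tau: 1.0, n=2001):
    """x(t) = e^{At} x(0) + int_0^t e^{A(t-tau)} B u(tau) dtau,
    with the integral approximated by the trapezoidal rule on n points."""
    taus = np.linspace(0.0, t, n)
    vals = np.array([expm(A * (t - tau)) @ (B[:, 0] * u(tau)) for tau in taus])
    dt = taus[1] - taus[0]
    integral = dt * (0.5 * vals[0] + vals[1:-1].sum(axis=0) + 0.5 * vals[-1])
    return expm(A * t) @ x0 + integral

# For a unit-step input the state settles at -A^{-1}B = [0.5, 0]^T
print(forced_response(8.0))
```

For this stable example the transient decays like $e^{-t}$, so the forced response at $t = 8$ is already very close to the steady state $-A^{-1}B$.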

3 State Transition Matrix

In the expression for the solution $x(t)$ of the state equation, the term $e^{At}$ is called the state transition matrix and is commonly denoted

$$\Phi(t) := e^{At}. \tag{3}$$

Let us examine some properties of the state transition matrix that will be used in later chapters. Assume zero input, $u(t) = 0$. Then the system

$$\dot{x}(t) = Ax(t) \tag{4}$$

has the solution

$$x(t) = e^{At}x(0) = \Phi(t)x(0). \tag{5}$$

At $t = 0$, we know $\Phi(0) = I$. Differentiating eq. (5), we have

$$\dot{x}(t) = \dot{\Phi}(t)x(0) = Ax(t). \tag{6}$$

3/77

4 State Transition Matrix (cont.)

At $t = 0$,

$$\dot{x}(0) = \dot{\Phi}(0)x(0) = Ax(0) \tag{7}$$

leads us to

$$\dot{\Phi}(0) = A. \tag{8}$$

Therefore, the state transition matrix $\Phi(t)$ has the following properties:

$$\Phi(0) = I \quad \text{and} \quad \dot{\Phi}(0) = A. \tag{9}$$

4/77

5 State Transition Matrix (cont.)

In the following, we summarize the properties of the state transition matrix.

(a) $\Phi(0) = e^{A0} = I$
(b) $\Phi(t) = e^{At} = \left(e^{-At}\right)^{-1} = \Phi^{-1}(-t)$
(c) $\Phi^{-1}(t) = \Phi(-t)$
(d) $\Phi(a+b) = e^{A(a+b)} = e^{Aa}e^{Ab} = \Phi(a)\Phi(b)$
(e) $\Phi^{n}(t) = \left(e^{At}\right)^{n} = e^{Ant} = \Phi(nt)$
(f) $\Phi(a-b)\Phi(b-c) = \Phi(a)\Phi(-b)\Phi(b)\Phi(-c) = \Phi(a)\Phi(-c) = \Phi(a-c)$

5/77
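These properties are easy to check numerically. The sketch below (illustrative, not part of the original notes) verifies them for an arbitrary matrix using `scipy.linalg.expm`.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # any square matrix will do
Phi = lambda t: expm(A * t)                # state transition matrix Phi(t) = e^{At}

a, b = 0.7, -0.3
assert np.allclose(Phi(0.0), np.eye(2))                            # (a) Phi(0) = I
assert np.allclose(Phi(a), np.linalg.inv(Phi(-a)))                 # (b)
assert np.allclose(np.linalg.inv(Phi(a)), Phi(-a))                 # (c)
assert np.allclose(Phi(a + b), Phi(a) @ Phi(b))                    # (d)
assert np.allclose(np.linalg.matrix_power(Phi(a), 3), Phi(3 * a))  # (e) with n = 3
```

Property (d) relies on the exponents sharing the same matrix $A$; for two different matrices, $e^{A_1}e^{A_2} = e^{A_1+A_2}$ generally fails unless they commute.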

6 State Transition Matrix (cont.)

If

$$A = \begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{bmatrix}, \qquad
e^{At} = \begin{bmatrix} e^{\lambda_1 t} & & & \\ & e^{\lambda_2 t} & & \\ & & \ddots & \\ & & & e^{\lambda_n t} \end{bmatrix}.$$

6/77

7 State Transition Matrix (cont.)

If

$$A = \begin{bmatrix} \lambda & 1 & 0 & 0 & & \\ 0 & \lambda & 1 & 0 & & \\ 0 & 0 & \lambda & 1 & & \\ 0 & 0 & 0 & \lambda & & \\ & & & & \mu & 1 \\ & & & & 0 & \mu \end{bmatrix}, \qquad
e^{At} = \begin{bmatrix} e^{\lambda t} & te^{\lambda t} & \tfrac{1}{2}t^2 e^{\lambda t} & \tfrac{1}{6}t^3 e^{\lambda t} & & \\ 0 & e^{\lambda t} & te^{\lambda t} & \tfrac{1}{2}t^2 e^{\lambda t} & & \\ 0 & 0 & e^{\lambda t} & te^{\lambda t} & & \\ 0 & 0 & 0 & e^{\lambda t} & & \\ & & & & e^{\mu t} & te^{\mu t} \\ & & & & 0 & e^{\mu t} \end{bmatrix}.$$

The state transition matrix contains all the information about the free motions of the system in eq. (4). However, computing the state transition matrix is in general not easy. To develop some approaches to its computation, we first study functions of a square matrix.

7/77

8 Functions of a Square Matrix

Consider a state space model:

$$\dot{x}(t) = Ax(t) + Bu(t)$$
$$y(t) = Cx(t) + Du(t).$$

Taking the Laplace transform, we have

$$sX(s) - x(0) = AX(s) + BU(s)$$
$$Y(s) = CX(s) + DU(s)$$

and

$$\begin{aligned} Y(s) &= C\left[(sI-A)^{-1}BU(s) + (sI-A)^{-1}x(0)\right] + DU(s) \\ &= \left[C(sI-A)^{-1}B + D\right]U(s) + C(sI-A)^{-1}x(0). \end{aligned}$$

8/77

9 Functions of a Square Matrix (cont.)

The transfer function $G(s)$ is obtained by setting the initial condition $x(0) = 0$:

$$Y_u(s) = \underbrace{\left[C(sI-A)^{-1}B + D\right]}_{G(s)}\,U(s). \tag{10}$$

This shows the relationship between a state space representation of the system and its corresponding transfer function. The following example illustrates how to compute the transfer function from a given state space representation of the system.

9/77
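Equation (10) can be evaluated pointwise in code. The sketch below (added for illustration) uses a hypothetical system in controllable canonical form, for which $G(s) = 1/(s^2+3s+2)$ is known in closed form.

```python
import numpy as np

def transfer_function(A, B, C, D, s):
    """Evaluate G(s) = C (sI - A)^{-1} B + D at a complex frequency s (eq. (10))."""
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

# Hypothetical SISO example with poles at -1 and -2
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# For this companion form, G(s) = 1 / (s^2 + 3s + 2); check at s = j
s = 1j
expected = 1.0 / (s**2 + 3 * s + 2)
assert np.isclose(transfer_function(A, B, C, D, s)[0, 0], expected)
```

Using `solve` instead of forming `inv(sI - A)` explicitly is the standard, better-conditioned choice.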

10 Functions of a Square Matrix (cont.)

Example

Consider a 3rd order system with 2 inputs and 2 outputs:

$$\dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t) + D\,u(t),$$

with $A \in \mathbb{R}^{3\times 3}$, $B \in \mathbb{R}^{3\times 2}$, $C \in \mathbb{R}^{2\times 3}$ and $D \in \mathbb{R}^{2\times 2}$.

10/77

11 Functions of a Square Matrix (cont.)

Example (cont.)

To compute the transfer function, first form

$$(sI-A)^{-1} = \frac{\operatorname{Adj}[sI-A]}{\det[sI-A]}, \qquad \det[sI-A] = (s+1)(s^2+5s+6) = (s+1)(s+2)(s+3),$$

and expand each entry of $(sI-A)^{-1}$ in partial fractions over the poles $s = -1, -2, -3$.

11/77

12 Functions of a Square Matrix (cont.)

Example (cont.)

Therefore, the state transition matrix is

$$e^{At} = \begin{bmatrix} 2e^{-2t} - e^{-3t} & 2e^{-2t} - 2e^{-3t} & -e^{-t} + 2e^{-2t} - e^{-3t} \\ -e^{-2t} + e^{-3t} & -e^{-2t} + 2e^{-3t} & -e^{-2t} + e^{-3t} \\ 0 & 0 & e^{-t} \end{bmatrix}.$$

The transfer function is $G(s) = C(sI-A)^{-1}B + D$, whose entries are partial-fraction combinations of $\dfrac{1}{s+1}$, $\dfrac{1}{s+2}$ and $\dfrac{1}{s+3}$.

12/77

13 Functions of a Square Matrix (cont.)

Example (cont.)

To solve for $x(t)$ with a given initial condition $x(0)$ and the input $u(t) = \begin{bmatrix} U(t) \\ 0 \end{bmatrix}$, where $U(t)$ is the unit step, we have

$$X(s) = (sI-A)^{-1}x(0) + (sI-A)^{-1}B\,U(s), \qquad U(s) = \begin{bmatrix} \tfrac{1}{s} \\ 0 \end{bmatrix},$$

and each entry of $X(s)$ expands in partial fractions over $s = 0, -1, -2, -3$.

13/77

14 Functions of a Square Matrix (cont.)

Example (cont.)

so that

$$x(t) = \mathcal{L}^{-1}\{X(s)\}, \tag{11}$$

whose components are combinations of the modes $1$, $e^{-t}$, $e^{-2t}$ and $e^{-3t}$.

14/77

15 Leverrier's Algorithm

This algorithm determines $(sI-A)^{-1}$ without symbolic calculation. First write

$$(sI-A)^{-1} = \frac{\operatorname{Adj}[sI-A]}{\det[sI-A]} = \frac{T(s)}{a(s)} \tag{12}$$

where

$$a(s) = \det[sI-A] = s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0$$
$$T(s) = \operatorname{Adj}[sI-A] = T_{n-1}s^{n-1} + T_{n-2}s^{n-2} + \cdots + T_1 s + T_0, \qquad T_i \in \mathbb{R}^{n\times n}.$$

Thus, $(sI-A)^{-1}$ is determined once the matrices $T_i$ and the coefficients $a_j$ are found. Leverrier's algorithm does that as follows. Let

$$a(s) = (s-\lambda_1)(s-\lambda_2)\cdots(s-\lambda_n) \tag{13}$$

15/77

16 Leverrier's Algorithm (cont.)

where the $\lambda_i$ are complex numbers, possibly repeated. Then it is easily seen that

$$a'(s) := \frac{da(s)}{ds} = (s-\lambda_2)\cdots(s-\lambda_n) + (s-\lambda_1)(s-\lambda_3)\cdots(s-\lambda_n) + \cdots + (s-\lambda_1)\cdots(s-\lambda_{n-1}). \tag{14}$$

This leads to the following result.

Lemma (Leverrier I)

$$\frac{a'(s)}{a(s)} = \frac{1}{s-\lambda_1} + \frac{1}{s-\lambda_2} + \cdots + \frac{1}{s-\lambda_n}. \tag{15}$$

To proceed, we note that

$$(sI-A)(sI-A)^{-1} = I \tag{16}$$

16/77

17 Leverrier's Algorithm (cont.)

or

$$s(sI-A)^{-1} - A(sI-A)^{-1} = I \tag{17}$$

and taking the trace of both sides, we have

$$s\operatorname{Trace}\left[(sI-A)^{-1}\right] - \operatorname{Trace}\left(\frac{AT(s)}{a(s)}\right) = n \tag{18}$$

or

$$s\operatorname{Trace}\left[(sI-A)^{-1}\right] - \operatorname{Trace}\left(\frac{AT_0 + AT_1 s + \cdots + AT_{n-1}s^{n-1}}{a(s)}\right) = n. \tag{19}$$

Lemma (Leverrier II)

$$\operatorname{Trace}(sI-A)^{-1} = \frac{a'(s)}{a(s)} \tag{20}$$

17/77

18 Leverrier's Algorithm (cont.)

Proof

Let $J$ denote the Jordan form of $A$, an upper-triangular matrix with the eigenvalues on the diagonal and entries that are $0$ or $1$ on the superdiagonal:

$$J = \begin{bmatrix} \lambda_1 & * & & \\ & \lambda_2 & * & \\ & & \ddots & * \\ & & & \lambda_n \end{bmatrix} \tag{21}$$

and

$$A = TJT^{-1} \tag{22}$$

for some invertible $T$.

18/77

19 Leverrier's Algorithm (cont.)

Proof (cont.)

Thus,

$$(sI-A)^{-1} = T(sI-J)^{-1}T^{-1} \tag{23}$$

and

$$\begin{aligned}
\operatorname{Trace}(sI-A)^{-1} &= \operatorname{Trace}\left(T(sI-J)^{-1}T^{-1}\right) \\
&= \operatorname{Trace}\left(T^{-1}T(sI-J)^{-1}\right) = \operatorname{Trace}(sI-J)^{-1} \\
&= \frac{1}{s-\lambda_1} + \frac{1}{s-\lambda_2} + \cdots + \frac{1}{s-\lambda_n} \\
&= \frac{a'(s)}{a(s)} \quad \text{(by Lemma (Leverrier I))}. \tag{24}
\end{aligned}$$

19/77

20 Leverrier's Algorithm (cont.)

Proof (cont.)

Then (19) becomes

$$s\,\frac{a'(s)}{a(s)} - \operatorname{Trace}\left(\frac{AT_0 + AT_1 s + \cdots + AT_{n-1}s^{n-1}}{a(s)}\right) = n \tag{25}$$

or

$$s\,a'(s) - \operatorname{Trace}(AT_0) - s\operatorname{Trace}(AT_1) - \cdots - s^{n-1}\operatorname{Trace}(AT_{n-1}) = n\,a(s). \tag{26}$$

20/77

21 Leverrier's Algorithm (cont.)

Proof (cont.)

Now note that

$$s\,a'(s) = n\,a(s) - a_{n-1}s^{n-1} - 2a_{n-2}s^{n-2} - \cdots - (n-2)a_2 s^2 - (n-1)a_1 s - n a_0 \tag{27}$$

so that (26) reduces to

$$-\operatorname{Trace}(AT_0) - s\operatorname{Trace}(AT_1) - s^2\operatorname{Trace}(AT_2) - \cdots - s^{n-1}\operatorname{Trace}(AT_{n-1}) = n a_0 + (n-1)a_1 s + (n-2)a_2 s^2 + \cdots + a_{n-1}s^{n-1}. \tag{28}$$

21/77

22 Leverrier's Algorithm (cont.)

Proof (cont.)

Equating coefficients in (28) we get

$$\begin{aligned}
a_0 &= -\tfrac{1}{n}\operatorname{Trace}(AT_0) \\
a_1 &= -\tfrac{1}{n-1}\operatorname{Trace}(AT_1) \\
&\;\;\vdots \\
a_k &= -\tfrac{1}{n-k}\operatorname{Trace}(AT_k) \tag{29} \\
&\;\;\vdots \\
a_{n-1} &= -\operatorname{Trace}(AT_{n-1}).
\end{aligned}$$

22/77

23 Leverrier's Algorithm (cont.)

Proof (cont.)

Now from $(sI-A)(sI-A)^{-1} = I$ we obtain

$$a(s)I = (sI-A)\left(T_0 + T_1 s + \cdots + T_{n-1}s^{n-1}\right) = a_0 I + a_1 Is + \cdots + a_{n-1}Is^{n-1} + Is^n$$

or

$$-AT_0 + (T_0 - AT_1)s + (T_1 - AT_2)s^2 + \cdots + (T_{n-2} - AT_{n-1})s^{n-1} + T_{n-1}s^n = a_0 I + a_1 Is + a_2 Is^2 + \cdots + a_{n-1}Is^{n-1} + Is^n.$$

23/77

24 Leverrier's Algorithm (cont.)

Proof (cont.)

Equating the matrix coefficients, we get

$$\begin{aligned}
T_{n-1} &= I \\
T_{n-2} &= AT_{n-1} + a_{n-1}I \\
T_{n-3} &= AT_{n-2} + a_{n-2}I \tag{30} \\
&\;\;\vdots \\
T_0 &= AT_1 + a_1 I \\
0 &= AT_0 + a_0 I.
\end{aligned}$$

24/77

25 Leverrier's Algorithm (cont.)

Proof (cont.)

The relations (29) and (30) suggest the following sequence of calculations:

$$\begin{aligned}
T_{n-1} &= I, & a_{n-1} &= -\operatorname{Trace}(AT_{n-1}) \\
T_{n-2} &= AT_{n-1} + a_{n-1}I, & a_{n-2} &= -\tfrac{1}{2}\operatorname{Trace}(AT_{n-2}) \\
T_{n-3} &= AT_{n-2} + a_{n-2}I, & a_{n-3} &= -\tfrac{1}{3}\operatorname{Trace}(AT_{n-3}) \tag{31} \\
&\;\;\vdots \\
T_1 &= AT_2 + a_2 I, & a_1 &= -\tfrac{1}{n-1}\operatorname{Trace}(AT_1) \\
T_0 &= AT_1 + a_1 I, & a_0 &= -\tfrac{1}{n}\operatorname{Trace}(AT_0)
\end{aligned}$$

The computation sequence above constitutes Leverrier's algorithm.

25/77
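The sequence (31) translates directly into code. Below is a short Python sketch (not part of the original notes) of Leverrier's algorithm, cross-checked against `numpy.poly`, which computes characteristic-polynomial coefficients from the eigenvalues.

```python
import numpy as np

def leverrier(A):
    """Leverrier's algorithm, eq. (31): returns (a, T) with
    det(sI - A) = s^n + a[n-1] s^{n-1} + ... + a[0] and
    Adj(sI - A) = T[n-1] s^{n-1} + ... + T[0]."""
    n = A.shape[0]
    T = [np.zeros((n, n))] * n
    a = [0.0] * n
    T[n - 1] = np.eye(n)
    a[n - 1] = -np.trace(A @ T[n - 1])
    for k in range(n - 2, -1, -1):
        T[k] = A @ T[k + 1] + a[k + 1] * np.eye(n)   # T_k = A T_{k+1} + a_{k+1} I
        a[k] = -np.trace(A @ T[k]) / (n - k)         # a_k = -Trace(A T_k)/(n-k)
    return a, T

A = np.array([[3.0, 1.0], [1.0, 2.0]])               # char. polynomial s^2 - 5s + 5
a, T = leverrier(A)
assert np.allclose(a, [5.0, -5.0])                   # [a_0, a_1]
assert np.allclose(np.poly(A), [1.0, -5.0, 5.0])     # independent cross-check
assert np.allclose(A @ T[0] + a[0] * np.eye(2), 0)   # final identity 0 = A T_0 + a_0 I
```

The last assertion checks the terminating relation of (30), a useful built-in consistency test when running the algorithm on a new matrix.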

26 Leverrier's Algorithm (cont.)

Example

Let $A$ be a $4\times 4$ matrix and write

$$(sI-A)^{-1} = \frac{T_3 s^3 + T_2 s^2 + T_1 s + T_0}{s^4 + a_3 s^3 + a_2 s^2 + a_1 s + a_0}.$$

26/77

27 Leverrier's Algorithm (cont.)

Example (cont.)

$$\begin{aligned}
T_3 &= I, & a_3 &= -\operatorname{Trace}[AT_3] = -\operatorname{Trace}[A] = -4 \\
T_2 &= AT_3 + a_3 I = A - 4I, & a_2 &= -\tfrac{1}{2}\operatorname{Trace}[AT_2] = 2 \\
T_1 &= AT_2 + a_2 I, & a_1 &= -\tfrac{1}{3}\operatorname{Trace}[AT_1] = 5 \\
T_0 &= AT_1 + a_1 I, & a_0 &= -\tfrac{1}{4}\operatorname{Trace}[AT_0] = 2
\end{aligned}$$

27/77

28 Leverrier's Algorithm (cont.)

Example (cont.)

Therefore,

$$\det[sI-A] = s^4 - 4s^3 + 2s^2 + 5s + 2$$

and

$$\operatorname{Adj}[sI-A] = Is^3 + T_2 s^2 + T_1 s + T_0.$$

28/77

29 Leverrier's Algorithm (cont.)

Example (cont.)

and

$$(sI-A)^{-1} = \frac{Is^3 + T_2 s^2 + T_1 s + T_0}{s^4 - 4s^3 + 2s^2 + 5s + 2},$$

a $4\times 4$ matrix of polynomial entries of degree at most three over the common denominator $\det[sI-A]$.

29/77

30 Cayley-Hamilton Theorem

An eigenvalue of an $n\times n$ real matrix $A$ is a real or complex number $\lambda$ such that $Ax = \lambda x$ for some nonzero vector $x$. Any nonzero vector $x$ satisfying $Ax = \lambda x$ is called an eigenvector of $A$ associated with the eigenvalue $\lambda$. To find the eigenvalues of $A$, we solve

$$(A - \lambda I)x = 0. \tag{32}$$

In order for eq. (32) to have a nonzero solution $x$, the matrix $(A - \lambda I)$ must be singular. Thus, the eigenvalues of $A$ are just the solutions of the following $n$th order polynomial equation:

$$\Delta(s) = \det(sI - A) = 0. \tag{33}$$

$\Delta(s)$ is called the characteristic polynomial of $A$.

30/77

31 Cayley-Hamilton Theorem (cont.)

Theorem (Cayley-Hamilton Theorem)

Let $A$ be an $n\times n$ matrix and let

$$\Delta(s) = \det(sI-A) = s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0$$

be the characteristic polynomial of $A$. Then

$$\Delta(A) = A^n + a_{n-1}A^{n-1} + \cdots + a_1 A + a_0 I = 0. \tag{34}$$

31/77

32 Cayley-Hamilton Theorem (cont.)

Proof

$(sI-A)^{-1}$ can always be written as

$$(sI-A)^{-1} = \frac{1}{\Delta(s)}\left(R_{n-1}s^{n-1} + R_{n-2}s^{n-2} + \cdots + R_1 s + R_0\right)$$

where $\Delta(s) = \det(sI-A) = s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0$ and $R_i$, $i = 0, 1, \ldots, n-1$, are constant matrices formed from the adjoint of $(sI-A)$.

32/77

33 Cayley-Hamilton Theorem (cont.)

Proof (cont.)

Write

$$\begin{aligned}
\Delta(s)I &= (sI-A)\left(R_{n-1}s^{n-1} + R_{n-2}s^{n-2} + \cdots + R_1 s + R_0\right) \\
&= (sI-A)R_{n-1}s^{n-1} + (sI-A)R_{n-2}s^{n-2} + \cdots + (sI-A)R_1 s + (sI-A)R_0 \\
&= R_{n-1}s^n + (R_{n-2} - AR_{n-1})s^{n-1} + (R_{n-3} - AR_{n-2})s^{n-2} + \cdots + (R_0 - AR_1)s - AR_0.
\end{aligned}$$

Since $\Delta(s)I = Is^n + a_{n-1}Is^{n-1} + \cdots + a_1 Is + a_0 I$,

33/77

34 Cayley-Hamilton Theorem (cont.)

Proof (cont.)

we have, by matching the coefficients,

$$\begin{aligned}
R_{n-1} &= I \\
R_{n-2} &= AR_{n-1} + a_{n-1}I \\
R_{n-3} &= AR_{n-2} + a_{n-2}I \\
&\;\;\vdots \\
R_0 &= AR_1 + a_1 I \\
0 &= AR_0 + a_0 I.
\end{aligned}$$

34/77

35 Cayley-Hamilton Theorem (cont.)

Proof (cont.)

Substituting the $R_i$ successively from the bottom, we have

$$\begin{aligned}
0 &= AR_0 + a_0 I \\
&= A^2 R_1 + a_1 A + a_0 I \\
&= A^3 R_2 + a_2 A^2 + a_1 A + a_0 I \\
&\;\;\vdots \\
&= A^n + a_{n-1}A^{n-1} + a_{n-2}A^{n-2} + \cdots + a_1 A + a_0 I = \Delta(A).
\end{aligned}$$

Therefore, $\Delta(A) = 0$.

35/77
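The theorem is easy to verify numerically. The sketch below (illustrative, not from the lecture) evaluates $\Delta(A)$ by Horner's rule for a random matrix, using `numpy.poly` to obtain the characteristic coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))

# Characteristic polynomial Delta(s) = s^n + a_{n-1} s^{n-1} + ... + a_0
coeffs = np.poly(A)                         # [1, a_{n-1}, ..., a_0]

# Evaluate Delta(A) by Horner's rule on matrices (eq. (34))
Delta_A = np.zeros((n, n))
for c in coeffs:
    Delta_A = Delta_A @ A + c * np.eye(n)

assert np.allclose(Delta_A, np.zeros((n, n)), atol=1e-8)
```

The residual is zero only up to floating-point roundoff; for ill-conditioned or large matrices a looser tolerance may be needed.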

36 Cayley-Hamilton Theorem

The Cayley-Hamilton theorem implies that $A^n$ can be expressed as a linear combination of $I, A, A^2, \ldots, A^{n-1}$. Furthermore, multiplying (34) by $A$, we have

$$A^{n+1} + a_{n-1}A^n + a_{n-2}A^{n-1} + \cdots + a_1 A^2 + a_0 A = 0,$$

which implies that $A^{n+1}$ can also be expressed as a linear combination of $I, A, A^2, \ldots, A^{n-1}$. In other words, any matrix polynomial $f(A)$ of arbitrary degree can always be expressed as a linear combination of $I, A, A^2, \ldots, A^{n-1}$, i.e.,

$$f(A) = \beta_{n-1}A^{n-1} + \beta_{n-2}A^{n-2} + \cdots + \beta_1 A + \beta_0 I$$

with appropriate values of $\beta_i$.

36/77

37 Application of C-H Theorem: Computing $A^{-1}$

Let $\Delta(s)$ be the characteristic polynomial of a matrix $A \in \mathbb{R}^{n\times n}$:

$$\Delta(s) = s^n + a_{n-1}s^{n-1} + a_{n-2}s^{n-2} + \cdots + a_1 s + a_0.$$

From the Cayley-Hamilton theorem, we have

$$\Delta(A) = A^n + a_{n-1}A^{n-1} + a_{n-2}A^{n-2} + \cdots + a_1 A + a_0 I = 0. \tag{35}$$

Multiplying both sides by $A^{-1}$, we have after rearrangement

$$A^{-1} = -\frac{1}{a_0}\left(A^{n-1} + a_{n-1}A^{n-2} + \cdots + a_2 A + a_1 I\right). \tag{36}$$

37/77

38 Application of C-H Theorem (cont.): Computing $A^{-1}$

Example

$$A = \begin{bmatrix} 3 & 1 \\ 1 & 2 \end{bmatrix}.$$

Then the characteristic polynomial is

$$\Delta(s) = (s-3)(s-2) - 1 = s^2 - 5s + 5.$$

Therefore,

$$A^{-1} = -\frac{1}{5}(A - 5I) = -\frac{1}{5}\left(\begin{bmatrix} 3 & 1 \\ 1 & 2 \end{bmatrix} - \begin{bmatrix} 5 & 0 \\ 0 & 5 \end{bmatrix}\right) = \frac{1}{5}\begin{bmatrix} 2 & -1 \\ -1 & 3 \end{bmatrix}.$$

38/77
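A short numerical confirmation of this example (added for illustration):

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])
# Delta(s) = s^2 - 5s + 5, so a_1 = -5, a_0 = 5 and, by eq. (36),
# A^{-1} = -(1/a_0)(A + a_1 I) = -(1/5)(A - 5I)
A_inv = -(1.0 / 5.0) * (A - 5.0 * np.eye(2))

assert np.allclose(A_inv @ A, np.eye(2))
assert np.allclose(A_inv, np.array([[0.4, -0.2], [-0.2, 0.6]]))
```

For a 2×2 matrix this is just the classical adjugate formula in disguise, but eq. (36) scales to any size once the characteristic coefficients are known.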

39 Application of C-H Theorem (cont.): Computing $f(A)$

Proceeding further, consider any polynomial $f(s)$. We can write

$$f(s) = q(s)\Delta(s) + h(s),$$

where $h(s)$ has degree less than $n$. This implies that $f(\lambda_i) = h(\lambda_i)$ for all eigenvalues $\lambda_i$ of $A$. Applying the Cayley-Hamilton theorem, $\Delta(A) = 0$, we have

$$f(A) = q(A)\Delta(A) + h(A) = h(A).$$

39/77

40 Application of C-H Theorem (cont.): Computing $f(A)$

Example

Using the $A$ given in the previous example, compute

$$f(A) = A^4 + 3A^3 + 2A^2 + A + I.$$

The calculation can be simplified as follows. Write

$$f(s) = q(s)\Delta(s) + h(s) = \left(s^2 + 8s + 37\right)\Delta(s) + \underbrace{(146s - 184)}_{h(s)}.$$

Thus,

$$f(A) = h(A) = 146A - 184I.$$

40/77
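The reduction can be confirmed numerically (a check added for illustration):

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])
I = np.eye(2)

# Direct evaluation of f(A) = A^4 + 3A^3 + 2A^2 + A + I
f_A = np.linalg.matrix_power(A, 4) + 3 * np.linalg.matrix_power(A, 3) \
      + 2 * np.linalg.matrix_power(A, 2) + A + I

# Reduced form from the division f(s) = q(s)Delta(s) + h(s): f(A) = 146A - 184I
assert np.allclose(f_A, 146 * A - 184 * I)
```

Both sides evaluate to $\begin{bmatrix} 254 & 146 \\ 146 & 108 \end{bmatrix}$, so the degree-4 polynomial collapses to a single matrix-scalar operation.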

41 Application of C-H Theorem (cont.): Computing $f(A)$

Remark

An obvious way of computing $h(s)$ is to carry out the long division. However, when the long division requires lengthy computation, $h(s)$ can be calculated directly by letting

$$h(s) = \beta_{n-1}s^{n-1} + \beta_{n-2}s^{n-2} + \cdots + \beta_1 s + \beta_0.$$

The unknowns $\beta_i$ can then be found from the relationships

$$f(s)\big|_{s=\lambda_i} = h(s)\big|_{s=\lambda_i}, \qquad i = 1, 2, \ldots, n,$$

where the $\lambda_i$ are the eigenvalues of $A$. This idea extends to any analytic function $f(s)$ and to matrices $A$ with repeated eigenvalues.

41/77

42 Application of C-H Theorem (cont.): Computing $f(A)$

Theorem

Let $A$ be an $n\times n$ matrix with characteristic polynomial

$$\Delta(s) = \prod_{i=1}^{m}(s-\lambda_i)^{n_i}, \qquad n = \sum_{i=1}^{m} n_i.$$

Let $f(s)$ be any analytic function and let $h(s)$ be the polynomial of degree $n-1$

$$h(s) = \beta_{n-1}s^{n-1} + \beta_{n-2}s^{n-2} + \cdots + \beta_1 s + \beta_0.$$

Then $f(A) = h(A)$ if the coefficients $\beta_i$ of $h(s)$ are chosen such that

$$\left.\frac{d^l f(s)}{ds^l}\right|_{s=\lambda_i} = \left.\frac{d^l h(s)}{ds^l}\right|_{s=\lambda_i}, \qquad l = 0, 1, \ldots, n_i - 1;\; i = 1, 2, \ldots, m.$$

42/77

43 Application of C-H Theorem (cont.): Computing $f(A)$

Example

Compute $A^{100}$ with

$$A = \begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix}.$$

Define

$$f(s) = s^{100}, \qquad h(s) = \beta_1 s + \beta_0.$$

Note that

$$\Delta(s) = s(s+2) + 1 = s^2 + 2s + 1 = (s+1)^2 = 0$$

and the eigenvalues of $A$ are $\{-1, -1\}$.

43/77

44 Application of C-H Theorem (cont.): Computing $f(A)$

Example (cont.)

To determine the $\beta_i$:

$$f(-1) = h(-1) \;\Rightarrow\; (-1)^{100} = -\beta_1 + \beta_0$$
$$f'(-1) = h'(-1) \;\Rightarrow\; 100(-1)^{99} = \beta_1.$$

Thus we have $\beta_1 = -100$ and $\beta_0 = 1 + \beta_1 = -99$, so $h(s) = -100s - 99$. Therefore, from the Cayley-Hamilton theorem,

$$A^{100} = h(A) = -100A - 99I = -100\begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix} - 99\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} -99 & -100 \\ 100 & 101 \end{bmatrix}.$$

44/77
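Since the entries of $A$ and of all its powers stay small here, the reduction can be cross-checked against direct exponentiation (an illustrative check, not from the notes):

```python
import numpy as np

A = np.array([[0, 1], [-1, -2]])
# h(s) = -100s - 99 interpolates s^100 and its derivative at the double eigenvalue -1
h_A = -100 * A - 99 * np.eye(2)

assert np.allclose(h_A, np.array([[-99, -100], [100, 101]]))
assert np.allclose(np.linalg.matrix_power(A, 100), h_A)
```

Writing $A = -I + N$ with $N^2 = 0$ gives the same answer by the binomial theorem: $A^{100} = I - 100N$.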

45 Application of C-H Theorem (cont.): Computing $f(A)$

Example

Consider a $3\times 3$ matrix $A$ whose eigenvalues are $\{1, 1, 2\}$. To compute $e^{At}$, we let

$$f(s) = e^{st}, \qquad h(s) = \beta_2 s^2 + \beta_1 s + \beta_0.$$

Since the eigenvalues of $A$ are $\{1, 1, 2\}$, we evaluate the following:

$$f(s)\big|_{s=1} = h(s)\big|_{s=1} \;\Rightarrow\; e^{t} = \beta_2 + \beta_1 + \beta_0$$
$$f'(s)\big|_{s=1} = h'(s)\big|_{s=1} \;\Rightarrow\; te^{t} = 2\beta_2 + \beta_1$$
$$f(s)\big|_{s=2} = h(s)\big|_{s=2} \;\Rightarrow\; e^{2t} = 4\beta_2 + 2\beta_1 + \beta_0.$$

45/77

46 Application of C-H Theorem (cont.): Computing $f(A)$

Example (cont.)

From these, we have

$$\beta_0 = -2te^{t} + e^{2t}, \qquad \beta_1 = 3te^{t} + 2e^{t} - 2e^{2t}, \qquad \beta_2 = e^{2t} - e^{t} - te^{t}$$

and

$$h(s) = \left(e^{2t} - e^{t} - te^{t}\right)s^2 + \left(3te^{t} + 2e^{t} - 2e^{2t}\right)s - 2te^{t} + e^{2t}.$$

46/77

47 Application of C-H Theorem (cont.): Computing $f(A)$

Example (cont.)

Therefore,

$$\begin{aligned}
e^{At} = f(A) = h(A) &= \left(e^{2t} - e^{t} - te^{t}\right)A^2 + \left(3te^{t} + 2e^{t} - 2e^{2t}\right)A + \left(-2te^{t} + e^{2t}\right)I \\
&= \begin{bmatrix} 2e^{t} - e^{2t} & -2te^{t} & 2e^{t} - 2e^{2t} \\ 0 & e^{t} & 0 \\ e^{2t} - e^{t} & te^{t} & 2e^{2t} - e^{t} \end{bmatrix}.
\end{aligned}$$

Remark

Consider two similar square matrices $A_1$ and $A_2 = T^{-1}A_1 T$. They share the same characteristic polynomial, so the same interpolating polynomial $h(s)$ serves both, and $f(A_2) = T^{-1}f(A_1)T$.

47/77

48 Application of C-H Theorem (cont.): Computing $f(A)$

Example

Compute $e^{At}$ with

$$A = \begin{bmatrix} \lambda_1 & 1 & 0 \\ 0 & \lambda_1 & 0 \\ 0 & 0 & \lambda_2 \end{bmatrix}.$$

The characteristic polynomial is $\Delta(s) = (s-\lambda_1)^2(s-\lambda_2)$ and the eigenvalues of $A$ are $\{\lambda_1, \lambda_1, \lambda_2\}$. Define

$$f(s) = e^{st}, \qquad h(s) = \beta_2 s^2 + \beta_1 s + \beta_0.$$

48/77

49 Application of C-H Theorem (cont.): Computing $f(A)$

Example (cont.)

To determine the $\beta_i$:

$$f(s)\big|_{s=\lambda_1} = h(s)\big|_{s=\lambda_1} \;\Rightarrow\; e^{\lambda_1 t} = \beta_2\lambda_1^2 + \beta_1\lambda_1 + \beta_0$$
$$f'(s)\big|_{s=\lambda_1} = h'(s)\big|_{s=\lambda_1} \;\Rightarrow\; te^{\lambda_1 t} = 2\beta_2\lambda_1 + \beta_1$$
$$f(s)\big|_{s=\lambda_2} = h(s)\big|_{s=\lambda_2} \;\Rightarrow\; e^{\lambda_2 t} = \beta_2\lambda_2^2 + \beta_1\lambda_2 + \beta_0.$$

49/77

50 Application of C-H Theorem (cont.): Computing $f(A)$

Example (cont.)

$$\begin{bmatrix} \beta_2 \\ \beta_1 \\ \beta_0 \end{bmatrix}
= \begin{bmatrix} \lambda_1^2 & \lambda_1 & 1 \\ 2\lambda_1 & 1 & 0 \\ \lambda_2^2 & \lambda_2 & 1 \end{bmatrix}^{-1}
\begin{bmatrix} e^{\lambda_1 t} \\ te^{\lambda_1 t} \\ e^{\lambda_2 t} \end{bmatrix}
= \begin{bmatrix} \dfrac{-e^{\lambda_1 t} + (\lambda_1-\lambda_2)te^{\lambda_1 t} + e^{\lambda_2 t}}{(\lambda_1-\lambda_2)^2} \\[2mm]
\dfrac{2\lambda_1 e^{\lambda_1 t} - (\lambda_1^2-\lambda_2^2)te^{\lambda_1 t} - 2\lambda_1 e^{\lambda_2 t}}{(\lambda_1-\lambda_2)^2} \\[2mm]
\dfrac{(\lambda_2^2 - 2\lambda_1\lambda_2)e^{\lambda_1 t} + (\lambda_1-\lambda_2)\lambda_1\lambda_2\, te^{\lambda_1 t} + \lambda_1^2 e^{\lambda_2 t}}{(\lambda_1-\lambda_2)^2} \end{bmatrix}$$

50/77

51 Application of C-H Theorem (cont.): Computing $f(A)$

Example (cont.)

$$\begin{aligned}
h(s) = &\ \frac{-e^{\lambda_1 t} + (\lambda_1-\lambda_2)te^{\lambda_1 t} + e^{\lambda_2 t}}{(\lambda_1-\lambda_2)^2}\,s^2
+ \frac{2\lambda_1 e^{\lambda_1 t} - (\lambda_1^2-\lambda_2^2)te^{\lambda_1 t} - 2\lambda_1 e^{\lambda_2 t}}{(\lambda_1-\lambda_2)^2}\,s \\
&+ \frac{(\lambda_2^2 - 2\lambda_1\lambda_2)e^{\lambda_1 t} + (\lambda_1-\lambda_2)\lambda_1\lambda_2\, te^{\lambda_1 t} + \lambda_1^2 e^{\lambda_2 t}}{(\lambda_1-\lambda_2)^2}
\end{aligned}$$

51/77

52 Application of C-H Theorem (cont.): Computing $f(A)$

Example (cont.)

$$\begin{aligned}
f(A) = e^{At} = h(A) = &\ \frac{-e^{\lambda_1 t} + (\lambda_1-\lambda_2)te^{\lambda_1 t} + e^{\lambda_2 t}}{(\lambda_1-\lambda_2)^2}
\begin{bmatrix} \lambda_1^2 & 2\lambda_1 & 0 \\ 0 & \lambda_1^2 & 0 \\ 0 & 0 & \lambda_2^2 \end{bmatrix} \\
&+ \frac{2\lambda_1 e^{\lambda_1 t} - (\lambda_1^2-\lambda_2^2)te^{\lambda_1 t} - 2\lambda_1 e^{\lambda_2 t}}{(\lambda_1-\lambda_2)^2}
\begin{bmatrix} \lambda_1 & 1 & 0 \\ 0 & \lambda_1 & 0 \\ 0 & 0 & \lambda_2 \end{bmatrix} \\
&+ \frac{(\lambda_2^2 - 2\lambda_1\lambda_2)e^{\lambda_1 t} + (\lambda_1-\lambda_2)\lambda_1\lambda_2\, te^{\lambda_1 t} + \lambda_1^2 e^{\lambda_2 t}}{(\lambda_1-\lambda_2)^2}\, I \\
= &\ \begin{bmatrix} e^{\lambda_1 t} & te^{\lambda_1 t} & 0 \\ 0 & e^{\lambda_1 t} & 0 \\ 0 & 0 & e^{\lambda_2 t} \end{bmatrix}.
\end{aligned}$$

52/77
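The $\beta_i$ formulas above can be checked numerically against `scipy.linalg.expm` for sample eigenvalues (an illustrative sketch; any $\lambda_1 \neq \lambda_2$ works):

```python
import numpy as np
from scipy.linalg import expm

lam1, lam2, t = -1.0, -3.0, 0.5   # sample values for the check
A = np.array([[lam1, 1.0, 0.0], [0.0, lam1, 0.0], [0.0, 0.0, lam2]])

d = (lam1 - lam2) ** 2
e1, te1, e2 = np.exp(lam1 * t), t * np.exp(lam1 * t), np.exp(lam2 * t)
b2 = (-e1 + (lam1 - lam2) * te1 + e2) / d
b1 = (2 * lam1 * e1 - (lam1**2 - lam2**2) * te1 - 2 * lam1 * e2) / d
b0 = ((lam2**2 - 2 * lam1 * lam2) * e1 + (lam1 - lam2) * lam1 * lam2 * te1
      + lam1**2 * e2) / d

# h(A) = b2 A^2 + b1 A + b0 I should equal the matrix exponential e^{At}
h_A = b2 * A @ A + b1 * A + b0 * np.eye(3)
assert np.allclose(h_A, expm(A * t))
```

Setting $t = 0$ gives $b_2 = b_1 = 0$, $b_0 = 1$, i.e. $h(A) = I$, a quick sanity check on the formulas.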

53 Similarity Transformation

The state variables describing a system are not unique, and the state space representation of a given system depends on the choice of state variables. By choosing different bases for the underlying $n$-dimensional state space we obtain different $n$th order representations of the same system. Such systems are called similar. Similar systems have the same transfer function although their state space representations are different. This fact can be exploited to develop equivalent state space representations consisting of matrices that display various structural properties or simplify computation. Now consider a state space representation

$$\dot{x}(t) = Ax(t) + Bu(t)$$
$$y(t) = Cx(t) + Du(t).$$

53/77

54 Similarity Transformation (cont.)

Define $x(t) = Tz(t)$, where $T$ is a square invertible matrix. Then

$$T\dot{z}(t) = ATz(t) + Bu(t), \qquad y(t) = CTz(t) + Du(t)$$

or

$$\dot{z}(t) = T^{-1}ATz(t) + T^{-1}Bu(t), \qquad y(t) = CTz(t) + Du(t),$$

that is,

$$\dot{z}(t) = A_n z(t) + B_n u(t), \qquad y(t) = C_n z(t) + Du(t)$$

with $A_n = T^{-1}AT$, $B_n = T^{-1}B$ and $C_n = CT$. The transfer function of the transformed system may be computed:

$$\begin{aligned}
CT\left(sI - T^{-1}AT\right)^{-1}T^{-1}B + D &= CT\left[T^{-1}(sI-A)T\right]^{-1}T^{-1}B + D \\
&= CT\left[T^{-1}(sI-A)^{-1}T\right]T^{-1}B + D \\
&= CTT^{-1}(sI-A)^{-1}TT^{-1}B + D \\
&= C(sI-A)^{-1}B + D,
\end{aligned}$$

which shows that the two representations have the same transfer function.

54/77
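The invariance of the transfer function under $x = Tz$ can be confirmed numerically; the sketch below (illustrative, not from the notes) uses random matrices, for which $T$ is invertible with probability one.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
D = np.zeros((1, 1))
T = rng.standard_normal((n, n))        # almost surely invertible

# Transformed representation: A_n = T^{-1} A T, B_n = T^{-1} B, C_n = C T
An = np.linalg.inv(T) @ A @ T
Bn = np.linalg.inv(T) @ B
Cn = C @ T

def G(A, B, C, D, s):
    """Transfer function C (sI - A)^{-1} B + D evaluated at s."""
    return C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B) + D

s = 2.0 + 1.0j
assert np.allclose(G(A, B, C, D, s), G(An, Bn, Cn, D, s))
```

Evaluating at a complex test frequency away from the eigenvalues avoids accidental singularity of $sI - A$.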

55 Diagonalization

Theorem

Consider an $n\times n$ matrix $A$ with distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$. Let $x_i$, $i = 1, 2, \ldots, n$, be eigenvectors associated with the $\lambda_i$. Define the $n\times n$ matrix $T = [x_1\; x_2\; x_3 \cdots x_n]$. Then

$$T^{-1}AT = \begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{bmatrix}.$$

55/77

56 Diagonalization (cont.)

Proof

Since $x_i$ is an eigenvector of $A$ associated with $\lambda_i$, $Ax_i = \lambda_i x_i$. So we write

$$Ax_i = \lambda_i x_i = [x_1\; x_2 \cdots x_i \cdots x_n]\begin{bmatrix} 0 \\ \vdots \\ 0 \\ \lambda_i \\ 0 \\ \vdots \\ 0 \end{bmatrix},$$

where $\lambda_i$ appears in the $i$th position.

56/77

57 Diagonalization (cont.)

Proof (cont.)

Consequently,

$$A\underbrace{[x_1\; x_2 \cdots x_i \cdots x_n]}_{T} = \underbrace{[x_1\; x_2 \cdots x_i \cdots x_n]}_{T}\begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{bmatrix}$$

and

$$T^{-1}AT = \begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{bmatrix}.$$

57/77
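In code, the theorem says that the eigenvector matrix returned by `numpy.linalg.eig` diagonalizes $A$ (an illustrative sketch):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # distinct eigenvalues -1, -2
lam, X = np.linalg.eig(A)                  # columns of X are eigenvectors of A

T = X
Lambda = np.linalg.inv(T) @ A @ T          # should equal diag(lam)
assert np.allclose(Lambda, np.diag(lam))
```

With distinct eigenvalues the eigenvectors are linearly independent, so `T` is guaranteed invertible; for repeated eigenvalues this can fail, which motivates the Jordan form developed next.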

58 Jordan Form

We have seen in the previous section that an $n\times n$ matrix $A$ can be converted through a similarity transformation into a diagonal matrix when the eigenvalues of $A$ are distinct. When the eigenvalues of $A$ are not distinct, that is, some are repeated, it is not always possible to diagonalize $A$ as before. In this subsection we deal with this case and develop the so-called Jordan form of $A$, which is the maximally diagonal form that can be obtained. The characteristic polynomial of the $n\times n$ real or complex matrix $A$, denoted $\Pi(s)$, is

$$\Pi(s) = \det(sI-A) = s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0. \tag{37}$$

Write (37) in the factored form

$$\Pi(s) = (s-\lambda_1)^{n_1}(s-\lambda_2)^{n_2}\cdots(s-\lambda_p)^{n_p} \tag{38}$$

58/77

59 Jordan Form (cont.)

where the $\lambda_i$ are distinct, that is, $\lambda_i \neq \lambda_j$ for $i \neq j$, and $n_1 + n_2 + \cdots + n_p = n$. It is easy to establish the following.

59/77

60 Jordan Form (cont.)

Theorem

There exists an $n\times n$ nonsingular matrix $T$ such that

$$T^{-1}AT = \begin{bmatrix} A_1 & & & \\ & A_2 & & \\ & & \ddots & \\ & & & A_p \end{bmatrix}$$

where $A_i$ is $n_i\times n_i$ and $\det(sI - A_i) = (s-\lambda_i)^{n_i}$, for $i = 1, 2, \ldots, p$.

60/77

61 Jordan Form (cont.)

In the remaining part of the section, we develop the Jordan decomposition of the matrices $A_i$. Therefore, we now consider, without loss of generality, an $n\times n$ matrix $A$ such that

$$\Pi(s) = \det(sI-A) = (s-\lambda)^n. \tag{39}$$

The Jordan structure of such an $A$ matrix may be found by forming the matrices

$$(A - \lambda I)^k, \qquad k = 0, 1, 2, \ldots, n. \tag{40}$$

Let $N_k$ denote the null space of $(A-\lambda I)^k$:

$$N_k = \left\{ x \mid (A-\lambda I)^k x = 0 \right\} \tag{41}$$

and denote the dimension of $N_k$ by $\nu_k$:

$$\dim N_k = \nu_k, \qquad k = 0, 1, 2, \ldots, n+1. \tag{42}$$

61/77

62 Jordan Form (cont.)

We remark that

$$0 = \nu_0 \le \nu_1 \le \cdots \le \nu_{n-1} \le \nu_n = n = \nu_{n+1}. \tag{43}$$

The following theorem gives the Jordan structure of $A$.

62/77

63 Jordan Form (cont.)

Theorem

Given the $n\times n$ matrix $A$ with

$$\det(sI-A) = (s-\lambda)^n, \tag{44}$$

there exists an $n\times n$ matrix $T$ with $\det(T) \neq 0$ such that

$$T^{-1}AT = \begin{bmatrix} A_1 & & & \\ & A_2 & & \\ & & \ddots & \\ & & & A_r \end{bmatrix}$$

where each

$$A_j = \begin{bmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{bmatrix}, \qquad j = 1, 2, \ldots, r,$$

is an $n_j\times n_j$ matrix called a Jordan block.

63/77

64 Jordan Form (cont.)

The number $r$ and the sizes $n_j$ of the Jordan blocks $A_j$ can be found from the formula given in the following lemma.

Lemma

Under the assumptions of the previous theorem, the number of Jordan blocks of size $k\times k$ is given by

$$2\nu_k - \nu_{k-1} - \nu_{k+1}, \qquad k = 1, 2, \ldots, n.$$

64/77
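The lemma can be implemented with numerical ranks, since $\nu_k = n - \operatorname{rank}\!\left((A-\lambda I)^k\right)$. The sketch below (added for illustration; the test matrix is a hypothetical example) counts the blocks for a matrix built from one $3\times 3$ and one $1\times 1$ Jordan block.

```python
import numpy as np

def jordan_block_counts(A, lam, tol=1e-9):
    """Number of k x k Jordan blocks at eigenvalue lam: 2*nu_k - nu_{k-1} - nu_{k+1},
    where nu_k = dim null (A - lam*I)^k, computed via numerical rank."""
    n = A.shape[0]
    M = A - lam * np.eye(n)
    nu = [0]                               # nu_0 = 0
    for k in range(1, n + 2):
        rank = np.linalg.matrix_rank(np.linalg.matrix_power(M, k), tol=tol)
        nu.append(n - rank)
    return {k: 2 * nu[k] - nu[k - 1] - nu[k + 1] for k in range(1, n + 1)}

# One 3x3 block and one 1x1 block at lam = 2:
A = np.array([[2.0, 1, 0, 0], [0, 2, 1, 0], [0, 0, 2, 0], [0, 0, 0, 2]])
counts = jordan_block_counts(A, 2.0)
assert counts == {1: 1, 2: 0, 3: 1, 4: 0}
```

Note that numerical rank decisions are delicate for non-exact data; for matrices known only approximately, a staircase-type algorithm with an explicit tolerance is the standard remedy.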

65 Jordan Form (cont.)

Example

Let $A$ be an $11\times 11$ matrix with

$$\det(sI-A) = (s-\lambda)^{11}.$$

Suppose that

$$\nu_1 = 6, \quad \nu_2 = 9, \quad \nu_3 = 10, \quad \nu_4 = 11 = \nu_5 = \nu_6 = \cdots = \nu_{11} = \nu_{12}.$$

65/77

66 Jordan Form (cont.)

Example (cont.)

Then the number of Jordan blocks:

a) of size $1\times 1$ is $2\nu_1 - \nu_0 - \nu_2 = 3$
b) of size $2\times 2$ is $2\nu_2 - \nu_1 - \nu_3 = 2$
c) of size $3\times 3$ is $2\nu_3 - \nu_2 - \nu_4 = 0$
d) of size $4\times 4$ is $2\nu_4 - \nu_3 - \nu_5 = 1$
e) of size $5\times 5$ is $2\nu_5 - \nu_4 - \nu_6 = 0$
⋮
k) of size $11\times 11$ is $2\nu_{11} - \nu_{10} - \nu_{12} = 0$.

66/77

67 Jordan Form (cont.)

Example (cont.)

Therefore, the Jordan form of $A$ is as shown below (one $4\times 4$ block, two $2\times 2$ blocks and three $1\times 1$ blocks):

$$\begin{bmatrix}
\lambda & 1 & & & & & & & & & \\
 & \lambda & 1 & & & & & & & & \\
 & & \lambda & 1 & & & & & & & \\
 & & & \lambda & & & & & & & \\
 & & & & \lambda & 1 & & & & & \\
 & & & & & \lambda & & & & & \\
 & & & & & & \lambda & 1 & & & \\
 & & & & & & & \lambda & & & \\
 & & & & & & & & \lambda & & \\
 & & & & & & & & & \lambda & \\
 & & & & & & & & & & \lambda
\end{bmatrix}$$

67/77

68 Jordan Form (cont.)

This result can now be applied to each of the blocks $A_i$ in the previous theorem to obtain the complete Jordan decomposition in the general case. It is best to illustrate the procedure with an example.

Example

Let $A$ be a $15\times 15$ matrix with

$$\det(sI-A) = (s-\lambda_1)^8(s-\lambda_2)^4(s-\lambda_3)^3$$

with the $\lambda_i$ distinct.

68/77

69 Jordan Form (cont.)

Example (cont.)

We compute $(A - \lambda_i I)^k$ for each eigenvalue and set

$$h_k = \dim N\!\left[(A-\lambda_1 I)^k\right], \qquad k = 0, 1, 2, \ldots, 9.$$

Similarly, let

$$i_j = \dim N\!\left[(A-\lambda_2 I)^j\right], \qquad j = 0, 1, 2, \ldots, 5$$

and

$$l_s = \dim N\!\left[(A-\lambda_3 I)^s\right], \qquad s = 0, 1, 2, 3, 4.$$

69/77

70 Jordan Form (cont.)

Example (cont.)

Suppose, for example,

$$h_0 = 0,\;\; h_1 = 2,\;\; h_2 = 4,\;\; h_3 = 6,\;\; h_4 = 7,\;\; h_5 = h_6 = h_7 = h_8 = 8$$
$$i_0 = 0,\;\; i_1 = 3,\;\; i_2 = 4,\;\; i_3 = i_4 = i_5 = 4$$
$$l_0 = 0,\;\; l_1 = 1,\;\; l_2 = 2,\;\; l_3 = l_4 = 3.$$

By Theorem 29, the Jordan form of $A$ has the following structure:

$$T^{-1}AT = \begin{bmatrix} A_1 & 0 & 0 \\ 0 & A_2 & 0 \\ 0 & 0 & A_3 \end{bmatrix}$$

with $A_1 \in \mathbb{R}^{8\times 8}$, $A_2 \in \mathbb{R}^{4\times 4}$ and $A_3 \in \mathbb{R}^{3\times 3}$. Furthermore, the detailed structure of $A_1$, by Theorem 30, consists of the following numbers of Jordan blocks.

70/77

71 Jordan Form (cont.)

Example (cont.)

a) of size $1\times 1$: $2h_1 - h_0 - h_2 = 0$ blocks
b) of size $2\times 2$: $2h_2 - h_1 - h_3 = 0$ blocks
c) of size $3\times 3$: $2h_3 - h_2 - h_4 = 1$ block
d) of size $4\times 4$: $2h_4 - h_3 - h_5 = 0$ blocks
e) of size $5\times 5$: $2h_5 - h_4 - h_6 = 1$ block
f) of size $6\times 6$: $2h_6 - h_5 - h_7 = 0$ blocks
g) of size $7\times 7$: $2h_7 - h_6 - h_8 = 0$ blocks
h) of size $8\times 8$: $2h_8 - h_7 - h_9 = 0$ blocks

71/77

72 Jordan Form (cont.)

Example (cont.)

Similarly, the numbers of Jordan blocks in $A_2$ are:

a) of size $1\times 1$: $2i_1 - i_0 - i_2 = 2$ blocks
b) of size $2\times 2$: $2i_2 - i_1 - i_3 = 1$ block
c) of size $3\times 3$: $2i_3 - i_2 - i_4 = 0$ blocks
d) of size $4\times 4$: $2i_4 - i_3 - i_5 = 0$ blocks

and the numbers of Jordan blocks in $A_3$ are:

a) of size $1\times 1$: $2l_1 - l_0 - l_2 = 0$ blocks
b) of size $2\times 2$: $2l_2 - l_1 - l_3 = 0$ blocks
c) of size $3\times 3$: $2l_3 - l_2 - l_4 = 1$ block

72/77

73 Jordan Form (cont.)

Example (cont.)

With this information, we know that the Jordan form of $A$ is given by

$$T^{-1}AT = \begin{bmatrix}
J_5(\lambda_1) & & & & & \\
 & J_3(\lambda_1) & & & & \\
 & & J_2(\lambda_2) & & & \\
 & & & \lambda_2 & & \\
 & & & & \lambda_2 & \\
 & & & & & J_3(\lambda_3)
\end{bmatrix} = A_J,$$

where $J_k(\lambda)$ denotes the $k\times k$ Jordan block with eigenvalue $\lambda$.

73/77

74 Finding the Transformation Matrix T

Once the Jordan form is found, it is relatively easy to find the transformation matrix. We illustrate using the previous example. Write $T$ in terms of its columns:

$$T = [\underbrace{t_1\; t_2\; t_3\; t_4\; t_5}_{\text{block 1}}\; \underbrace{t_6\; t_7\; t_8}_{\text{block 2}}\; \underbrace{t_9\; t_{10}}_{\text{block 3}}\; \underbrace{t_{11}}_{\text{block 4}}\; \underbrace{t_{12}}_{\text{block 5}}\; \underbrace{t_{13}\; t_{14}\; t_{15}}_{\text{block 6}}].$$

74/77

75 Finding the Transformation Matrix T (cont.)

Then we have

$$\begin{aligned}
At_1 &= \lambda_1 t_1 &&\text{or}\quad (A-\lambda_1 I)t_1 = 0 \\
At_2 &= \lambda_1 t_2 + t_1 &&\text{or}\quad (A-\lambda_1 I)t_2 = t_1 \\
At_3 &= \lambda_1 t_3 + t_2 &&\text{or}\quad (A-\lambda_1 I)t_3 = t_2 \tag{45} \\
At_4 &= \lambda_1 t_4 + t_3 &&\text{or}\quad (A-\lambda_1 I)t_4 = t_3 \\
At_5 &= \lambda_1 t_5 + t_4 &&\text{or}\quad (A-\lambda_1 I)t_5 = t_4
\end{aligned}$$

as well as

$$\begin{aligned}
At_6 &= \lambda_1 t_6 &&\text{or}\quad (A-\lambda_1 I)t_6 = 0 \\
At_7 &= \lambda_1 t_7 + t_6 &&\text{or}\quad (A-\lambda_1 I)t_7 = t_6 \tag{46} \\
At_8 &= \lambda_1 t_8 + t_7 &&\text{or}\quad (A-\lambda_1 I)t_8 = t_7.
\end{aligned}$$

75/77

76 Finding the Transformation Matrix T (cont.)

Therefore, we need to find two linearly independent vectors $t_1, t_6$ in $N(A-\lambda_1 I)$ such that the chains of vectors $t_1, t_2, t_3, t_4, t_5$ and $t_6, t_7, t_8$ are linearly independent. Alternatively, we can attempt to find $t_5$ such that $(A-\lambda_1 I)^5 t_5 = 0$ and obtain $t_4, t_3, t_2, t_1$ from the set in eq. (45). Similarly, we can find $t_8$ such that $(A-\lambda_1 I)^3 t_8 = 0$ and $t_6, t_7, t_8$ found from eq. (46) are linearly independent.

76/77

77 Finding the Transformation Matrix T (cont.)

Moving to the blocks associated with $\lambda_2$, we see that we need to find $t_9, t_{11}, t_{12}$ so that

$$(A-\lambda_2 I)t_9 = 0, \qquad (A-\lambda_2 I)t_{11} = 0, \qquad (A-\lambda_2 I)t_{12} = 0$$

and $(t_9, t_{10}, t_{11}, t_{12})$ are linearly independent, with

$$At_{10} = \lambda_2 t_{10} + t_9.$$

Likewise, we find $t_{13}$ such that

$$(A-\lambda_3 I)t_{13} = 0, \qquad (A-\lambda_3 I)t_{14} = t_{13}, \qquad (A-\lambda_3 I)t_{15} = t_{14}$$

with $(t_{13}, t_{14}, t_{15})$ linearly independent. In each of the above cases, the existence of the vectors $t_i$ is guaranteed by the existence of the Jordan form.

77/77


More information

Topic # /31 Feedback Control Systems

Topic # /31 Feedback Control Systems Topic #7 16.30/31 Feedback Control Systems State-Space Systems What are the basic properties of a state-space model, and how do we analyze these? Time Domain Interpretations System Modes Fall 2010 16.30/31

More information

SYSTEMTEORI - ÖVNING 1. In this exercise, we will learn how to solve the following linear differential equation:

SYSTEMTEORI - ÖVNING 1. In this exercise, we will learn how to solve the following linear differential equation: SYSTEMTEORI - ÖVNING 1 GIANANTONIO BORTOLIN AND RYOZO NAGAMUNE In this exercise, we will learn how to solve the following linear differential equation: 01 ẋt Atxt, xt 0 x 0, xt R n, At R n n The equation

More information

Jordan normal form notes (version date: 11/21/07)

Jordan normal form notes (version date: 11/21/07) Jordan normal form notes (version date: /2/7) If A has an eigenbasis {u,, u n }, ie a basis made up of eigenvectors, so that Au j = λ j u j, then A is diagonal with respect to that basis To see this, let

More information

Examples include: (a) the Lorenz system for climate and weather modeling (b) the Hodgkin-Huxley system for neuron modeling

Examples include: (a) the Lorenz system for climate and weather modeling (b) the Hodgkin-Huxley system for neuron modeling 1 Introduction Many natural processes can be viewed as dynamical systems, where the system is represented by a set of state variables and its evolution governed by a set of differential equations. Examples

More information

EE5120 Linear Algebra: Tutorial 6, July-Dec Covers sec 4.2, 5.1, 5.2 of GS

EE5120 Linear Algebra: Tutorial 6, July-Dec Covers sec 4.2, 5.1, 5.2 of GS EE0 Linear Algebra: Tutorial 6, July-Dec 07-8 Covers sec 4.,.,. of GS. State True or False with proper explanation: (a) All vectors are eigenvectors of the Identity matrix. (b) Any matrix can be diagonalized.

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

Linear System Theory

Linear System Theory Linear System Theory Wonhee Kim Lecture 4 Apr. 4, 2018 1 / 40 Recap Vector space, linear space, linear vector space Subspace Linearly independence and dependence Dimension, Basis, Change of Basis 2 / 40

More information

Control Systems I. Lecture 4: Diagonalization, Modal Analysis, Intro to Feedback. Readings: Emilio Frazzoli

Control Systems I. Lecture 4: Diagonalization, Modal Analysis, Intro to Feedback. Readings: Emilio Frazzoli Control Systems I Lecture 4: Diagonalization, Modal Analysis, Intro to Feedback Readings: Emilio Frazzoli Institute for Dynamic Systems and Control D-MAVT ETH Zürich October 13, 2017 E. Frazzoli (ETH)

More information

State Variable Analysis of Linear Dynamical Systems

State Variable Analysis of Linear Dynamical Systems Chapter 6 State Variable Analysis of Linear Dynamical Systems 6 Preliminaries In state variable approach, a system is represented completely by a set of differential equations that govern the evolution

More information

Homogeneous Linear Systems of Differential Equations with Constant Coefficients

Homogeneous Linear Systems of Differential Equations with Constant Coefficients Objective: Solve Homogeneous Linear Systems of Differential Equations with Constant Coefficients dx a x + a 2 x 2 + + a n x n, dx 2 a 2x + a 22 x 2 + + a 2n x n,. dx n = a n x + a n2 x 2 + + a nn x n.

More information

Homogeneous and particular LTI solutions

Homogeneous and particular LTI solutions Homogeneous and particular LTI solutions Daniele Carnevale Dipartimento di Ing. Civile ed Ing. Informatica (DICII), University of Rome Tor Vergata Fondamenti di Automatica e Controlli Automatici A.A. 2014-2015

More information

Lecture 4 and 5 Controllability and Observability: Kalman decompositions

Lecture 4 and 5 Controllability and Observability: Kalman decompositions 1 Lecture 4 and 5 Controllability and Observability: Kalman decompositions Spring 2013 - EE 194, Advanced Control (Prof. Khan) January 30 (Wed.) and Feb. 04 (Mon.), 2013 I. OBSERVABILITY OF DT LTI SYSTEMS

More information

Module 07 Controllability and Controller Design of Dynamical LTI Systems

Module 07 Controllability and Controller Design of Dynamical LTI Systems Module 07 Controllability and Controller Design of Dynamical LTI Systems Ahmad F. Taha EE 5143: Linear Systems and Control Email: ahmad.taha@utsa.edu Webpage: http://engineering.utsa.edu/ataha October

More information

vibrations, light transmission, tuning guitar, design buildings and bridges, washing machine, Partial differential problems, water flow,...

vibrations, light transmission, tuning guitar, design buildings and bridges, washing machine, Partial differential problems, water flow,... 6 Eigenvalues Eigenvalues are a common part of our life: vibrations, light transmission, tuning guitar, design buildings and bridges, washing machine, Partial differential problems, water flow, The simplest

More information

Chap 3. Linear Algebra

Chap 3. Linear Algebra Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions

More information

18.06 Problem Set 8 - Solutions Due Wednesday, 14 November 2007 at 4 pm in

18.06 Problem Set 8 - Solutions Due Wednesday, 14 November 2007 at 4 pm in 806 Problem Set 8 - Solutions Due Wednesday, 4 November 2007 at 4 pm in 2-06 08 03 Problem : 205+5+5+5 Consider the matrix A 02 07 a Check that A is a positive Markov matrix, and find its steady state

More information

1. Linear systems of equations. Chapters 7-8: Linear Algebra. Solution(s) of a linear system of equations (continued)

1. Linear systems of equations. Chapters 7-8: Linear Algebra. Solution(s) of a linear system of equations (continued) 1 A linear system of equations of the form Sections 75, 78 & 81 a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2 a m1 x 1 + a m2 x 2 + + a mn x n = b m can be written in matrix

More information

Linear Algebra II Lecture 13

Linear Algebra II Lecture 13 Linear Algebra II Lecture 13 Xi Chen 1 1 University of Alberta November 14, 2014 Outline 1 2 If v is an eigenvector of T : V V corresponding to λ, then v is an eigenvector of T m corresponding to λ m since

More information

Linear Algebra. Matrices Operations. Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0.

Linear Algebra. Matrices Operations. Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0. Matrices Operations Linear Algebra Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0 The rectangular array 1 2 1 4 3 4 2 6 1 3 2 1 in which the

More information

The goal of this chapter is to study linear systems of ordinary differential equations: dt,..., dx ) T

The goal of this chapter is to study linear systems of ordinary differential equations: dt,..., dx ) T 1 1 Linear Systems The goal of this chapter is to study linear systems of ordinary differential equations: ẋ = Ax, x(0) = x 0, (1) where x R n, A is an n n matrix and ẋ = dx ( dt = dx1 dt,..., dx ) T n.

More information

π 1 = tr(a), π n = ( 1) n det(a). In particular, when n = 2 one has

π 1 = tr(a), π n = ( 1) n det(a). In particular, when n = 2 one has Eigen Methods Math 246, Spring 2009, Professor David Levermore Eigenpairs Let A be a real n n matrix A number λ possibly complex is an eigenvalue of A if there exists a nonzero vector v possibly complex

More information

October 4, 2017 EIGENVALUES AND EIGENVECTORS. APPLICATIONS

October 4, 2017 EIGENVALUES AND EIGENVECTORS. APPLICATIONS October 4, 207 EIGENVALUES AND EIGENVECTORS. APPLICATIONS RODICA D. COSTIN Contents 4. Eigenvalues and Eigenvectors 3 4.. Motivation 3 4.2. Diagonal matrices 3 4.3. Example: solving linear differential

More information

Linear Algebra Primer

Linear Algebra Primer Linear Algebra Primer David Doria daviddoria@gmail.com Wednesday 3 rd December, 2008 Contents Why is it called Linear Algebra? 4 2 What is a Matrix? 4 2. Input and Output.....................................

More information

Math Ordinary Differential Equations

Math Ordinary Differential Equations Math 411 - Ordinary Differential Equations Review Notes - 1 1 - Basic Theory A first order ordinary differential equation has the form x = f(t, x) (11) Here x = dx/dt Given an initial data x(t 0 ) = x

More information

Linear Algebra Exercises

Linear Algebra Exercises 9. 8.03 Linear Algebra Exercises 9A. Matrix Multiplication, Rank, Echelon Form 9A-. Which of the following matrices is in row-echelon form? 2 0 0 5 0 (i) (ii) (iii) (iv) 0 0 0 (v) [ 0 ] 0 0 0 0 0 0 0 9A-2.

More information

Perspective. ECE 3640 Lecture 11 State-Space Analysis. To learn about state-space analysis for continuous and discrete-time. Objective: systems

Perspective. ECE 3640 Lecture 11 State-Space Analysis. To learn about state-space analysis for continuous and discrete-time. Objective: systems ECE 3640 Lecture State-Space Analysis Objective: systems To learn about state-space analysis for continuous and discrete-time Perspective Transfer functions provide only an input/output perspective of

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

Lecture 15 Review of Matrix Theory III. Dr. Radhakant Padhi Asst. Professor Dept. of Aerospace Engineering Indian Institute of Science - Bangalore

Lecture 15 Review of Matrix Theory III. Dr. Radhakant Padhi Asst. Professor Dept. of Aerospace Engineering Indian Institute of Science - Bangalore Lecture 15 Review of Matrix Theory III Dr. Radhakant Padhi Asst. Professor Dept. of Aerospace Engineering Indian Institute of Science - Bangalore Matrix An m n matrix is a rectangular or square array of

More information

Linear dynamical systems with inputs & outputs

Linear dynamical systems with inputs & outputs EE263 Autumn 215 S. Boyd and S. Lall Linear dynamical systems with inputs & outputs inputs & outputs: interpretations transfer function impulse and step responses examples 1 Inputs & outputs recall continuous-time

More information

September 26, 2017 EIGENVALUES AND EIGENVECTORS. APPLICATIONS

September 26, 2017 EIGENVALUES AND EIGENVECTORS. APPLICATIONS September 26, 207 EIGENVALUES AND EIGENVECTORS. APPLICATIONS RODICA D. COSTIN Contents 4. Eigenvalues and Eigenvectors 3 4.. Motivation 3 4.2. Diagonal matrices 3 4.3. Example: solving linear differential

More information

Notes on the matrix exponential

Notes on the matrix exponential Notes on the matrix exponential Erik Wahlén erik.wahlen@math.lu.se February 14, 212 1 Introduction The purpose of these notes is to describe how one can compute the matrix exponential e A when A is not

More information

Discrete and continuous dynamic systems

Discrete and continuous dynamic systems Discrete and continuous dynamic systems Bounded input bounded output (BIBO) and asymptotic stability Continuous and discrete time linear time-invariant systems Katalin Hangos University of Pannonia Faculty

More information

ACM 104. Homework Set 5 Solutions. February 21, 2001

ACM 104. Homework Set 5 Solutions. February 21, 2001 ACM 04 Homework Set 5 Solutions February, 00 Franklin Chapter 4, Problem 4, page 0 Let A be an n n non-hermitian matrix Suppose that A has distinct eigenvalues λ,, λ n Show that A has the eigenvalues λ,,

More information

Periodic Linear Systems

Periodic Linear Systems Periodic Linear Systems Lecture 17 Math 634 10/8/99 We now consider ẋ = A(t)x (1) when A is a continuous periodic n n matrix function of t; i.e., when there is a constant T>0 such that A(t + T )=A(t) for

More information

Maths for Signals and Systems Linear Algebra in Engineering

Maths for Signals and Systems Linear Algebra in Engineering Maths for Signals and Systems Linear Algebra in Engineering Lectures 13 15, Tuesday 8 th and Friday 11 th November 016 DR TANIA STATHAKI READER (ASSOCIATE PROFFESOR) IN SIGNAL PROCESSING IMPERIAL COLLEGE

More information

Eigenvalues and Eigenvectors. Review: Invertibility. Eigenvalues and Eigenvectors. The Finite Dimensional Case. January 18, 2018

Eigenvalues and Eigenvectors. Review: Invertibility. Eigenvalues and Eigenvectors. The Finite Dimensional Case. January 18, 2018 January 18, 2018 Contents 1 2 3 4 Review 1 We looked at general determinant functions proved that they are all multiples of a special one, called det f (A) = f (I n ) det A. Review 1 We looked at general

More information

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2 Contents Preface for the Instructor xi Preface for the Student xv Acknowledgments xvii 1 Vector Spaces 1 1.A R n and C n 2 Complex Numbers 2 Lists 5 F n 6 Digression on Fields 10 Exercises 1.A 11 1.B Definition

More information

Modal Decomposition and the Time-Domain Response of Linear Systems 1

Modal Decomposition and the Time-Domain Response of Linear Systems 1 MASSACHUSETTS INSTITUTE OF TECHNOLOGY DEPARTMENT OF MECHANICAL ENGINEERING.151 Advanced System Dynamics and Control Modal Decomposition and the Time-Domain Response of Linear Systems 1 In a previous handout

More information

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p Math Bootcamp 2012 1 Review of matrix algebra 1.1 Vectors and rules of operations An p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the

More information

6 Linear Equation. 6.1 Equation with constant coefficients

6 Linear Equation. 6.1 Equation with constant coefficients 6 Linear Equation 6.1 Equation with constant coefficients Consider the equation ẋ = Ax, x R n. This equating has n independent solutions. If the eigenvalues are distinct then the solutions are c k e λ

More information

Module 03 Linear Systems Theory: Necessary Background

Module 03 Linear Systems Theory: Necessary Background Module 03 Linear Systems Theory: Necessary Background Ahmad F. Taha EE 5243: Introduction to Cyber-Physical Systems Email: ahmad.taha@utsa.edu Webpage: http://engineering.utsa.edu/ taha/index.html September

More information

MIT Final Exam Solutions, Spring 2017

MIT Final Exam Solutions, Spring 2017 MIT 8.6 Final Exam Solutions, Spring 7 Problem : For some real matrix A, the following vectors form a basis for its column space and null space: C(A) = span,, N(A) = span,,. (a) What is the size m n of

More information

EE263: Introduction to Linear Dynamical Systems Review Session 6

EE263: Introduction to Linear Dynamical Systems Review Session 6 EE263: Introduction to Linear Dynamical Systems Review Session 6 Outline diagonalizability eigen decomposition theorem applications (modal forms, asymptotic growth rate) EE263 RS6 1 Diagonalizability consider

More information

Math Camp Notes: Linear Algebra II

Math Camp Notes: Linear Algebra II Math Camp Notes: Linear Algebra II Eigenvalues Let A be a square matrix. An eigenvalue is a number λ which when subtracted from the diagonal elements of the matrix A creates a singular matrix. In other

More information

(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax =

(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax = . (5 points) (a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? dim N(A), since rank(a) 3. (b) If we also know that Ax = has no solution, what do we know about the rank of A? C(A)

More information

Numerical Linear Algebra Homework Assignment - Week 2

Numerical Linear Algebra Homework Assignment - Week 2 Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.

More information

Robust Control 2 Controllability, Observability & Transfer Functions

Robust Control 2 Controllability, Observability & Transfer Functions Robust Control 2 Controllability, Observability & Transfer Functions Harry G. Kwatny Department of Mechanical Engineering & Mechanics Drexel University /26/24 Outline Reachable Controllability Distinguishable

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

Lecture 10 - Eigenvalues problem

Lecture 10 - Eigenvalues problem Lecture 10 - Eigenvalues problem Department of Computer Science University of Houston February 28, 2008 1 Lecture 10 - Eigenvalues problem Introduction Eigenvalue problems form an important class of problems

More information

Chapter 2: Time Series Modelling. Chapter 3: Discrete Models. Notes. Notes. Intro to the Topic Time Series Discrete Models Growth and Decay

Chapter 2: Time Series Modelling. Chapter 3: Discrete Models. Notes. Notes. Intro to the Topic Time Series Discrete Models Growth and Decay Chapter 2: Modelling 6 / 71 Chapter 3: 7 / 71 Glossary of Terms Here are some of the types of symbols you will see in this section: Name Symbol Face Examples Vector Lower-case bold u, v, p Matrix Upper-case

More information

Characteristic Polynomial

Characteristic Polynomial Linear Algebra Massoud Malek Characteristic Polynomial Preleminary Results Let A = (a ij ) be an n n matrix If Au = λu, then λ and u are called the eigenvalue and eigenvector of A, respectively The eigenvalues

More information

Differential Equations and Modeling

Differential Equations and Modeling Differential Equations and Modeling Preliminary Lecture Notes Adolfo J. Rumbos c Draft date: March 22, 2018 March 22, 2018 2 Contents 1 Preface 5 2 Introduction to Modeling 7 2.1 Constructing Models.........................

More information

Diagonalization of Matrix

Diagonalization of Matrix of Matrix King Saud University August 29, 2018 of Matrix Table of contents 1 2 of Matrix Definition If A M n (R) and λ R. We say that λ is an eigenvalue of the matrix A if there is X R n \ {0} such that

More information

Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008

Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008 Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008 Exam 2 will be held on Tuesday, April 8, 7-8pm in 117 MacMillan What will be covered The exam will cover material from the lectures

More information

DIAGONALIZATION. In order to see the implications of this definition, let us consider the following example Example 1. Consider the matrix

DIAGONALIZATION. In order to see the implications of this definition, let us consider the following example Example 1. Consider the matrix DIAGONALIZATION Definition We say that a matrix A of size n n is diagonalizable if there is a basis of R n consisting of eigenvectors of A ie if there are n linearly independent vectors v v n such that

More information

Econ Slides from Lecture 7

Econ Slides from Lecture 7 Econ 205 Sobel Econ 205 - Slides from Lecture 7 Joel Sobel August 31, 2010 Linear Algebra: Main Theory A linear combination of a collection of vectors {x 1,..., x k } is a vector of the form k λ ix i for

More information

Study Guide for Linear Algebra Exam 2

Study Guide for Linear Algebra Exam 2 Study Guide for Linear Algebra Exam 2 Term Vector Space Definition A Vector Space is a nonempty set V of objects, on which are defined two operations, called addition and multiplication by scalars (real

More information

Generalized Eigenvectors and Jordan Form

Generalized Eigenvectors and Jordan Form Generalized Eigenvectors and Jordan Form We have seen that an n n matrix A is diagonalizable precisely when the dimensions of its eigenspaces sum to n. So if A is not diagonalizable, there is at least

More information

Eigenpairs and Diagonalizability Math 401, Spring 2010, Professor David Levermore

Eigenpairs and Diagonalizability Math 401, Spring 2010, Professor David Levermore Eigenpairs and Diagonalizability Math 40, Spring 200, Professor David Levermore Eigenpairs Let A be an n n matrix A number λ possibly complex even when A is real is an eigenvalue of A if there exists a

More information

Linear Algebra 1. M.T.Nair Department of Mathematics, IIT Madras. and in that case x is called an eigenvector of T corresponding to the eigenvalue λ.

Linear Algebra 1. M.T.Nair Department of Mathematics, IIT Madras. and in that case x is called an eigenvector of T corresponding to the eigenvalue λ. Linear Algebra 1 M.T.Nair Department of Mathematics, IIT Madras 1 Eigenvalues and Eigenvectors 1.1 Definition and Examples Definition 1.1. Let V be a vector space (over a field F) and T : V V be a linear

More information

16.31 Fall 2005 Lecture Presentation Mon 31-Oct-05 ver 1.1

16.31 Fall 2005 Lecture Presentation Mon 31-Oct-05 ver 1.1 16.31 Fall 2005 Lecture Presentation Mon 31-Oct-05 ver 1.1 Charles P. Coleman October 31, 2005 1 / 40 : Controllability Tests Observability Tests LEARNING OUTCOMES: Perform controllability tests Perform

More information

The matrix will only be consistent if the last entry of row three is 0, meaning 2b 3 + b 2 b 1 = 0.

The matrix will only be consistent if the last entry of row three is 0, meaning 2b 3 + b 2 b 1 = 0. ) Find all solutions of the linear system. Express the answer in vector form. x + 2x + x + x 5 = 2 2x 2 + 2x + 2x + x 5 = 8 x + 2x + x + 9x 5 = 2 2 Solution: Reduce the augmented matrix [ 2 2 2 8 ] to

More information

Linear Algebra II Lecture 22

Linear Algebra II Lecture 22 Linear Algebra II Lecture 22 Xi Chen University of Alberta March 4, 24 Outline Characteristic Polynomial, Eigenvalue, Eigenvector and Eigenvalue, Eigenvector and Let T : V V be a linear endomorphism. We

More information

Putzer s Algorithm. Norman Lebovitz. September 8, 2016

Putzer s Algorithm. Norman Lebovitz. September 8, 2016 Putzer s Algorithm Norman Lebovitz September 8, 2016 1 Putzer s algorithm The differential equation dx = Ax, (1) dt where A is an n n matrix of constants, possesses the fundamental matrix solution exp(at),

More information

ME8281-Advanced Control Systems Design

ME8281-Advanced Control Systems Design ME8281 - Advanced Control Systems Design Spring 2016 Perry Y. Li Department of Mechanical Engineering University of Minnesota Spring 2016 Lecture 4 - Outline 1 Homework 1 to be posted by tonight 2 Transition

More information

Systems of Second Order Differential Equations Cayley-Hamilton-Ziebur

Systems of Second Order Differential Equations Cayley-Hamilton-Ziebur Systems of Second Order Differential Equations Cayley-Hamilton-Ziebur Characteristic Equation Cayley-Hamilton Cayley-Hamilton Theorem An Example Euler s Substitution for u = A u The Cayley-Hamilton-Ziebur

More information

MTH 102: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur. Problem Set

MTH 102: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur. Problem Set MTH 102: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur Problem Set 6 Problems marked (T) are for discussions in Tutorial sessions. 1. Find the eigenvalues

More information

THE CAYLEY HAMILTON AND FROBENIUS THEOREMS VIA THE LAPLACE TRANSFORM

THE CAYLEY HAMILTON AND FROBENIUS THEOREMS VIA THE LAPLACE TRANSFORM THE CAYLEY HAMILTON AND FROBENIUS THEOREMS VIA THE LAPLACE TRANSFORM WILLIAM A. ADKINS AND MARK G. DAVIDSON Abstract. The Cayley Hamilton theorem on the characteristic polynomial of a matrix A and Frobenius

More information

Controllability, Observability, Full State Feedback, Observer Based Control

Controllability, Observability, Full State Feedback, Observer Based Control Multivariable Control Lecture 4 Controllability, Observability, Full State Feedback, Observer Based Control John T. Wen September 13, 24 Ref: 3.2-3.4 of Text Controllability ẋ = Ax + Bu; x() = x. At time

More information