21 Linear State-Space Representations
ME 132, Spring 2005, UC Berkeley, A. Packard

First, let's describe the most general type of dynamic system that we will consider/encounter in this class. Systems may have many inputs and many outputs. Specifically, there are m inputs (denoted by d) and q outputs (denoted by e), and the input/output behavior is governed by a set of n first-order, coupled differential equations, of the form

    \dot{x}_1(t) = f_1(x_1(t), x_2(t), \ldots, x_n(t), d_1(t), d_2(t), \ldots, d_m(t), t)
    \dot{x}_2(t) = f_2(x_1(t), x_2(t), \ldots, x_n(t), d_1(t), d_2(t), \ldots, d_m(t), t)
        \vdots
    \dot{x}_n(t) = f_n(x_1(t), x_2(t), \ldots, x_n(t), d_1(t), d_2(t), \ldots, d_m(t), t)
    e_1(t) = h_1(x_1(t), x_2(t), \ldots, x_n(t), d_1(t), d_2(t), \ldots, d_m(t), t)
    e_2(t) = h_2(x_1(t), x_2(t), \ldots, x_n(t), d_1(t), d_2(t), \ldots, d_m(t), t)
        \vdots
    e_q(t) = h_q(x_1(t), x_2(t), \ldots, x_n(t), d_1(t), d_2(t), \ldots, d_m(t), t)        (79)

where the functions f_i, h_i are given functions of the n variables x_1, x_2, \ldots, x_n, the m variables d_1, d_2, \ldots, d_m, and also explicit functions of t. For shorthand, we write (79) as

    \dot{x}(t) = f(x(t), d(t), t)
    e(t) = h(x(t), d(t), t)        (80)

Given an initial condition vector x_0 and a forcing function d(t) for t \ge t_0, we wish to solve for the solutions

    x(t) = [ x_1(t) ; x_2(t) ; \ldots ; x_n(t) ],   e(t) = [ e_1(t) ; e_2(t) ; \ldots ; e_q(t) ]

on the interval [t_0, t_F], given the initial condition x(t_0) = x_0 and the input forcing function d(\cdot). If the functions f and h are reasonably well behaved in x and t, then the solution exists, is continuous, and is differentiable at all points.

21.1 Linearity of solution

Consider a vector differential equation

    \dot{x}(t) = A(t) x(t) + B(t) d(t),   x(t_0) = x_0        (81)
where for each t, A(t) \in R^{n \times n} and B(t) \in R^{n \times m}, and for each time t, x(t) \in R^n and d(t) \in R^m.

Claim: The solution function x(\cdot), on any interval [t_0, t_1], is a linear function of the pair (x_0, d(t)|_{t_0 \le t \le t_1}).

The precise meaning of this statement is as follows: pick any constants \alpha, \beta \in R. If x_1 is the solution to (81) starting from initial condition x_{1,0} (at t_0) and forced with input d_1, and if x_2 is the solution to (81) starting from initial condition x_{2,0} (at t_0) and forced with input d_2, then \alpha x_1 + \beta x_2 is the solution to (81) starting from initial condition \alpha x_{1,0} + \beta x_{2,0} (at t_0) and forced with input \alpha d_1(t) + \beta d_2(t).

This can be easily checked. Note that for every time t, we have

    \dot{x}_1(t) = A(t) x_1(t) + B(t) d_1(t)
    \dot{x}_2(t) = A(t) x_2(t) + B(t) d_2(t)

Multiply the first equation by the constant \alpha and the second equation by \beta, and add them together, giving

    d/dt [ \alpha x_1(t) + \beta x_2(t) ] = \alpha \dot{x}_1(t) + \beta \dot{x}_2(t)
        = \alpha [ A(t) x_1(t) + B(t) d_1(t) ] + \beta [ A(t) x_2(t) + B(t) d_2(t) ]
        = A(t) [ \alpha x_1(t) + \beta x_2(t) ] + B(t) [ \alpha d_1(t) + \beta d_2(t) ]

which shows that the linear combination \alpha x_1 + \beta x_2 does indeed solve the differential equation. The initial condition is easily checked. Finally, the existence and uniqueness theorem for differential equations tells us that this (\alpha x_1 + \beta x_2) is the only solution which satisfies both the differential equation and the initial conditions.

Linearity of the solution in the pair (initial condition, forcing function) is often called the principle of superposition, and is an extremely useful property of linear systems.

21.2 Time-Invariance

A separate issue, unrelated to linearity, is time-invariance. A system described by \dot{x}(t) = f(x(t), d(t), t) is called time-invariant if (roughly) the behavior of the system does not depend explicitly on the absolute time. In other words, shifting the time axis does not affect solutions. Precisely, suppose that x_1 is a solution for the system, starting from x_0 at t = t_0, subject to the forcing d_1(t), defined for t \ge t_0.
Now, let \tilde{x} be the solution to the equations starting from x_0 at t = t_0 + \Delta, subject to the forcing \tilde{d}(t) := d_1(t - \Delta), defined for t \ge t_0 + \Delta. Suppose that for all choices of t_0, \Delta, x_0 and d_1(\cdot), the two responses are related by \tilde{x}(t) = x_1(t - \Delta) for all t \ge t_0 + \Delta. Then the system described by \dot{x}(t) = f(x(t), d(t), t) is called time-invariant.

In practice, the easiest manner to recognize time-invariance is that the right-hand side of the state equations (the first-order differential equations governing the process) does not explicitly depend on time. For instance, the system

    \dot{x}_1(t) = 2 x_2(t) \sin[ x_1(t) x_2(t) d_2(t) ]
    \dot{x}_2(t) = -x_2(t) + x_2(t) d_1(t)

is nonlinear, yet time-invariant.

For the next few sections, we focus entirely on systems that are linear and time-invariant. This is no different from the systems governed by our standard ODE model, introduced in Section 7; however, the representation is different, namely lots of coupled first-order equations, rather than one big n-th order equation. We will derive all of the same ideas from this perspective: general solution, stability, frequency response, transfer function.
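The superposition property of Section 21.1 can be checked numerically. The sketch below is an illustration, not part of the notes: it assumes NumPy is available, picks an arbitrary stable A and B and two arbitrary (initial condition, input) pairs, integrates \dot{x} = Ax + Bd with a simple forward-Euler loop, and confirms that the response from \alpha x_{1,0} + \beta x_{2,0} forced with \alpha d_1 + \beta d_2 equals \alpha times the first response plus \beta times the second.

```python
import numpy as np

def euler_sim(A, B, x0, d, ts):
    # Forward-Euler integration of x'(t) = A x(t) + B d(t)
    x = np.array(x0, dtype=float)
    xs = [x.copy()]
    for k in range(len(ts) - 1):
        h = ts[k + 1] - ts[k]
        x = x + h * (A @ x + B @ d(ts[k]))
        xs.append(x.copy())
    return np.array(xs)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # arbitrary stable example
B = np.array([[0.0], [1.0]])
ts = np.linspace(0.0, 5.0, 2001)

d1 = lambda t: np.array([np.sin(t)])       # arbitrary input 1
d2 = lambda t: np.array([1.0])             # arbitrary input 2
x10 = np.array([1.0, 0.0])
x20 = np.array([0.0, -1.0])
alpha, beta = 2.0, -0.5

xa = euler_sim(A, B, x10, d1, ts)
xb = euler_sim(A, B, x20, d2, ts)
xc = euler_sim(A, B, alpha * x10 + beta * x20,
               lambda t: alpha * d1(t) + beta * d2(t), ts)

# Superposition: combined response equals the combination of responses.
err = np.max(np.abs(xc - (alpha * xa + beta * xb)))
print(err)
```

Because the Euler recursion is itself linear in (x_0, d), the agreement here is exact up to floating-point rounding, independent of the step size.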
21.3 Matrix Exponential

Recall that for the scalar differential equation

    \dot{x}(t) = a x(t) + b u(t),   x(t_0) = x_0

the solution for t \ge t_0 is given by the formula

    x(t) = e^{a(t - t_0)} x_0 + \int_{t_0}^{t} e^{a(t - \tau)} b u(\tau) d\tau

What makes this work is the special structure of the exponential function, namely that

    d/dt e^{at} = a e^{at}

Now consider a vector differential equation

    \dot{x}(t) = A x(t) + B u(t),   x(t_0) = x_0        (82)

where A \in R^{n \times n} and B \in R^{n \times m} are constant matrices, and for each time t, x(t) \in R^n and u(t) \in R^m. The solution can be derived by proceeding analogously to the scalar case. For a matrix A \in C^{n \times n}, define a matrix function of time, e^{At} \in C^{n \times n}, as

    e^{At} := \sum_{k=0}^{\infty} \frac{t^k}{k!} A^k = I + tA + \frac{t^2}{2!} A^2 + \frac{t^3}{3!} A^3 + \cdots

This is exactly the same as the definition in the scalar case. Now, for any T > 0, every element of this matrix power series converges absolutely and uniformly on the interval [0, T]. Hence, it can be differentiated term-by-term to correctly get the derivative. Therefore,

    d/dt e^{At} = \sum_{k=1}^{\infty} \frac{k t^{k-1}}{k!} A^k = A + t A^2 + \frac{t^2}{2!} A^3 + \cdots = A \left( I + tA + \frac{t^2}{2!} A^2 + \cdots \right) = A e^{At}

This is the most important property of the function e^{At}. Also, in deriving this result, A could have been pulled out of the summation on either side. Summarizing these important identities:

    e^{A \cdot 0} = I_n,   d/dt e^{At} = A e^{At} = e^{At} A

So, the matrix exponential has properties similar to the scalar exponential function. However, there are two important facts to watch out for:
WARNING: Let a_{ij} denote the (i,j)-th entry of A. The (i,j)-th entry of e^{At} IS NOT EQUAL TO e^{a_{ij} t}. This is most convincingly seen with a nontrivial example. Consider

    A = [ 1  1 ; 0  1 ]

A few calculations show that

    A^2 = [ 1  2 ; 0  1 ],   A^3 = [ 1  3 ; 0  1 ],   \ldots,   A^k = [ 1  k ; 0  1 ]

The definition for e^{At} is

    e^{At} = I + tA + \frac{t^2}{2!} A^2 + \frac{t^3}{3!} A^3 + \cdots

Plugging in, and evaluating on a term-by-term basis, gives

    e^{At} = [ 1 + t + \frac{t^2}{2!} + \frac{t^3}{3!} + \cdots    t + 2\frac{t^2}{2!} + 3\frac{t^3}{3!} + \cdots ;  0    1 + t + \frac{t^2}{2!} + \frac{t^3}{3!} + \cdots ]

The (1,1) and (2,2) entries are easily seen to be the power series for e^t, and the (2,1) entry is clearly 0. After a few manipulations, the (1,2) entry is t e^t. Hence, in this case,

    e^{At} = [ e^t   t e^t ; 0   e^t ]

which is very different from the element-by-element exponentiation of At:

    [ e^{a_{11} t}  e^{a_{12} t} ; e^{a_{21} t}  e^{a_{22} t} ] = [ e^t  e^t ; 1  e^t ]

WARNING: In general,

    e^{(A_1 + A_2) t} \ne e^{A_1 t} e^{A_2 t}

unless t = 0 (trivial) or A_1 A_2 = A_2 A_1. However, the identity

    e^{A(t_1 + t_2)} = e^{A t_1} e^{A t_2}

is always true. Hence e^{At} e^{-At} = e^{-At} e^{At} = I for all matrices A and all t, and therefore, for all A and t, e^{At} is invertible.
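Both warnings, and the power-series definition itself, can be checked numerically. A sketch (assuming NumPy and SciPy; the 2x2 matrices are the one from the first warning plus an arbitrary non-commuting pair):

```python
import numpy as np
from scipy.linalg import expm

def expm_series(A, t, K=30):
    # Truncated power series: e^{At} ~ sum_{k=0}^{K} (tA)^k / k!
    S = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, K + 1):
        term = term @ (t * A) / k   # (tA)^k / k!
        S = S + term
    return S

# Warning 1: the matrix exponential is NOT elementwise exponentiation.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
t = 1.0
E = expm(A * t)
closed_form = np.array([[np.exp(t), t * np.exp(t)],
                        [0.0,       np.exp(t)]])      # from the text
err_series = np.max(np.abs(expm_series(A, t) - E))
err_closed = np.max(np.abs(E - closed_form))
elementwise_gap = np.max(np.abs(E - np.exp(A * t)))   # exp of each entry

# Warning 2: e^{(A1+A2)t} != e^{A1 t} e^{A2 t} when A1 A2 != A2 A1 ...
A1 = np.array([[0.0, 1.0], [0.0, 0.0]])
A2 = np.array([[0.0, 0.0], [1.0, 0.0]])
noncommuting_gap = np.max(np.abs(expm((A1 + A2) * t)
                                 - expm(A1 * t) @ expm(A2 * t)))

# ... but e^{A(t1+t2)} = e^{A t1} e^{A t2} always holds.
t1, t2 = 0.3, 0.9
semigroup_gap = np.max(np.abs(expm(A * (t1 + t2))
                              - expm(A * t1) @ expm(A * t2)))
print(err_series, err_closed, elementwise_gap, noncommuting_gap, semigroup_gap)
```

The first three gaps confirm the series, the closed form e^{At} = [e^t, t e^t; 0, e^t], and that elementwise exponentiation is wrong; the last two show the failure of the sum identity for non-commuting matrices and the validity of the time-additivity identity.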
21.3.1 Diagonal A

If A \in C^{n \times n} is diagonal, then e^{At} is easy to compute. Specifically, if A is diagonal, it is easy to see that for any k,

    A = diag(\beta_1, \beta_2, \ldots, \beta_n)   \Rightarrow   A^k = diag(\beta_1^k, \beta_2^k, \ldots, \beta_n^k)

In the power series definition for e^{At}, any off-diagonal terms are identically zero, and the i-th diagonal term is simply

    [ e^{At} ]_{ii} = 1 + t \beta_i + \frac{t^2}{2!} \beta_i^2 + \frac{t^3}{3!} \beta_i^3 + \cdots = e^{\beta_i t}

21.3.2 Block Diagonal A

If A_1 \in C^{n_1 \times n_1} and A_2 \in C^{n_2 \times n_2}, define

    A := [ A_1  0 ; 0  A_2 ]

Question: How is e^{At} related to e^{A_1 t} and e^{A_2 t}? Very simply: note that for any k,

    A^k = [ A_1^k  0 ; 0  A_2^k ]

Hence, the power series definition for e^{At} gives

    e^{At} = [ e^{A_1 t}  0 ; 0  e^{A_2 t} ]

21.3.3 Effect of Similarity Transformations

For any invertible T \in C^{n \times n},

    e^{T^{-1} A T t} = T^{-1} e^{At} T

Equivalently,

    T e^{T^{-1} A T t} T^{-1} = e^{At}

This can easily be shown from the power series definition:

    e^{T^{-1} A T t} = I + t (T^{-1} A T) + \frac{t^2}{2!} (T^{-1} A T)^2 + \frac{t^3}{3!} (T^{-1} A T)^3 + \cdots
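The block-diagonal and similarity-transformation identities are easy to confirm numerically. A small sketch (assumes NumPy and SciPy; the matrices are random, arbitrary examples):

```python
import numpy as np
from scipy.linalg import expm, block_diag

rng = np.random.default_rng(0)

# Block-diagonal A: e^{At} is block diagonal with blocks e^{A1 t}, e^{A2 t}.
A1 = rng.standard_normal((2, 2))
A2 = rng.standard_normal((3, 3))
A = block_diag(A1, A2)
t = 0.5
gap_block = np.max(np.abs(expm(A * t)
                          - block_diag(expm(A1 * t), expm(A2 * t))))

# Similarity: e^{T^{-1} A T t} = T^{-1} e^{At} T for any invertible T.
T = rng.standard_normal((5, 5))   # a random T is invertible w.p. 1
Ti = np.linalg.inv(T)
gap_sim = np.max(np.abs(expm(Ti @ A @ T * t) - Ti @ expm(A * t) @ T))
print(gap_block, gap_sim)
```

Both gaps are at the level of floating-point roundoff.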
It is easy to verify that for every integer k,

    (T^{-1} A T)^k = T^{-1} A^k T

Hence, we have (with I written as T^{-1} I T)

    e^{T^{-1} A T t} = T^{-1} I T + t T^{-1} A T + \frac{t^2}{2!} T^{-1} A^2 T + \frac{t^3}{3!} T^{-1} A^3 T + \cdots

Pull T^{-1} out on the left side, and T out on the right, to give

    e^{T^{-1} A T t} = T^{-1} \left( I + tA + \frac{t^2}{2!} A^2 + \frac{t^3}{3!} A^3 + \cdots \right) T

which is simply

    e^{T^{-1} A T t} = T^{-1} e^{At} T

as desired.

21.3.4 Examples

Given \beta \in R, define

    A := [ 0  \beta ; -\beta  0 ]

Calculate

    A^2 = [ -\beta^2  0 ; 0  -\beta^2 ]

and hence for any k,

    A^{2k} = [ (-1)^k \beta^{2k}  0 ; 0  (-1)^k \beta^{2k} ],   A^{2k+1} = [ 0  (-1)^k \beta^{2k+1} ; (-1)^{k+1} \beta^{2k+1}  0 ]

Therefore, we can write out the first few terms of the power series for e^{At} as

    e^{At} = [ 1 - \frac{1}{2!}\beta^2 t^2 + \frac{1}{4!}\beta^4 t^4 - \cdots    \beta t - \frac{1}{3!}\beta^3 t^3 + \frac{1}{5!}\beta^5 t^5 - \cdots ;  -\beta t + \frac{1}{3!}\beta^3 t^3 - \frac{1}{5!}\beta^5 t^5 + \cdots    1 - \frac{1}{2!}\beta^2 t^2 + \frac{1}{4!}\beta^4 t^4 - \cdots ]

which is recognized as

    e^{At} = [ \cos \beta t   \sin \beta t ; -\sin \beta t   \cos \beta t ]

Similarly, suppose \alpha, \beta \in R, and

    A := [ \alpha  \beta ; -\beta  \alpha ]

Then, A can be decomposed as A = A_1 + A_2, where

    A_1 := [ \alpha  0 ; 0  \alpha ],   A_2 := [ 0  \beta ; -\beta  0 ]

Note: in this special case, A_1 A_2 = A_2 A_1, hence e^{(A_1 + A_2)t} = e^{A_1 t} e^{A_2 t}. Since A_1 is diagonal, we know e^{A_1 t}, and e^{A_2 t} follows from our previous example. Hence

    e^{At} = [ e^{\alpha t} \cos \beta t   e^{\alpha t} \sin \beta t ; -e^{\alpha t} \sin \beta t   e^{\alpha t} \cos \beta t ]

This is an important case to remember. Finally, suppose \lambda \in F, and

    A = [ \lambda  1 ; 0  \lambda ]

We did a similar example in the first Warning of Section 21.3. A few calculations show that

    A^k = [ \lambda^k  k \lambda^{k-1} ; 0  \lambda^k ]

Hence, the power series gives

    e^{At} = [ e^{\lambda t}   t e^{\lambda t} ; 0   e^{\lambda t} ]
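All three worked examples above have closed forms that can be checked against a library matrix exponential. A sketch (assumes NumPy and SciPy; the numeric values of \alpha, \beta, \lambda, t are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

# Example 1: A = [[0, beta], [-beta, 0]] gives a rotation matrix.
beta = 1.3
A = np.array([[0.0, beta], [-beta, 0.0]])
gap_rot = 0.0
for t in (0.0, 0.4, 2.0):
    R = np.array([[ np.cos(beta * t), np.sin(beta * t)],
                  [-np.sin(beta * t), np.cos(beta * t)]])
    gap_rot = max(gap_rot, np.max(np.abs(expm(A * t) - R)))

# Example 2: adding alpha*I (which commutes) just multiplies by e^{alpha t}.
alpha = -0.2
t = 1.1
gap_spiral = np.max(np.abs(expm((alpha * np.eye(2) + A) * t)
                           - np.exp(alpha * t) * expm(A * t)))

# Example 3: A = [[lam, 1], [0, lam]] gives e^{At} = e^{lam t} [[1, t], [0, 1]].
lam = -0.5
J = np.array([[lam, 1.0], [0.0, lam]])
gap_jordan = np.max(np.abs(expm(J * t)
                           - np.exp(lam * t) * np.array([[1.0, t],
                                                         [0.0, 1.0]])))
print(gap_rot, gap_spiral, gap_jordan)
```

Each gap is at the level of machine precision, matching the hand-derived series results.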
21.4 Problems

1. Determine e^{At} for the matrices

    A = \ldots

Plot the solution to \dot{x}(t) = A x(t) from the initial conditions x(0) = \ldots (four initial conditions). Plot this in two different manners: first, two plots (using subplot) of x_1(t) versus t and x_2(t) versus t; and in another single plot, x_2 versus x_1. In each plot, there should be 4 lines. Use different linetypes (or colors) and make sure they are consistent across all of the figures.
Solution to State Equations

The matrix exponential is extremely important in the solution of the vector differential equation

    \dot{x}(t) = A x(t) + B u(t)        (83)

starting from the initial condition x(t_0) = x_0. Now, consider the original equation in (83). We can derive the solution to the forced equation using the integrating factor method, proceeding in the same manner as in the scalar case, with extra care for the matrix-vector operations. Suppose a function x satisfies (83). Multiply both sides by e^{-At} to give

    e^{-At} \dot{x}(t) = e^{-At} A x(t) + e^{-At} B u(t)

Move one term to the left, leaving

    e^{-At} B u(t) = e^{-At} \dot{x}(t) - e^{-At} A x(t) = e^{-At} \dot{x}(t) - A e^{-At} x(t) = d/dt \left[ e^{-At} x(t) \right]        (84)

Since these two functions are equal at every time, we can integrate them over the interval [t_0, t_1]. Note that the right-hand side of (84) is an exact derivative, giving

    \int_{t_0}^{t_1} e^{-At} B u(t) dt = \int_{t_0}^{t_1} d/dt \left[ e^{-At} x(t) \right] dt = e^{-A t_1} x(t_1) - e^{-A t_0} x(t_0)

Note that x(t_0) = x_0. Also, multiply both sides by e^{A t_1}, to yield

    e^{A t_1} \int_{t_0}^{t_1} e^{-At} B u(t) dt = x(t_1) - e^{A(t_1 - t_0)} x_0

This is rearranged into

    x(t_1) = e^{A(t_1 - t_0)} x_0 + \int_{t_0}^{t_1} e^{A(t_1 - t)} B u(t) dt

Finally, switch variable names, letting \tau be the variable of integration, and letting t be the right end point (as opposed to t_1). In these new letters, the expression for the solution of (83) for t \ge t_0, subject to the initial condition x(t_0) = x_0, is

    x(t) = e^{A(t - t_0)} x_0 + \int_{t_0}^{t} e^{A(t - \tau)} B u(\tau) d\tau

consisting of a free response and a forced response.
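The variation-of-constants formula just derived can be verified against direct numerical integration of the ODE. A sketch (assumes NumPy and SciPy; A, B, x_0 and u are arbitrary choices, and the convolution integral is approximated with a composite trapezoid rule):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-4.0, -2.0]])   # arbitrary example
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, -1.0])
u = lambda t: np.array([np.cos(2.0 * t)])
t0, tf = 0.0, 3.0

# x(tf) = e^{A(tf-t0)} x0 + integral_{t0}^{tf} e^{A(tf-tau)} B u(tau) dtau
taus = np.linspace(t0, tf, 4001)
vals = np.array([expm(A * (tf - tau)) @ B @ u(tau) for tau in taus])
h = taus[1] - taus[0]
integral = h * (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1]))
x_formula = expm(A * (tf - t0)) @ x0 + integral

# Direct high-accuracy integration of the ODE for comparison.
sol = solve_ivp(lambda t, x: A @ x + B @ u(t), (t0, tf), x0,
                rtol=1e-10, atol=1e-12)
gap = np.max(np.abs(x_formula - sol.y[:, -1]))
print(gap)
```

The residual is dominated by the trapezoid-rule error in the integral, which shrinks quadratically with the grid spacing.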
21.5 Eigenvalues, eigenvectors, stability

21.5.1 Diagonalization: Motivation

Recall two facts from Section 21.3. For diagonal matrices \Lambda \in F^{n \times n},

    \Lambda = diag(\lambda_1, \lambda_2, \ldots, \lambda_n)   \Rightarrow   e^{\Lambda t} = diag(e^{\lambda_1 t}, e^{\lambda_2 t}, \ldots, e^{\lambda_n t})

and: if A \in F^{n \times n}, T \in F^{n \times n} is invertible, and \tilde{A} := T^{-1} A T, then

    e^{At} = T e^{\tilde{A} t} T^{-1}

Clearly, for a general A \in F^{n \times n}, we need to study the invertible transformations T such that T^{-1} A T is a diagonal matrix. Suppose that T is invertible and \Lambda is a diagonal matrix, with T^{-1} A T = \Lambda. Moving the T^{-1} to the other side of the equality gives

    A T = T \Lambda

Let t_i denote the i-th column of the matrix T. Since T is assumed to be invertible, none of the columns of T can be identically zero; hence t_i \ne \Theta_n. Also, let \lambda_i denote the (i,i)-th entry of \Lambda. The i-th column of the matrix equation AT = T\Lambda is just

    A t_i = t_i \lambda_i = \lambda_i t_i

This observation leads to the next section.

21.5.2 Eigenvalues

Definition: Given a matrix A \in F^{n \times n}, a complex number \lambda is an eigenvalue of A if there is a nonzero vector v \in C^n such that

    A v = v \lambda = \lambda v

The nonzero vector v is called an eigenvector of A associated with the eigenvalue \lambda.

Remark: Consider the differential equation \dot{x}(t) = A x(t), with initial condition x(0) = v. Then x(t) = v e^{\lambda t} is the solution (check that it satisfies the initial condition and the differential equation). So, an eigenvector is a direction in the state-space such that if you start in the direction of the eigenvector, you stay in the direction of the eigenvector.

Note that if \lambda is an eigenvalue of A, then there is a vector v \in C^n, v \ne \Theta_n, such that

    A v = v \lambda = (\lambda I) v

Hence

    (\lambda I - A) v = \Theta_n
Since v \ne \Theta_n, it must be that det(\lambda I - A) = 0. For an n \times n matrix A, define a polynomial p_A(\cdot), called the characteristic polynomial of A, by

    p_A(s) := det(sI - A)

Here, the symbol s is simply the indeterminate variable of the polynomial. For example, take

    A = \ldots

Straightforward manipulation gives p_A(s) = s^3 - s^2 + 3s - 2. Hence, we have shown that the eigenvalues of A are necessarily roots of the equation p_A(s) = 0.

For a general n \times n matrix A, we will write

    p_A(s) = s^n + a_1 s^{n-1} + \cdots + a_{n-1} s + a_n

where the a_1, a_2, \ldots, a_n are complicated products and sums involving the entries of A. Since the characteristic polynomial of an n \times n matrix is an n-th order polynomial, the equation p_A(s) = 0 has at most n distinct roots (some roots could be repeated). Therefore, an n \times n matrix A has at most n distinct eigenvalues.

Conversely, suppose that \lambda \in C is a root of the polynomial equation p_A(s)|_{s = \lambda} = 0. Question: Is \lambda an eigenvalue of A?

Answer: Yes. Since p_A(\lambda) = 0, it means that det(\lambda I - A) = 0. Hence, the matrix \lambda I - A is singular (not invertible). Therefore, by the matrix facts, the equation

    (\lambda I - A) v = \Theta_n

has a nonzero solution vector v (which you can find by Gaussian elimination). This means that \lambda v = A v for a nonzero vector v, which means that \lambda is an eigenvalue of A, and v is an associated eigenvector. We summarize these facts as:
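The equivalence "eigenvalues = roots of the characteristic polynomial" can be checked numerically. A sketch (assumes NumPy; the 3x3 matrix is an arbitrary stand-in, since the example matrix in the text did not survive transcription):

```python
import numpy as np

# Arbitrary 3x3 example; any square matrix works the same way.
A = np.array([[ 0.0, 1.0, 0.0],
              [ 0.0, 0.0, 1.0],
              [-2.0, -3.0, 1.0]])

coeffs = np.poly(A)              # coefficients of det(sI - A), leading 1
poly_roots = np.roots(coeffs)    # roots of p_A(s) = 0
eigvals = np.linalg.eigvals(A)   # eigenvalues computed directly

# The two sets agree (up to ordering and roundoff).
gap = np.max(np.abs(np.sort_complex(poly_roots) - np.sort_complex(eigvals)))
print(gap)
```

For larger matrices, forming the characteristic polynomial explicitly is numerically ill-conditioned, which is why `eigvals` computes eigenvalues directly rather than via polynomial roots.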
A is an n \times n matrix. The characteristic polynomial of A is

    p_A(s) := det(sI - A) = s^n + a_1 s^{n-1} + \cdots + a_{n-1} s + a_n

A complex number \lambda is an eigenvalue of A if and only if \lambda is a root of the characteristic equation p_A(\lambda) = 0.

Next, we have a useful fact from linear algebra: Suppose A is a given n \times n matrix, and (\lambda_1, v_1), (\lambda_2, v_2), \ldots, (\lambda_n, v_n) are eigenvalue/eigenvector pairs, so for each i, v_i \ne \Theta_n and A v_i = v_i \lambda_i.

Fact: If all of the \{\lambda_i\}_{i=1}^{n} are distinct, then the set of vectors \{v_1, v_2, \ldots, v_n\} is a linearly independent set. In other words, the matrix

    V := [ v_1  v_2  \cdots  v_n ] \in C^{n \times n}

is invertible.

Proof: We'll prove this for 3 \times 3 matrices; check your linear algebra book for the generalization, which is basically the same proof. Suppose that there are scalars \alpha_1, \alpha_2, \alpha_3 such that

    \sum_{i=1}^{3} \alpha_i v_i = \Theta_3        (85)

This means that

    \Theta_3 = (A - \lambda_3 I) \Theta_3 = (A - \lambda_3 I) \sum_{i=1}^{3} \alpha_i v_i
             = \alpha_1 (A - \lambda_3 I) v_1 + \alpha_2 (A - \lambda_3 I) v_2 + \alpha_3 (A - \lambda_3 I) v_3
             = \alpha_1 (\lambda_1 - \lambda_3) v_1 + \alpha_2 (\lambda_2 - \lambda_3) v_2 + \Theta_3

Now multiply by (A - \lambda_2 I), giving

    \Theta_3 = (A - \lambda_2 I) \left[ \alpha_1 (\lambda_1 - \lambda_3) v_1 + \alpha_2 (\lambda_2 - \lambda_3) v_2 \right] = \alpha_1 (\lambda_1 - \lambda_3)(\lambda_1 - \lambda_2) v_1

Since \lambda_1 \ne \lambda_3, \lambda_1 \ne \lambda_2, and v_1 \ne \Theta_3, it must be that \alpha_1 = 0. Using equation (85), and the fact that \lambda_2 \ne \lambda_3, v_2 \ne \Theta_3, we get that \alpha_2 = 0. Finally, \alpha_1 v_1 + \alpha_2 v_2 + \alpha_3 v_3 = \Theta_3 (by assumption), and v_3 \ne \Theta_3, so it must be that \alpha_3 = 0.
21.5.3 Diagonalization Procedure

In this section, we summarize all of the previous ideas into a step-by-step diagonalization procedure for n \times n matrices.

1. Calculate the characteristic polynomial of A, p_A(s) := det(sI - A).
2. Find the n roots of the equation p_A(s) = 0, and call the roots \lambda_1, \lambda_2, \ldots, \lambda_n.
3. For each i, find a nonzero vector t_i \in C^n such that (A - \lambda_i I) t_i = \Theta_n.
4. Form the matrix T := [ t_1  t_2  \cdots  t_n ] \in C^{n \times n} (note that if all of the \{\lambda_i\}_{i=1}^{n} are distinct from one another, then T is guaranteed to be invertible).
5. Note that A T = T \Lambda, where \Lambda = diag(\lambda_1, \lambda_2, \ldots, \lambda_n).
6. If T is invertible, then T^{-1} A T = \Lambda. Hence

    e^{At} = T e^{\Lambda t} T^{-1} = T diag(e^{\lambda_1 t}, e^{\lambda_2 t}, \ldots, e^{\lambda_n t}) T^{-1}

We will talk about the case of nondistinct eigenvalues later.

21.5.4 e^{At} as t \to \infty

For the remainder of the section, assume A has distinct eigenvalues.

If all of the eigenvalues (which may be complex) of A satisfy

    Re(\lambda_i) < 0

then e^{\lambda_i t} \to 0 as t \to \infty, so all entries of e^{At} decay to zero.
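The whole procedure can be carried out numerically in one call. The sketch below (assumes NumPy and SciPy; the 2x2 matrix is an arbitrary example with distinct eigenvalues -2 and -3) uses `numpy.linalg.eig` for steps 1-4 and then checks steps 5 and 6:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-6.0, -5.0]])   # eigenvalues -2, -3 (distinct)

# Steps 1-4 in one call: eig returns the roots of p_A(s) = 0 and a matrix T
# whose columns are corresponding eigenvectors t_i.
lam, T = np.linalg.eig(A)

# Step 5: A T = T Lambda.
gap_AT = np.max(np.abs(A @ T - T @ np.diag(lam)))

# Step 6: e^{At} = T e^{Lambda t} T^{-1}, with e^{Lambda t} diagonal.
t = 0.8
E = T @ np.diag(np.exp(lam * t)) @ np.linalg.inv(T)
gap_exp = np.max(np.abs(E - expm(A * t)))
print(gap_AT, gap_exp)
```

Since both eigenvalues have negative real part, this A is Hurwitz and both entries of e^{At} x_0 decay for any x_0.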
If there is one (or more) eigenvalues of A with

    Re(\lambda_i) \ge 0

then e^{\lambda_i t} does not decay to zero as t \to \infty. Hence, some of the entries of e^{At} either do not decay to zero, or, in fact, diverge to \infty.

So, the eigenvalues are an indicator (the key indicator) of stability of the differential equation \dot{x}(t) = A x(t). If all of the eigenvalues of A have negative real parts, then from any initial condition x_0, the solution x(t) = e^{At} x_0 decays to \Theta_n as t \to \infty (all coordinates of x(t) decay to 0 as t \to \infty). In this case, A is said to be a Hurwitz matrix. If any of the eigenvalues of A have nonnegative real parts, then from some initial conditions x_0, the solution to \dot{x}(t) = A x(t) does not decay to zero.

21.5.5 Complex Eigenvalues

In many systems, the eigenvalues are complex, rather than real. This seems unusual, since the system itself (i.e., the physical meaning of the state and input variables, the coefficients in the state equations, etc.) is very much real. The procedure outlined in Section 21.5.3 for matrix diagonalization is applicable to both real and complex eigenvalues, and if A is real, all intermediate complex terms will cancel, resulting in e^{At} being purely real, as expected. However, in the case of complex eigenvalues it may be more advantageous to use a different similarity transformation, which does not lead to a diagonalized matrix. Instead, it leads to a real block-diagonal matrix, whose structure is easily interpreted.

Let us consider a second-order system with complex eigenvalues,

    \dot{x}(t) = A x(t)        (86)

where A \in R^{2 \times 2}, x \in R^2, and

    p_A(\lambda) = det(\lambda I_2 - A) = (\lambda - \sigma)^2 + \omega^2        (87)
The two eigenvalues of A are \lambda_1 = \sigma + j\omega and \lambda_2 = \sigma - j\omega, while the two eigenvectors of A are given by

    (\lambda_1 I_2 - A) v_1 = \Theta_2,   (\lambda_2 I_2 - A) v_2 = \Theta_2        (88)

Notice that, since \lambda_2 is the complex conjugate of \lambda_1, v_2 is the complex conjugate vector of v_1; i.e., if v_1 = v_r + j v_i, then v_2 = v_r - j v_i. This last fact can be verified as follows. Assume that \lambda_1 = \sigma + j\omega, v_1 = v_r + j v_i, and insert these expressions in the first of Eqs. (88):

    \left[ (\sigma + j\omega) I_2 - A \right] (v_r + j v_i) = \Theta_2

Separating into real and imaginary parts, we obtain

    \sigma v_r - \omega v_i + j (\sigma v_i + \omega v_r) = A v_r + j A v_i

so that

    \sigma v_r - \omega v_i = A v_r
    \sigma v_i + \omega v_r = A v_i        (89)

Notice that Eqs. (89) hold if we replace \omega by -\omega and v_i by -v_i. Thus, if \lambda_1 = \sigma + j\omega and v_1 = v_r + j v_i are respectively an eigenvalue and eigenvector of A, then \lambda_2 = \sigma - j\omega and v_2 = v_r - j v_i are also respectively an eigenvalue and eigenvector. Eqs. (89) can be rewritten in matrix form as follows:

    A [ v_r  v_i ] = [ v_r  v_i ] [ \sigma  \omega ; -\omega  \sigma ]        (90)

Thus, we can define the similarity transformation matrix

    T = [ v_r  v_i ] \in R^{2 \times 2}        (91)

and the matrix

    J_c = [ \sigma  \omega ; -\omega  \sigma ]        (92)

such that

    A = T J_c T^{-1},   e^{At} = T e^{J_c t} T^{-1}        (93)

The matrix exponential e^{J_c t} is easy to calculate. Notice that

    J_c = [ \sigma  \omega ; -\omega  \sigma ] = \sigma I_2 + S_2

where

    I_2 = [ 1  0 ; 0  1 ]   and   S_2 = [ 0  \omega ; -\omega  0 ]
is skew-symmetric, i.e. S_2^T = -S_2. Thus,

    e^{S_2 t} = [ \cos(\omega t)   \sin(\omega t) ; -\sin(\omega t)   \cos(\omega t) ]

This last result can be verified by differentiating both sides of the equation d/dt e^{S_2 t} = S_2 e^{S_2 t} with respect to time:

    d/dt [ \cos(\omega t)   \sin(\omega t) ; -\sin(\omega t)   \cos(\omega t) ] = [ -\omega \sin(\omega t)   \omega \cos(\omega t) ; -\omega \cos(\omega t)   -\omega \sin(\omega t) ]
        = [ 0  \omega ; -\omega  0 ] [ \cos(\omega t)   \sin(\omega t) ; -\sin(\omega t)   \cos(\omega t) ] = S_2 e^{S_2 t}

Since \sigma I_2 S_2 = S_2 \sigma I_2, then

    e^{J_c t} = e^{\sigma t} [ \cos(\omega t)   \sin(\omega t) ; -\sin(\omega t)   \cos(\omega t) ]        (94)

21.5.6 Examples

Consider the system

    \dot{x}(t) = A x(t)        (95)

where x \in R^2 and A = \ldots. The two eigenvalues of A are \lambda_1 = j and \lambda_2 = -j. Their corresponding eigenvectors are respectively

    v_1 = \ldots   and   v_2 = \ldots

Defining

    T = \ldots        (96)

we obtain A T = T J_c, where

    J_c = [ 0  1 ; -1  0 ]
Thus,

    e^{J_c t} = [ \cos(t)   \sin(t) ; -\sin(t)   \cos(t) ]

and

    e^{At} = T e^{J_c t} T^{-1}

Utilizing the coordinate transformation

    x^* = T^{-1} x   \Leftrightarrow   x = T x^*        (97)

where T is given by Eq. (96), we can obtain from Eqs. (97) and (95)

    \dot{x}^*(t) = \left[ T^{-1} A T \right] x^*(t) = J_c x^*(t)        (98)

Consider now the initial condition

    x(0) = [ 0.4243 ; \ldots ] = T x^*(0),   x^*(0) = \ldots

The phase plot for x^*(t) is given by Fig. 8(a), while that for x(t) is given by Fig. 8(b).

[Figure 8: Phase plots. (a) \dot{x}^* = J_c x^*, axes x^*_1, x^*_2; (b) \dot{x} = A x, axes x_1, x_2]
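The closed form (94) for e^{J_c t}, and the construction of T = [v_r v_i] from one complex eigenvector, can both be checked numerically. A sketch (assumes NumPy and SciPy; \sigma, \omega and the real matrix A below are arbitrary examples, not the matrix from the notes' example):

```python
import numpy as np
from scipy.linalg import expm

# Closed form (94): e^{Jc t} = e^{sigma t} [[cos wt, sin wt], [-sin wt, cos wt]]
sigma, omega = -0.3, 2.0
Jc = np.array([[sigma, omega], [-omega, sigma]])
t = 1.1
E_closed = np.exp(sigma * t) * np.array(
    [[ np.cos(omega * t), np.sin(omega * t)],
     [-np.sin(omega * t), np.cos(omega * t)]])
gap_Jc = np.max(np.abs(expm(Jc * t) - E_closed))

# Building T = [v_r v_i] from a complex eigenvector recovers A = T Jc T^{-1}.
A = np.array([[0.0, 1.0], [-5.0, -2.0]])   # eigenvalues -1 +/- 2j
lam, V = np.linalg.eig(A)
s, w = lam[0].real, lam[0].imag
v = V[:, 0]
T = np.column_stack([v.real, v.imag])      # real and imaginary parts of v
Jc2 = np.array([[s, w], [-w, s]])
gap_A = np.max(np.abs(A - T @ Jc2 @ np.linalg.inv(T)))
print(gap_Jc, gap_A)
```

Note that the construction works with either member of the conjugate eigenvalue pair: replacing \omega by -\omega and v_i by -v_i, as in (89), leaves the factorization valid.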
21.6 Jordan Form

21.6.1 Motivation

In the last section, we saw that if A \in F^{n \times n} has distinct eigenvalues, then there exists an invertible matrix T \in C^{n \times n} such that T^{-1} A T is diagonal. In this sense, we say that A is diagonalizable by similarity transformation (the matrix T^{-1} A T is called a similarity transformation of A).

Even some matrices that have repeated eigenvalues can be diagonalized. For instance, take A = I_2. Both of the eigenvalues of A are at \lambda = 1, yet A is diagonal (in fact, any invertible T makes T^{-1} A T diagonal). On the other hand, there are other matrices that cannot be diagonalized. Take, for instance,

    A = [ 0  1 ; 0  0 ]

The characteristic equation of A is p_A(\lambda) = \lambda^2. Clearly, both of the eigenvalues of A are equal to 0. Hence, if A can be diagonalized, the resulting diagonal matrix would be the zero matrix (recall that if A can be diagonalized, then the diagonal elements are necessarily roots of the equation p_A(\lambda) = 0). Hence, there would be an invertible matrix T such that

    T^{-1} A T = [ 0  0 ; 0  0 ]

Rearranging gives

    [ 0  1 ; 0  0 ] [ t_{11}  t_{12} ; t_{21}  t_{22} ] = [ t_{11}  t_{12} ; t_{21}  t_{22} ] [ 0  0 ; 0  0 ]

Multiplying out both sides gives

    [ t_{21}  t_{22} ; 0  0 ] = [ 0  0 ; 0  0 ]

But if t_{21} = t_{22} = 0, then T is not invertible. Hence, this 2 \times 2 matrix A cannot be diagonalized with a similarity transformation.

21.6.2 Details

It turns out that every matrix can be "almost" diagonalized using similarity transformations. Although this result is not difficult to prove, it would take us too far from the main flow of the course. The proof can be found in any decent linear algebra book. Here we simply outline the facts.
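The non-diagonalizability of A = [0 1; 0 0] also shows up numerically: an eigenvalue routine still returns two eigenvectors, but they are linearly dependent, so the eigenvector matrix cannot be inverted. A sketch (assumes NumPy):

```python
import numpy as np

# A = [[0, 1], [0, 0]]: both eigenvalues are 0, but there is only one
# independent eigenvector direction.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
lam, V = np.linalg.eig(A)
rank_V = np.linalg.matrix_rank(V)   # rank 1, not 2: V is not invertible
print(lam, rank_V)

# Contrast with A = I_2: also a repeated eigenvalue, but diagonalizable,
# so the eigenvector matrix has full rank.
lam2, V2 = np.linalg.eig(np.eye(2))
rank_V2 = np.linalg.matrix_rank(V2)
print(rank_V2)
```

This rank deficiency is exactly the algebraic obstruction derived above: any candidate T would need a zero second row.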
Definition: If J \in C^{m \times m} appears as

    J = [ \lambda  1  0  \cdots  0 ;
          0  \lambda  1  \cdots  0 ;
          \vdots   \ddots  \ddots ;
          0  \cdots  0  \lambda  1 ;
          0  \cdots  0  0  \lambda ]

(\lambda on the diagonal, 1 on the superdiagonal, 0 elsewhere), then J is called a Jordan block of dimension m, with associated eigenvalue \lambda.

Theorem (Jordan Canonical Form): For every A \in F^{n \times n}, there exists an invertible matrix T \in C^{n \times n} such that

    T^{-1} A T = diag(J_1, J_2, \ldots, J_k)

where each J_i is a Jordan block.

If J \in F^{m \times m} is a Jordan block, then using the power series definition, it is straightforward to show that

    e^{Jt} = [ e^{\lambda t}   t e^{\lambda t}   \cdots   \frac{t^{m-1}}{(m-1)!} e^{\lambda t} ;
               0   e^{\lambda t}   \cdots   \frac{t^{m-2}}{(m-2)!} e^{\lambda t} ;
               \vdots   \ddots ;
               0   0   \cdots   e^{\lambda t} ]

Note that if Re(\lambda) < 0, then every element of e^{Jt} decays to zero, and if Re(\lambda) \ge 0, then some elements of e^{Jt} do not decay to zero, or diverge to \infty.

21.7 Significance

In general, there is no numerically reliable way to compute the Jordan form of a matrix. It is a conceptual tool which, among other things, shows us the extent to which e^{At} contains terms other than exponentials. Notice that if the real part of \lambda is negative, then for any finite integer m,

    \lim_{t \to \infty} t^m e^{\lambda t} = 0

From this result, we see that even in the case of Jordan blocks, the signs of the real parts of the eigenvalues of A determine the stability of the linear differential equation \dot{x}(t) = A x(t).
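The upper-triangular closed form for e^{Jt}, and the fact that the polynomial factors t^k do not destroy decay when Re(\lambda) < 0, can be checked numerically. A sketch (assumes NumPy and SciPy; \lambda, m and t are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

lam, m = -0.4, 4
J = lam * np.eye(m) + np.eye(m, k=1)   # Jordan block of dimension m

t = 1.3
E = np.zeros((m, m))
for i in range(m):
    for j in range(i, m):
        # (i, j) entry of e^{Jt}: t^{j-i}/(j-i)! * e^{lam t}
        E[i, j] = t**(j - i) / factorial(j - i) * np.exp(lam * t)
gap = np.max(np.abs(expm(J * t) - E))

# With Re(lam) < 0, every entry t^k e^{lam t} still decays to zero:
decay = np.max(np.abs(expm(J * 60.0)))
print(gap, decay)
```

Despite the growing polynomial factors t^k, the exponential e^{\lambda t} eventually dominates, so every entry of e^{Jt} tends to zero whenever Re(\lambda) < 0.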
21.8 Problems

1. Find, by hand calculation, the eigenvalues, the eigenvectors, and e^{At} for the matrices

    (a) A = \ldots   (b) A = \ldots   (c) A = \ldots   (d) A = \ldots

2. Find (by hand calculation) the eigenvalues and eigenvectors of the matrix

    A = \ldots

Does e^{At} have all of its terms decaying to zero?

3. Read about the MatLab command eig (use the MatLab manual, the MatLab primer, and/or the command >> help eig). Repeat problem 2 using eig, and explain any differences that you get. If a matrix has distinct eigenvalues, in what sense are the eigenvectors not unique?

4. Consider the differential equation \dot{x}(t) = A x(t), where x(t) is (3 \times 1) and A is from problem 2 above. Suppose that the initial condition is x(0) = \ldots. Write the initial condition as a linear combination of the eigenvectors, and find the solution x(t) to the differential equation, written as a time-dependent linear combination of the eigenvectors.

5. Suppose that we have a 2nd-order system (x(t) \in R^2) governed by the differential equation

    \dot{x}(t) = \ldots x(t)

Let A denote the 2 \times 2 matrix above.

(a) Find the eigenvalues and eigenvectors of A. In this problem, the eigenvectors can be chosen to have all integer entries (recall that eigenvectors can be scaled).
(b) On the grid below (x_1/x_2 space), draw the eigenvectors.

[grid for sketching, with x_1 and x_2 axes]

(c) A plot of x_2(t) vs. x_1(t) is called a phase-plane plot. The variable t is not explicitly plotted on an axis; rather, it is the parameter of the tick marks along the plot. On the grid above, using hand calculations, draw the solution to the equation \dot{x}(t) = A x(t) for the initial conditions

    x(0) = [ 3 ; 3 ],   x(0) = [ -2 ; 2 ],   x(0) = [ -3 ; -3 ],   x(0) = [ 4 ; -2 ]

HINT: Remember that if v_i are eigenvectors, and \lambda_i are eigenvalues of A, then the solution to \dot{x}(t) = A x(t) from the initial condition x(0) = \sum_{i=1}^{2} \alpha_i v_i is simply

    x(t) = \sum_{i=1}^{2} \alpha_i e^{\lambda_i t} v_i

(d) Use Matlab to create a similar picture with many (say 20) different initial conditions spread out in the x_1, x_2 plane.

6. Suppose A is a real n \times n matrix, and \lambda is an eigenvalue of A which is not real, but complex. Suppose v \in C^n is an eigenvector of A associated with this eigenvalue \lambda. Use the notation \lambda = \lambda_R + j\lambda_I and v = v_R + j v_I for the real and imaginary parts of \lambda and v (j means \sqrt{-1}).

(a) By equating the real and imaginary parts of the equation A v = \lambda v, find two equations that relate the various real and imaginary parts of \lambda and v.

(b) Show that \bar{\lambda} (the complex conjugate of \lambda) is also an eigenvalue of A. What is the associated eigenvector?
(c) Consider the differential equation \dot{x} = A x for the A described above, with the eigenvalue \lambda and eigenvector v. Show that the function

    x(t) = e^{\lambda_R t} \left[ \cos(\lambda_I t) v_R - \sin(\lambda_I t) v_I \right]

satisfies the differential equation. What is x(0) in this case?

(d) Fill in this sentence: If A has complex eigenvalues, then if x(t) starts on the ____ part of the eigenvector, the solution x(t) oscillates between the ____ and ____ parts of the eigenvector, with frequency associated with the ____ part of the eigenvalue. During the motion, the solution also increases/decreases exponentially, based on the ____ part of the eigenvalue.

(e) Consider the matrix A = \ldots, whose eigenvalue/eigenvector pairs can be shown to be \ldots. Sketch the trajectory of the solution x(t) in R^2 to the differential equation \dot{x} = A x for the initial condition x(0) = \ldots.

(f) Find e^{At} for the A given above. NOTE: e^{At} is real whenever A is real. See the notes for tricks in making this easy.

7. Consider the 2-state system governed by the equation \dot{x}(t) = A x(t). Shown below are the phase-plane plots (x_1(t) vs. x_2(t)) for 4 different cases. Match the plots with the A matrices, and correctly draw in arrows indicating the evolution in time. Put your answers on the enlarged graphs included in the solution packet.

    A_1 = \ldots,   A_2 = \ldots,   A_3 = \ldots,   A_4 = \ldots
[Four phase-plane plots for Problem 7, each with x_1 and x_2 axes]
More informationNOTES ON LINEAR ODES
NOTES ON LINEAR ODES JONATHAN LUK We can now use all the discussions we had on linear algebra to study linear ODEs Most of this material appears in the textbook in 21, 22, 23, 26 As always, this is a preliminary
More informationHomogeneous Constant Matrix Systems, Part II
4 Homogeneous Constant Matrix Systems, Part II Let us now expand our discussions begun in the previous chapter, and consider homogeneous constant matrix systems whose matrices either have complex eigenvalues
More information1. The Transition Matrix (Hint: Recall that the solution to the linear equation ẋ = Ax + Bu is
ECE 55, Fall 2007 Problem Set #4 Solution The Transition Matrix (Hint: Recall that the solution to the linear equation ẋ Ax + Bu is x(t) e A(t ) x( ) + e A(t τ) Bu(τ)dτ () This formula is extremely important
More informationUnderstand the existence and uniqueness theorems and what they tell you about solutions to initial value problems.
Review Outline To review for the final, look over the following outline and look at problems from the book and on the old exam s and exam reviews to find problems about each of the following topics.. Basics
More informationSeptember 26, 2017 EIGENVALUES AND EIGENVECTORS. APPLICATIONS
September 26, 207 EIGENVALUES AND EIGENVECTORS. APPLICATIONS RODICA D. COSTIN Contents 4. Eigenvalues and Eigenvectors 3 4.. Motivation 3 4.2. Diagonal matrices 3 4.3. Example: solving linear differential
More informationRemark By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.
Sec 6 Eigenvalues and Eigenvectors Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called an eigenvalue of A if there is a nontrivial
More informationLinear Algebra review Powers of a diagonalizable matrix Spectral decomposition
Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2016 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing
More informationRemark 1 By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.
Sec 5 Eigenvectors and Eigenvalues In this chapter, vector means column vector Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called
More informationSolutions to Math 53 Math 53 Practice Final
Solutions to Math 5 Math 5 Practice Final 20 points Consider the initial value problem y t 4yt = te t with y 0 = and y0 = 0 a 8 points Find the Laplace transform of the solution of this IVP b 8 points
More information20D - Homework Assignment 5
Brian Bowers TA for Hui Sun MATH D Homework Assignment 5 November 8, 3 D - Homework Assignment 5 First, I present the list of all matrix row operations. We use combinations of these steps to row reduce
More informationIntroduction to Modern Control MT 2016
CDT Autonomous and Intelligent Machines & Systems Introduction to Modern Control MT 2016 Alessandro Abate Lecture 2 First-order ordinary differential equations (ODE) Solution of a linear ODE Hints to nonlinear
More informationMath 240 Calculus III
Generalized Calculus III Summer 2015, Session II Thursday, July 23, 2015 Agenda 1. 2. 3. 4. Motivation Defective matrices cannot be diagonalized because they do not possess enough eigenvectors to make
More informationControl Systems I. Lecture 4: Diagonalization, Modal Analysis, Intro to Feedback. Readings: Emilio Frazzoli
Control Systems I Lecture 4: Diagonalization, Modal Analysis, Intro to Feedback Readings: Emilio Frazzoli Institute for Dynamic Systems and Control D-MAVT ETH Zürich October 13, 2017 E. Frazzoli (ETH)
More informationMATH 320, WEEK 7: Matrices, Matrix Operations
MATH 320, WEEK 7: Matrices, Matrix Operations 1 Matrices We have introduced ourselves to the notion of the grid-like coefficient matrix as a short-hand coefficient place-keeper for performing Gaussian
More informationLinear Algebra: Matrix Eigenvalue Problems
CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given
More informationEigenpairs and Diagonalizability Math 401, Spring 2010, Professor David Levermore
Eigenpairs and Diagonalizability Math 40, Spring 200, Professor David Levermore Eigenpairs Let A be an n n matrix A number λ possibly complex even when A is real is an eigenvalue of A if there exists a
More informationEigenvalues and Eigenvectors
Eigenvalues and Eigenvectors Philippe B. Laval KSU Fall 2015 Philippe B. Laval (KSU) Eigenvalues and Eigenvectors Fall 2015 1 / 14 Introduction We define eigenvalues and eigenvectors. We discuss how to
More informationMATLAB Project 2: MATH240, Spring 2013
1. Method MATLAB Project 2: MATH240, Spring 2013 This page is more information which can be helpful for your MATLAB work, including some new commands. You re responsible for knowing what s been done before.
More informationLecture Notes for Math 524
Lecture Notes for Math 524 Dr Michael Y Li October 19, 2009 These notes are based on the lecture notes of Professor James S Muldowney, the books of Hale, Copple, Coddington and Levinson, and Perko They
More informationChapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations
Chapter 1: Systems of linear equations and matrices Section 1.1: Introduction to systems of linear equations Definition: A linear equation in n variables can be expressed in the form a 1 x 1 + a 2 x 2
More informationDifferential equations
Differential equations Math 7 Spring Practice problems for April Exam Problem Use the method of elimination to find the x-component of the general solution of x y = 6x 9x + y = x 6y 9y Soln: The system
More informationMAT1302F Mathematical Methods II Lecture 19
MAT302F Mathematical Methods II Lecture 9 Aaron Christie 2 April 205 Eigenvectors, Eigenvalues, and Diagonalization Now that the basic theory of eigenvalues and eigenvectors is in place most importantly
More informationModule 09 From s-domain to time-domain From ODEs, TFs to State-Space Modern Control
Module 09 From s-domain to time-domain From ODEs, TFs to State-Space Modern Control Ahmad F. Taha EE 3413: Analysis and Desgin of Control Systems Email: ahmad.taha@utsa.edu Webpage: http://engineering.utsa.edu/
More information18.06 Problem Set 7 Solution Due Wednesday, 15 April 2009 at 4 pm in Total: 150 points.
8.06 Problem Set 7 Solution Due Wednesday, 5 April 2009 at 4 pm in 2-06. Total: 50 points. ( ) 2 Problem : Diagonalize A = and compute SΛ 2 k S to prove this formula for A k : A k = ( ) 3 k + 3 k 2 3 k
More informationComputationally, diagonal matrices are the easiest to work with. With this idea in mind, we introduce similarity:
Diagonalization We have seen that diagonal and triangular matrices are much easier to work with than are most matrices For example, determinants and eigenvalues are easy to compute, and multiplication
More informationLinear Algebra and ODEs review
Linear Algebra and ODEs review Ania A Baetica September 9, 015 1 Linear Algebra 11 Eigenvalues and eigenvectors Consider the square matrix A R n n (v, λ are an (eigenvector, eigenvalue pair of matrix A
More informationMath 3191 Applied Linear Algebra
Math 9 Applied Linear Algebra Lecture 9: Diagonalization Stephen Billups University of Colorado at Denver Math 9Applied Linear Algebra p./9 Section. Diagonalization The goal here is to develop a useful
More informationChapter III. Stability of Linear Systems
1 Chapter III Stability of Linear Systems 1. Stability and state transition matrix 2. Time-varying (non-autonomous) systems 3. Time-invariant systems 1 STABILITY AND STATE TRANSITION MATRIX 2 In this chapter,
More informationECEN 605 LINEAR SYSTEMS. Lecture 7 Solution of State Equations 1/77
1/77 ECEN 605 LINEAR SYSTEMS Lecture 7 Solution of State Equations Solution of State Space Equations Recall from the previous Lecture note, for a system: ẋ(t) = A x(t) + B u(t) y(t) = C x(t) + D u(t),
More informationLinear Algebra review Powers of a diagonalizable matrix Spectral decomposition
Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2018 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing
More informationw T 1 w T 2. w T n 0 if i j 1 if i = j
Lyapunov Operator Let A F n n be given, and define a linear operator L A : C n n C n n as L A (X) := A X + XA Suppose A is diagonalizable (what follows can be generalized even if this is not possible -
More informationPeriodic Linear Systems
Periodic Linear Systems Lecture 17 Math 634 10/8/99 We now consider ẋ = A(t)x (1) when A is a continuous periodic n n matrix function of t; i.e., when there is a constant T>0 such that A(t + T )=A(t) for
More informationWhat is A + B? What is A B? What is AB? What is BA? What is A 2? and B = QUESTION 2. What is the reduced row echelon matrix of A =
STUDENT S COMPANIONS IN BASIC MATH: THE ELEVENTH Matrix Reloaded by Block Buster Presumably you know the first part of matrix story, including its basic operations (addition and multiplication) and row
More informationDiagonalization of Matrices
LECTURE 4 Diagonalization of Matrices Recall that a diagonal matrix is a square n n matrix with non-zero entries only along the diagonal from the upper left to the lower right (the main diagonal) Diagonal
More informationModal Decomposition and the Time-Domain Response of Linear Systems 1
MASSACHUSETTS INSTITUTE OF TECHNOLOGY DEPARTMENT OF MECHANICAL ENGINEERING.151 Advanced System Dynamics and Control Modal Decomposition and the Time-Domain Response of Linear Systems 1 In a previous handout
More informationLecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems
University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 1 of 45 Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems Peter J. Hammond latest revision 2017 September
More informationFinal Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2
Final Review Sheet The final will cover Sections Chapters 1,2,3 and 4, as well as sections 5.1-5.4, 6.1-6.2 and 7.1-7.3 from chapters 5,6 and 7. This is essentially all material covered this term. Watch
More informationWe have already seen that the main problem of linear algebra is solving systems of linear equations.
Notes on Eigenvalues and Eigenvectors by Arunas Rudvalis Definition 1: Given a linear transformation T : R n R n a non-zero vector v in R n is called an eigenvector of T if Tv = λv for some real number
More informationGeneralized Eigenvectors and Jordan Form
Generalized Eigenvectors and Jordan Form We have seen that an n n matrix A is diagonalizable precisely when the dimensions of its eigenspaces sum to n. So if A is not diagonalizable, there is at least
More informationControl Systems. Dynamic response in the time domain. L. Lanari
Control Systems Dynamic response in the time domain L. Lanari outline A diagonalizable - real eigenvalues (aperiodic natural modes) - complex conjugate eigenvalues (pseudoperiodic natural modes) - phase
More informationTopic # /31 Feedback Control Systems
Topic #7 16.30/31 Feedback Control Systems State-Space Systems What are the basic properties of a state-space model, and how do we analyze these? Time Domain Interpretations System Modes Fall 2010 16.30/31
More informationA Brief Outline of Math 355
A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting
More informationLinear Algebra Exercises
9. 8.03 Linear Algebra Exercises 9A. Matrix Multiplication, Rank, Echelon Form 9A-. Which of the following matrices is in row-echelon form? 2 0 0 5 0 (i) (ii) (iii) (iv) 0 0 0 (v) [ 0 ] 0 0 0 0 0 0 0 9A-2.
More informationLS.2 Homogeneous Linear Systems with Constant Coefficients
LS2 Homogeneous Linear Systems with Constant Coefficients Using matrices to solve linear systems The naive way to solve a linear system of ODE s with constant coefficients is by eliminating variables,
More informationMATH 583A REVIEW SESSION #1
MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),
More informationand let s calculate the image of some vectors under the transformation T.
Chapter 5 Eigenvalues and Eigenvectors 5. Eigenvalues and Eigenvectors Let T : R n R n be a linear transformation. Then T can be represented by a matrix (the standard matrix), and we can write T ( v) =
More informationCayley-Hamilton Theorem
Cayley-Hamilton Theorem Massoud Malek In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n Let A be an n n matrix Although det (λ I n A
More informationAdvanced Control Theory
State Space Solution and Realization chibum@seoultech.ac.kr Outline State space solution 2 Solution of state-space equations x t = Ax t + Bu t First, recall results for scalar equation: x t = a x t + b
More informationEigenvalues for Triangular Matrices. ENGI 7825: Linear Algebra Review Finding Eigenvalues and Diagonalization
Eigenvalues for Triangular Matrices ENGI 78: Linear Algebra Review Finding Eigenvalues and Diagonalization Adapted from Notes Developed by Martin Scharlemann The eigenvalues for a triangular matrix are
More information88 CHAPTER 3. SYMMETRIES
88 CHAPTER 3 SYMMETRIES 31 Linear Algebra Start with a field F (this will be the field of scalars) Definition: A vector space over F is a set V with a vector addition and scalar multiplication ( scalars
More informationDIAGONALIZABLE LINEAR SYSTEMS AND STABILITY. 1. Algebraic facts. We first recall two descriptions of matrix multiplication.
DIAGONALIZABLE LINEAR SYSTEMS AND STABILITY. 1. Algebraic facts. We first recall two descriptions of matrix multiplication. Let A be n n, P be n r, given by its columns: P = [v 1 v 2... v r ], where the
More informationCopyright (c) 2006 Warren Weckesser
2.2. PLANAR LINEAR SYSTEMS 3 2.2. Planar Linear Systems We consider the linear system of two first order differential equations or equivalently, = ax + by (2.7) dy = cx + dy [ d x x = A x, where x =, and
More informationMTH 464: Computational Linear Algebra
MTH 464: Computational Linear Algebra Lecture Outlines Exam 4 Material Prof. M. Beauregard Department of Mathematics & Statistics Stephen F. Austin State University April 15, 2018 Linear Algebra (MTH 464)
More informationLinear Systems. Chapter Basic Definitions
Chapter 5 Linear Systems Few physical elements display truly linear characteristics. For example the relation between force on a spring and displacement of the spring is always nonlinear to some degree.
More informationMTH 464: Computational Linear Algebra
MTH 464: Computational Linear Algebra Lecture Outlines Exam 2 Material Prof. M. Beauregard Department of Mathematics & Statistics Stephen F. Austin State University March 2, 2018 Linear Algebra (MTH 464)
More informationEigenvalues and Eigenvectors
Sec. 6.1 Eigenvalues and Eigenvectors Linear transformations L : V V that go from a vector space to itself are often called linear operators. Many linear operators can be understood geometrically by identifying
More informationMATH SOLUTIONS TO PRACTICE MIDTERM LECTURE 1, SUMMER Given vector spaces V and W, V W is the vector space given by
MATH 110 - SOLUTIONS TO PRACTICE MIDTERM LECTURE 1, SUMMER 2009 GSI: SANTIAGO CAÑEZ 1. Given vector spaces V and W, V W is the vector space given by V W = {(v, w) v V and w W }, with addition and scalar
More informationLecture 6. Eigen-analysis
Lecture 6 Eigen-analysis University of British Columbia, Vancouver Yue-Xian Li March 7 6 Definition of eigenvectors and eigenvalues Def: Any n n matrix A defines a LT, A : R n R n A vector v and a scalar
More informationNumerical Linear Algebra Homework Assignment - Week 2
Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.
More informationPutzer s Algorithm. Norman Lebovitz. September 8, 2016
Putzer s Algorithm Norman Lebovitz September 8, 2016 1 Putzer s algorithm The differential equation dx = Ax, (1) dt where A is an n n matrix of constants, possesses the fundamental matrix solution exp(at),
More informationMath Matrix Algebra
Math 44 - Matrix Algebra Review notes - 4 (Alberto Bressan, Spring 27) Review of complex numbers In this chapter we shall need to work with complex numbers z C These can be written in the form z = a+ib,
More informationMa 227 Review for Systems of DEs
Ma 7 Review for Systems of DEs Matrices Basic Properties Addition and subtraction: Let A a ij mn and B b ij mn.then A B a ij b ij mn 3 A 6 B 6 4 7 6 A B 6 4 3 7 6 6 7 3 Scaler Multiplication: Let k be
More informationMatrix Solutions to Linear Systems of ODEs
Matrix Solutions to Linear Systems of ODEs James K. Peterson Department of Biological Sciences and Department of Mathematical Sciences Clemson University November 3, 216 Outline 1 Symmetric Systems of
More informationEIGENVALUES AND EIGENVECTORS 3
EIGENVALUES AND EIGENVECTORS 3 1. Motivation 1.1. Diagonal matrices. Perhaps the simplest type of linear transformations are those whose matrix is diagonal (in some basis). Consider for example the matrices
More informationMaths for Signals and Systems Linear Algebra in Engineering
Maths for Signals and Systems Linear Algebra in Engineering Lectures 13 15, Tuesday 8 th and Friday 11 th November 016 DR TANIA STATHAKI READER (ASSOCIATE PROFFESOR) IN SIGNAL PROCESSING IMPERIAL COLLEGE
More informationTheory of Linear Systems Exercises. Luigi Palopoli and Daniele Fontanelli
Theory of Linear Systems Exercises Luigi Palopoli and Daniele Fontanelli Dipartimento di Ingegneria e Scienza dell Informazione Università di Trento Contents Chapter. Exercises on the Laplace Transform
More informationA = 3 1. We conclude that the algebraic multiplicity of the eigenvalues are both one, that is,
65 Diagonalizable Matrices It is useful to introduce few more concepts, that are common in the literature Definition 65 The characteristic polynomial of an n n matrix A is the function p(λ) det(a λi) Example
More informationEigenvalues, Eigenvectors, and Diagonalization
Week12 Eigenvalues, Eigenvectors, and Diagonalization 12.1 Opening Remarks 12.1.1 Predicting the Weather, Again Let us revisit the example from Week 4, in which we had a simple model for predicting the
More informationEigenvalues for Triangular Matrices. ENGI 7825: Linear Algebra Review Finding Eigenvalues and Diagonalization
Eigenvalues for Triangular Matrices ENGI 78: Linear Algebra Review Finding Eigenvalues and Diagonalization Adapted from Notes Developed by Martin Scharlemann June 7, 04 The eigenvalues for a triangular
More informationMATH 2921 VECTOR CALCULUS AND DIFFERENTIAL EQUATIONS LINEAR SYSTEMS NOTES
MATH 9 VECTOR CALCULUS AND DIFFERENTIAL EQUATIONS LINEAR SYSTEMS NOTES R MARANGELL Contents. Higher Order ODEs and First Order Systems: One and the Same. Some First Examples 5 3. Matrix ODEs 6 4. Diagonalization
More informationAutonomous system = system without inputs
Autonomous system = system without inputs State space representation B(A,C) = {y there is x, such that σx = Ax, y = Cx } x is the state, n := dim(x) is the state dimension, y is the output Polynomial representation
More informationMaths for Signals and Systems Linear Algebra in Engineering
Maths for Signals and Systems Linear Algebra in Engineering Lectures 13-14, Tuesday 11 th November 2014 DR TANIA STATHAKI READER (ASSOCIATE PROFFESOR) IN SIGNAL PROCESSING IMPERIAL COLLEGE LONDON Eigenvectors
More informationSPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS
SPRING 006 PRELIMINARY EXAMINATION SOLUTIONS 1A. Let G be the subgroup of the free abelian group Z 4 consisting of all integer vectors (x, y, z, w) such that x + 3y + 5z + 7w = 0. (a) Determine a linearly
More informationEE5120 Linear Algebra: Tutorial 6, July-Dec Covers sec 4.2, 5.1, 5.2 of GS
EE0 Linear Algebra: Tutorial 6, July-Dec 07-8 Covers sec 4.,.,. of GS. State True or False with proper explanation: (a) All vectors are eigenvectors of the Identity matrix. (b) Any matrix can be diagonalized.
More informationIdentification Methods for Structural Systems
Prof. Dr. Eleni Chatzi System Stability Fundamentals Overview System Stability Assume given a dynamic system with input u(t) and output x(t). The stability property of a dynamic system can be defined from
More informationCalculating determinants for larger matrices
Day 26 Calculating determinants for larger matrices We now proceed to define det A for n n matrices A As before, we are looking for a function of A that satisfies the product formula det(ab) = det A det
More information[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.]
Math 43 Review Notes [Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty Dot Product If v (v, v, v 3 and w (w, w, w 3, then the
More information