Linear Systems Notes
Dulcie Lester
1 Lecture 1: Introduction

Consider the following linear system

ẋ = Ax + Bu (1)

where x ∈ Rⁿ is the state. Its dynamics have two components: Ax, the drift determined by the state itself (A ∈ L(Rⁿ, Rⁿ), a linear mapping), and Bu, where u ∈ Rᵐ is called the control (if u = 0, the system has no control and its dynamics are determined only by the state) and B ∈ L(Rᵐ, Rⁿ). There is another element of the system:

y = Cx + Du (2)

where y represents some measurable state/output (in real systems this would correspond to components with sensors on them). Think of these two equations as the inner workings of a black box into which you put a control u and get output y.

The motivating principle of linear systems is that even if a system is not linear, it is at least locally linear. Consider a pendulum, with rod length l, mass m, angle θ which lives on the unit circle (hence not a linear space), and finally some actuator which can be controlled (u). If θ = 0 corresponds to the direction of the acceleration of gravity, the mechanics which govern this system are given by

ml θ̈ + mg sin(θ) = u (3)

Recall that for higher-order systems we reduce the order by introducing a new variable, e.g. θ̇ = v and v̇ = −(g/l) sin(θ) + u/(ml). Note that even though the angular coordinate is nonlinear (circular), the velocity v lives on a tangent line and is therefore linear. The picture is as follows: on the one hand we have a circle, on the other a line, so composed a cylinder. Let's unwrap the cylinder so that we have a plane in R² with θ as the x-axis and v as the y-axis. Conservation of energy (for u = 0) implies that (lθ̇)²/2 − gl cos(θ) is constant, i.e. v²/2 − (g/l) cos(θ) = c. These level sets describe the dynamics for various initial conditions; e.g. if the pendulum starts vertical, then it approaches the vertical position again asymptotically, see figure. There are two equilibria: (θ, v) = (0, 0) and (θ, v) = (π, 0).
Given a small perturbation, θ = 0 + ɛx₁ and v = 0 + ɛx₂, after substitution we get

ɛẋ₁ = ɛx₂ (4)
ɛẋ₂ = −(g/l) sin(ɛx₁) = −(g/l)(ɛx₁ − ɛ³x₁³/3! + ...) (5)

which, after dividing by ɛ and killing higher-order terms, simplifies to

ẋ₁ = x₂ (6)
ẋ₂ = −(g/l) x₁ (7)
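The claim that the linearization describes the system near the equilibrium can be illustrated numerically. A minimal sketch using forward Euler, with g/l = 1 chosen purely for illustration:

```python
import math

def step(state, deriv, dt):
    # One forward-Euler step (illustrative only; a symplectic or
    # Runge-Kutta integrator would be more accurate for long runs).
    return [s + dt * d for s, d in zip(state, deriv(state))]

g_over_l = 1.0  # assumed value, for illustration

def nonlinear(s):          # theta' = v, v' = -(g/l) sin(theta)
    theta, v = s
    return [v, -g_over_l * math.sin(theta)]

def linearized(s):         # x1' = x2, x2' = -(g/l) x1
    x1, x2 = s
    return [x2, -g_over_l * x1]

# Start from a small perturbation of the downward equilibrium.
sn = [0.05, 0.0]
sl = [0.05, 0.0]
dt, n = 0.001, 2000
for _ in range(n):
    sn = step(sn, nonlinear, dt)
    sl = step(sl, linearized, dt)

# Near the equilibrium the two trajectories stay close.
print(abs(sn[0] - sl[0]))
```

For larger initial angles the two trajectories drift apart, which is the sense in which the linearization is only locally valid.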
whose level sets are ellipses instead, see figure 2. This linearized system simplifies the mathematics, but in so doing loses an exact representation of the system. Nevertheless, locally (near the equilibrium) it satisfactorily describes how the system works. Again, given

θ̇ = v (8)
v̇ = −(g/l) sin(θ) (9)

now perturb about the other equilibrium: θ = π + ɛx₁ and v = 0 + ɛx₂. By substitution we get

ɛẋ₁ = ɛx₂ (10)
ɛẋ₂ = −(g/l) sin(π + ɛx₁) = (g/l)(ɛx₁ + O(ɛ³)) (11)

so that

ẋ₁ = x₂ (12)
ẋ₂ = (g/l) x₁ (13)

see figure 3. This point is a saddle, hence unstable, but the linearization shows how the system behaves locally. In both cases the linearized system has linear components (i.e. x₁ and x₂) and constant coefficients. Now consider

θ̇ = v (14)
v̇ = −(g/l) sin(θ) (15)

along a general trajectory θ = θ(t), v = v(t); then we consider a slight perturbation, θ ↦ θ + ɛx₁ and v ↦ v + ɛx₂. Substituting results in

θ̇ + ɛẋ₁ = v + ɛx₂ (16)
v̇ + ɛẋ₂ = −(g/l) sin(θ + ɛx₁) = −(g/l) sin(θ) − (g/l) ɛx₁ cos(θ) + O(ɛ²) (17)

i.e.

ẋ₁ = x₂ (18)
ẋ₂ = −(g/l) cos(θ(t)) x₁ (19)

Linearization about an equilibrium point gives us a linear time-invariant system, and linearization about a general trajectory gives a linear time-varying system.

Linear Algebra

Definition 1. A field (F, +, ·) consists of a set F together with two binary operations +, · : F × F → F satisfying the following:

1. (F, +) is an abelian group.
2. (F \ {0}, ·) is an abelian group, and the two operations interact by distribution: a · (b + c) = a · b + a · c for all a, b, c ∈ F.

Example: consider the set of matrices of the form

( a b ; −b a ) = a ( 1 0 ; 0 1 ) + b ( 0 1 ; −1 0 ),

which is clearly commutative (it is a field, isomorphic to C).
2 Lecture 2

Definition 2. A vector space (V, F) is a set V over a field F such that (V, +) is an abelian group and F acts on V as follows:

1. av ∈ V for all a ∈ F and v ∈ V.
2. (a + b)v = av + bv for all a, b ∈ F, v ∈ V. In particular, 0_F v = 0_V for all v ∈ V.
3. a(v + w) = av + aw for all a ∈ F, v, w ∈ V.
4. (ab)v = a(bv) for all a, b ∈ F, v ∈ V.

The most important vector space is Fun(A, F), the set of functions f : A → F, with f + g defined pointwise by (f + g)(a) = f(a) + g(a). The standard euclidean vector space is Rⁿ = Fun({1, ..., n}, R), identifying f with (f(1), ..., f(n)) = (v₁, ..., vₙ). (One can think of (0, ..., 1, ..., 0) as the function δ_k, which is 1 at k and 0 elsewhere.) If u ∈ Rⁿ, we can write u = (a₁, ..., aₙ) = Σᵢ₌₁ⁿ aᵢδᵢ. We cannot always represent vectors in this fashion, e.g. if the space is infinite dimensional. Nevertheless, we can always consider the set of elements of a linear space spanned by only a finite number of vectors; this is also a vector space (for the sum of two finite sums is also finite).

Definition 3. Given a vector space V and W ⊆ V, W is a subspace if W is also a vector space (on its own, under the same operations).

Let V = R¹; then Z ⊂ R is not a subspace. On the other hand, let B ⊆ A and consider the set of all functions f on A which vanish on B; this is an example of a linear subspace of Fun(A, F).

Let V be a vector space, and consider the set of linear functions f : V → F (i.e. where f(av + w) = af(v) + f(w)), called linear functionals (or covectors); call this set V*, the dual space of V. For example, if f ∈ V = C([0, 1], R), then f ↦ ∫₀¹ f(x)dx is a linear functional. So is evaluation at zero, f ↦ f(0), which can heuristically be written ∫ f δ₀ dx with the Dirac function δ₀. One can then consider the set of functions which evaluate to zero at zero.

For Rⁿ = Fun({1, ..., n}, R), any u ∈ Rⁿ can be written u = Σ u_k δ_k. Often there is a canonical basis given by {δ_k}, but there are always different ones.

Definition 4. Given a collection of vectors (eᵢ)ᵢ₌₁ⁿ ⊂ V, we say that the (eᵢ) are linearly independent if Σᵢ₌₁ⁿ aᵢeᵢ = 0 implies aᵢ = 0 ∈ F for i = 1, ..., n.
Fact: Consider e₁, ..., eₙ ∈ V and f₁, ..., fₙ ∈ V*, and consider the matrix of values {f_k(e_j) : j, k = 1, ..., n}. If

det ( f₁(e₁) ... f₁(eₙ) ; ⋮ ; fₙ(e₁) ... fₙ(eₙ) ) ≠ 0 (20)

then the f_k are linearly independent.
Definition 5. A basis is a maximal linearly independent set of vectors.

Proposition 1. Every vector space has a basis.

Proof. Finite-dimensional case: keep adding vectors not in the span until you can't go anymore. Infinite case: Zorn's lemma.

Given a basis, there is a unique representation of each vector by its basis elements.

Proof. Assume Σ aᵢvᵢ = Σ bᵢvᵢ; then Σ (aᵢ − bᵢ)vᵢ = 0, so aᵢ = bᵢ by linear independence of the basis vectors.

Given u = Σ c_k e_k, we can think of the coefficients as (linear) functions of u, i.e. u = Σ c_k(u) e_k, which means that the sum Σ e_k c_k is the identity operator. Now suppose u = Σᵢ₌₁ⁿ aᵢeᵢ in the basis (eᵢ), and in another basis (e′ⱼ), u = Σⱼ₌₁ⁿ bⱼe′ⱼ; u is the same vector, but working with it depends on understanding how to navigate between the bases. There is a standard procedure: write each e_k in terms of the e′ⱼ. Letting ẽ′ⱼ denote the covector of e′ⱼ (so ẽ′ⱼ(e′ᵢ) = δᵢⱼ), we get u = Σ_k a_k Σⱼ ẽ′ⱼ(e_k) e′ⱼ = Σⱼ (Σ_k P_{jk} a_k) e′ⱼ, i.e. bⱼ = Σ_k P_{jk} a_k, where P_{jk} = ẽ′ⱼ(e_k).

Example. Consider the set of functions on {0, 1, ..., 5}, represented in the basis of δ_k functions. Another basis: f₀ = z⁰ = (1, 1, 1, 1, 1, 1), f₁ = z¹ = (0, 1, 2, 3, 4, 5), and in like manner the set of monomials of degree less than or equal to 5.
3 Lecture 3

Let V = span({e₁, ..., eₙ}), where any v ∈ V can be uniquely represented as v = Σ c_k e_k, c_k ∈ R. Given another basis {f₁, ..., fₙ}, to write the same vector v in terms of the fᵢ, just represent each eᵢ in terms of the f_k, i.e. eᵢ = Σ_k p_{k,i} f_k, so v = Σᵢ cᵢ Σ_k p_{k,i} f_k = Σ_k (Σᵢ p_{k,i} cᵢ) f_k.

Recall

Definition 6. The dual space is V* = L(V, F), where V is a linear space over F.

Remarks: recall that f(av + w) = af(v) + f(w). The field action is explicitly connected to the space you are in: on the left-hand side a acts on W (the codomain), whereas in the domain a acts on V. The space of linear operators L(V, W) is itself a linear (vector) space, defined for f, g : V → W by (f + g)(v) = f(v) + g(v), where the sum on the right is taken in W and the one on the left in L(V, W).

Definition 7. Given a linear operator A ∈ L(V, W), Null(A) = Ker(A) = {v ∈ V : Av = 0_W} and Ran(A) = Im(A) = {w ∈ W : Av = w for some v ∈ V}.

Example. The set of linear functions on F itself, L(F, F), consists of the maps a : v ↦ a · v.

Proposition 2 (rank-nullity). dim(N(A)) + dim(R(A)) = dim(V).

Consider the space V = Fun(S, C) where S = {0, 1, ..., 5}, and let A be multiplication by z, i.e. A(f(0), ..., f(5)) = (0·f(0), 1·f(1), ..., 5·f(5)). Then N(A) = span({δ₀}) and R(A) = span({δ₁, ..., δ₅}). (One must verify that these elements actually are in the range, but that's easy since Aδ_k = k δ_k.) In the standard δ basis A can be represented as the diagonal matrix

A = diag(0, 1, 2, 3, 4, 5). (21)

Definition 8. The collection of all k-dimensional subspaces of an n-dimensional space is called the Grassmannian, denoted G(n, k); it is not a linear space, and has dimension (n − k)k.

Consider a linear mapping A ∈ L(V, W), where V has basis (eᵢ)ᵢ≤ₘ and W has basis (fⱼ)ⱼ≤ₙ, and A is represented as an n × m matrix via Ae_l = Σ_k A(k, l) f_k. With new bases (e′) and (f′), we want to find the new matrix for the same operator. Represent the old bases in terms of the new: e_k = Σᵢ P_{k,i} e′ᵢ and fⱼ = Σ_l Q_{j,l} f′_l. As it stands A maps (e) to (f); we want a mapping (e′) → (e) → (f) → (f′), so just compose the representations.
Messy to expand, but conceptually simple: just iterate the process for each change of basis/transformation. One special case: if V = W and (e′) = (f′), then A : (e) → (f) becomes Ã = P A P⁻¹, where P is the change-of-basis matrix from e to e′. The mapping A ↦ Ã (similarity) defines an equivalence relation.
Proof. A ∼ A by P = I. If A ∼ B, then A = PBP⁻¹, so B = P⁻¹AP = P⁻¹A(P⁻¹)⁻¹, i.e. B ∼ A. Finally, if A ∼ B and B ∼ C, then A = PBP⁻¹ = P(QCQ⁻¹)P⁻¹ = (PQ)C(PQ)⁻¹.

Example. Consider the following operator. Let n ∈ N. Let f : {1, ..., n} → R (represented by n people who have, say, some amount of stuff), and the operation which takes what person i has, splits it in two, and gives half each to persons i − 1 and i + 1 (indices taken cyclically). It can be represented by the matrix with entries 1/2 adjacent to the diagonal (and in the corners, for the wrap-around) and 0 elsewhere; e.g. for n = 4,

( 0 1/2 0 1/2 ; 1/2 0 1/2 0 ; 0 1/2 0 1/2 ; 1/2 0 1/2 0 ). (22)

Definition 9. Let A : V → V; a vector v ≠ 0 is an eigenvector if A(v) = λv for some λ ∈ F, called an eigenvalue. If the eigenvectors form a basis of the space, then the operator A can be represented diagonally; similarly, if A ∼ D ∈ D(n, F) is diagonalizable, then the eigenvectors of A form a basis. Over C there is always at least one eigenvalue (this follows from the fundamental theorem of algebra applied to the characteristic polynomial).
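The stuff-splitting example can be sketched numerically. This assumes the n people sit in a circle (indices wrap around), which is one way to read the matrix above; note each column sums to 1, so the total amount of stuff is conserved, though on an even cycle the distribution oscillates with period 2 rather than converging:

```python
n = 4
# A[i][j] = share of person j's stuff handed to person i; each person
# splits their pile in two and gives half to each cyclic neighbor
# (assumption: the people sit in a circle, so indices wrap around).
A = [[0.0] * n for _ in range(n)]
for j in range(n):
    A[(j - 1) % n][j] = 0.5
    A[(j + 1) % n][j] = 0.5

def apply(A, x):
    return [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

x = [8.0, 0.0, 0.0, 0.0]   # all the stuff starts with person 0
for _ in range(50):
    x = apply(A, x)

print(sum(x))  # the total is conserved: 8.0
print(x)       # after an even number of steps, mass sits on persons 0 and 2
```

This matrix is a natural candidate for the eigenvalue analysis that follows: it is symmetric, so it is diagonalizable with real eigenvalues.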
4 Lecture 4

Let V, W be vector spaces over some field F, and A : V → W (from dimension n to dimension m), where B_V = {v₁, ..., vₙ} is a basis for V and B_W = {w₁, ..., wₘ} is a basis for W. In order to find the image, we need to find rank(A) = r: start by finding the kernel of A, ker(A) ⊆ V, and a basis for it, say B_ker(A) = {v_{j₁}, ..., v_{j_{n−r}}} (replacing basis vectors as needed so that B_ker(A) ⊆ B_V); then the images of the remaining basis elements B_V \ B_ker(A) span the image in W.

Now consider A : V → V. We can split the picture into the following: the set of things which map to zero (the null space), the part of the space which is the image, and the remainder of the space which isn't mapped to by elements of the domain. Suppose each individual basis element is mapped to a scalar multiple of itself; then the matrix representing this linear transformation is diagonal. Unfortunately, not every linear mapping has a diagonal representation. For example, consider V = {p ∈ F[x] : deg(p) ≤ k}, with basis B_V = {xʲ : j = 0, ..., k}. Take the linear mapping D : V → V given by its action on the standard basis elements, Dxʲ = jxʲ⁻¹ (where x⁻¹ = 0 by convention). The matrix of this transformation is

D = ( 0 1 0 ... 0 ; 0 0 2 ... 0 ; ⋮ ; 0 0 ... 0 k ; 0 0 ... 0 0 ). (23)

This operator is called nilpotent, because there is an n ∈ N (namely n = k + 1) such that Dⁿ ≡ 0 in L(V, V). E.g.

( 0 1 0 ; 0 0 1 ; 0 0 0 )³ = 0 ∈ M₃(R). (24)

Obviously a nilpotent matrix has a nontrivial kernel.

4.1 Jordan Normal Form

Let A : V → V where V is finite dimensional. It is possible to find a canonical Jordan representation of A consisting of Jordan blocks, each of which is the sum of a scalar multiple of the identity and a nilpotent part, i.e.

J = λI_k + N (25)

where k indicates the size of the Jordan block. For example

J = ( λ 1 0 ; 0 λ 1 ; 0 0 λ ). (26)
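The nilpotency of the differentiation operator D can be seen concretely by representing polynomials as coefficient lists; a minimal sketch (K = 5 is an arbitrary choice of maximal degree):

```python
K = 5  # work with polynomials of degree <= K (illustrative choice)

def D(p):
    # p[j] is the coefficient of x^j; D maps x^j -> j x^(j-1),
    # so coefficient j moves down to slot j-1, scaled by j.
    return [j * p[j] for j in range(1, len(p))] + [0.0]

p = [1.0, -2.0, 0.0, 4.0, 0.0, 7.0]   # 1 - 2x + 4x^3 + 7x^5
q = p
for _ in range(K + 1):
    q = D(q)

# Applying D a total of K+1 times annihilates every polynomial of
# degree <= K: q is now the zero polynomial.
print(q)
```

Each application of D lowers the degree by one, so K + 1 applications suffice, matching D^{k+1} = 0 above.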
For example, consider e^{λx} p_k(x), where p_k(x) ∈ F[x] is a polynomial of degree ≤ k; this is called a quasipolynomial. The space V = {e^{λx} p(x) : deg(p) ≤ k} is closed under differentiation, and in the basis e_l = e^{λx} x^l / l! the derivative acts by E e_l = λ e_l + e_{l−1} — exactly a Jordan block.

Theorem 1. Let (V, C) be a finite-dimensional complex vector space, and A ∈ L(V, V); then there is a basis B with respect to which A has a matrix representation in Jordan Normal Form.

Takeaway: all linear transformations look like generalized differentiation of quasipolynomials.

4.2 Other Canonical Forms

Let A : V → V.

Definition 10. Let A ∈ L(V, V). A vector v ∈ V is called cyclic for A if the set {A^k v : k = 0, 1, ...} spans V. Note that if v is cyclic, then {v, Av, ..., A^{n−1}v} already suffices to span the space.

Let A ∈ L(V, V), written as a matrix in Mₙ(R), and consider the sequence (I = A⁰, A, A², ...); these matrices are linearly dependent, and moreover the relation can be easily described:

Theorem 2 (Cayley-Hamilton). Aⁿ + Σ_{k=1}ⁿ a_k A^{n−k} = 0, where the a_k are the coefficients of the characteristic polynomial of A, i.e. det(λI − A) = λⁿ + Σ_{k=1}ⁿ a_k λ^{n−k}.

Proof. Easy if A is diagonalizable. Let char_A(t) be the characteristic polynomial of A in the indeterminate variable t. If A is diagonalizable, then char_A(t) = Π_{k=1}ⁿ (t − λ_k), where the λ_k are the diagonal values of A (i.e. the eigenvalues of A); then char_A(A), the evaluation of char_A(t) at A, annihilates each eigenvector, hence is the zero operator.

4.3 Functions of Matrices

Given a polynomial p ∈ F[x], p(x) = Σ a_k x^k, we can evaluate it at A: p(A) = Σ a_k A^k. More generally, we can express functions as formal power series, namely f(t) = Σ_k f_k t^k / k!, provided the series is convergent for all t.

Definition 11. We can do similarly for A: f(A) = Σ_k f_k A^k / k!. In particular, if A ∈ Dₙ(R) is diagonal, then f(A) = diag(f(a₁₁), ..., f(aₙₙ)).
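Cayley-Hamilton is easy to test in the 2×2 case, where char_A(t) = t² − tr(A)·t + det(A); a sketch with an arbitrary example matrix:

```python
def matmul(A, B):
    # 2x2 matrix product with plain nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, 1.0], [3.0, 4.0]]          # any 2x2 example will do
tr = A[0][0] + A[1][1]                # trace
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]

# char_A(t) = t^2 - tr*t + det, so Cayley-Hamilton says
# A^2 - tr*A + det*I = 0.
A2 = matmul(A, A)
CH = [[A2[i][j] - tr * A[i][j] + det * (1.0 if i == j else 0.0)
       for j in range(2)] for i in range(2)]
print(CH)  # [[0.0, 0.0], [0.0, 0.0]]
```

The same identity is what lets A^n (and all higher powers) be rewritten as combinations of I, A, ..., A^{n−1}, which is exploited below for computing the matrix exponential.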
Definition 12 (Matrix Exponential). ϕ(t) = e^{tA} is well defined, as e^{tA} = I + tA + t²A²/2! + ..., satisfying various properties:

1. e^{(t+s)A} = e^{tA} e^{sA}.
2. ϕ′(t) = Aϕ(t).

Suppose x₀ ∈ V, and we have a differential equation of the form ẋ = Ax, x(0) = x₀. Then the above tells us that if x(t) = ϕ(t)x₀, then ẋ(t) = Aϕ(t)x₀ = Ax(t), so this is the solution.
5 Lecture 5

The system ẋ = Ax with A ∈ Mₙ(R) constant is called a linear time-invariant (LTI) system. The solution is x(t) = ϕ(t)x₀, where ϕ(t) = e^{tA} = I + tA + t²A²/2! + ..., so that ϕ(0) = I and x(0) = x₀.

Note that e^{tA} does in fact converge. Take some matrix A and let a = maxᵢⱼ |aᵢⱼ|; then |(A²)ᵢⱼ| ≤ na², and similarly |(A^k)ᵢⱼ| ≤ n^{k−1}a^k. So |(e^{tA})ᵢⱼ| ≤ Σ_k |t|^k n^{k−1} a^k / k!, which converges since k! = k^{k+1/2} e^{−k} √(2π) (1 + o(1)) dominates any geometric growth.

How to compute the matrix exponential? It is not good to use the power series expansion; there are several better alternatives.

5.1 Computing the Matrix Exponential

First method: diagonalization. The exponential behaves well with respect to change of basis. Given Ã = P⁻¹AP, we have e^{tÃ} = I + tP⁻¹AP + t²P⁻¹A²P/2! + ... = P⁻¹(I + tA + ...)P = P⁻¹e^{tA}P, i.e. e^{tA} = P e^{tÃ} P⁻¹. This implies that if we can find a basis with respect to which matrix multiplication is easy to compute (e.g. diagonal matrices), then we should transform to it.

Example. Let A = ( 0 1 ; 1 1 ). Then A² = ( 1 1 ; 1 2 ), A³ = ( 1 2 ; 2 3 ), A⁴ = ( 2 3 ; 3 5 ), and in general

A^k = ( f_{k−1} f_k ; f_k f_{k+1} ),

where f_k is the k-th Fibonacci number. Since σ(A) = {(1 − √5)/2 ≈ −0.618, (1 + √5)/2 ≈ 1.618} consists of two distinct eigenvalues, A is diagonalizable, with P = ( 1 1 ; −0.618 1.618 ) (the columns are the eigenvectors (1, λ)ᵀ). Hence Ã = P⁻¹AP = ( −0.618 0 ; 0 1.618 ), e^{tÃ} = ( e^{−0.618t} 0 ; 0 e^{1.618t} ), and therefore e^{tA} = P e^{tÃ} P⁻¹. (Note, we skipped steps: for a diagonal matrix D, e^D = diag(e^{d₁}, ..., e^{dₙ}), because (D^k)ᵢᵢ = (dᵢ)^k.)

Now consider Jordan matrices: J = λI + N where N is nilpotent. Since λI and N commute, e^{tJ} = e^{tλI}e^{tN} = e^{tλ}(I + tN + ... + t^{k−1}N^{k−1}/(k−1)!), since N^k = 0. Therefore

e^{tN} = ( 1 t t²/2! ... t^{k−1}/(k−1)! ; 0 1 t ... t^{k−2}/(k−2)! ; ⋮ ; 0 0 0 ... 1 ). (27)

Another method: e^{tA} = L⁻¹[(sI − A)⁻¹], the inverse Laplace transform. This requires inverting the matrix (sI − A) and computing the inverse Laplace transform componentwise. E.g. for A = ( 0 1 ; 1 0 ),

(sI − A)⁻¹ = ( s −1 ; −1 s )⁻¹ = 1/(s² − 1) ( s 1 ; 1 s ). (28)
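The Fibonacci pattern in the powers of A can be checked directly; a small sketch using plain nested lists:

```python
def matmul(A, B):
    # 2x2 matrix product with plain nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [1, 1]]
P = [[1, 0], [0, 1]]   # accumulates A^k, starting from A^0 = I

fibs = [0, 1]          # f_0, f_1, ...
while len(fibs) < 12:
    fibs.append(fibs[-1] + fibs[-2])

for k in range(1, 11):
    P = matmul(P, A)
    # A^k = [[f_{k-1}, f_k], [f_k, f_{k+1}]]
    assert P == [[fibs[k - 1], fibs[k]], [fibs[k], fibs[k + 1]]]

print(P)  # A^10 = [[34, 55], [55, 89]]
```

Diagonalizing this A is exactly how one derives Binet's closed form for f_k from the eigenvalues (1 ± √5)/2.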
This method works for small matrices, but it is computationally intensive. Finally, we can also use Cayley-Hamilton: recall that for any A ∈ Mₙ(R), char_A(A) ≡ 0 ∈ Mₙ(R). Therefore Aⁿ can be expressed as a linear combination of lower powers (and similarly A^{n+1} = Aⁿ·A can be too, and so on). Then e^{tA} = Σⱼ₌₀ t^j A^j / j! = Σ_{k<n} β_k(t) A^k. Though this makes the sum of matrices finite, the β_k(t) are themselves formal power series in t.

5.2 Using the Matrix Exponential in Systems

Given ẋ = Ax + Bu and y = Cx + Du, a solution is given by

x(t) = ϕ(t)x₀ + ∫₀ᵗ ϕ(t − s)Bu(s)ds;

this is the solution for LTI systems (with ϕ(t) = e^{tA}), and hence

y(t) = Cϕ(t)x₀ + ∫₀ᵗ Cϕ(t − s)Bu(s)ds + Du(t).

ϕ satisfies several properties:

1. ϕ(t − s)ϕ(s − r) = ϕ(t − r).
2. ϕ(t − t) = id.
3. ϕ(t − s)ϕ(s − t) = id.
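The variation-of-constants formula above can be sanity-checked in the scalar case, where the integral has a closed form for a constant input u. All constants below are illustrative choices:

```python
import math

a, b, u, x0 = -1.0, 2.0, 0.5, 3.0   # illustrative scalar system constants
t = 2.0

# Variation of constants with constant u:
# x(t) = e^{at} x0 + \int_0^t e^{a(t-s)} b u ds
#      = e^{at} x0 + (b u / a)(e^{at} - 1)     (for a != 0)
closed = math.exp(a * t) * x0 + (b * u / a) * (math.exp(a * t) - 1.0)

# Crude forward-Euler integration of xdot = a x + b u as a check.
steps = 20000
dt = t / steps
x = x0
for _ in range(steps):
    x += dt * (a * x + b * u)

print(abs(x - closed))  # small: the two agree up to integration error
```

The same structure carries over to the matrix case, with e^{at} replaced by ϕ(t) = e^{tA} acting on vectors.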
6 Lecture 6

Now consider ẋ = A(t)x + B(t)u and y = C(t)x + D(t)u. For the homogeneous part, the solution is given by a state transition matrix: x(t) = ϕ(t, s)x(s). Differentiating,

A(t)x(t) = ẋ(t) = lim_{Δ→0} [x(t + Δ) − x(t)]/Δ = lim_{Δ→0} [ϕ(t + Δ, s) − ϕ(t, s)]/Δ · x(s), (29)

from which it follows that ∂ϕ/∂t (t, s) = A(t)ϕ(t, s).

Question: how to solve ∂ϕ/∂t (t, s) = A(t)ϕ(t, s)? If A(t) ≡ A is time invariant, then ϕ(t, s) = e^{(t−s)A}. On the other hand, consider the scalar ordinary differential equation ẋ = a(t)x; the solution is x(t) = e^{∫₀ᵗ a(s)ds} x(0). In higher dimension this approach may not work. However, if [A(t), A(s)] = 0 for all t, s ∈ R (where [·, ·] denotes the standard Lie bracket, i.e. the commutator in glₙ), then

ϕ(t, s) = e^{∫ₛᵗ A(u)du}. (30)

The way to verify this is to use the power series expansion from the definition of the matrix exponential. (Two diagonalizable operators commute iff they can be diagonalized simultaneously, i.e. there is a single basis which is diagonal for both of them.)

6.1 Uniqueness of Differential Equations

Let x ∈ V = Rⁿ, and consider

ẋ = f(x, t). (31)

We want to consider conditions which guarantee uniqueness.

Example. For ẋ = √x with x(0) = 0, both x ≡ 0 and x = t²/4 are solutions to this system. Here f_x = 1/(2√x) explodes as x → 0, which is why the solution is not unique.

The statement: if f(x, t) is continuously differentiable in x, then the solution exists and is unique.

Proof Idea. The proof uses Picard approximation. See [Arnold] for details. Work in steps: start with the 0-th order approximation x₀(t) ≡ x₀ and follow the trajectory traced out by the velocities recorded along the guessed solution, to get the first approximation: x₁(t) = x₀ + ∫₀ᵗ f(x₀(s), s)ds; then iterate this procedure to get the next approximated solution, x₂(t) = x₀ + ∫₀ᵗ f(x₁(s), s)ds. Etc.
For example, let ẋ = −x with x(0) = 1 and x₀(t) ≡ 1. Then x₁(t) = x₀ + ∫₀ᵗ (−x₀(s))ds = 1 − t, then x₂(t) = 1 − ∫₀ᵗ (1 − s)ds = 1 − t + t²/2, then x₃(t) = 1 − ∫₀ᵗ (1 − s + s²/2)ds = 1 − t + t²/2 − t³/6. Continuing in this way, it is clear that the solution is x(t) = Σⱼ₌₀ (−1)ʲ tʲ/j! = e^{−t}, and each finite iterate is just a Taylor approximation.
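The iteration can be carried out mechanically on polynomial coefficient lists; a sketch for the initial value problem ẋ = −x, x(0) = 1, whose iterates are the Taylor polynomials of e^{−t}:

```python
from math import factorial

def picard_step(p):
    # One Picard step x_{n+1}(t) = x(0) + \int_0^t (-x_n(s)) ds for the
    # IVP xdot = -x, x(0) = 1, acting on coefficient lists
    # (p[j] = coefficient of t^j). Integration sends c t^j -> c t^{j+1}/(j+1).
    integ = [0.0] + [-p[j] / (j + 1) for j in range(len(p))]
    integ[0] = 1.0   # the initial condition x(0) = 1
    return integ

p = [1.0]            # zeroth approximation x_0(t) = 1
for _ in range(6):
    p = picard_step(p)

# After n steps the iterate is the degree-n Taylor polynomial of e^{-t}.
expected = [(-1.0) ** j / factorial(j) for j in range(7)]
print(p)
```

Each step both adds one more correct Taylor coefficient and leaves the earlier ones fixed, which is the fixed-point behavior the convergence proof formalizes.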
Now given ϕ̇(t) = A(t)ϕ(t), with ϕ(0) = Id, use Picard approximation: ϕ₀(t) = Id; ϕ₁(t) = Id + ∫₀ᵗ A(s)ϕ₀(s)ds = Id + ∫₀ᵗ A(s)ds; ϕ₂(t) = Id + ∫₀ᵗ A(s)ϕ₁(s)ds = Id + ∫₀ᵗ A(s)(Id + ∫₀ˢ A(u)du)ds = Id + ∫₀ᵗ A(s)ds + ∫₀ᵗ ∫₀ˢ A(s)A(u)du ds. In general,

ϕ_k(t) = Id + ∫_{0<s₁<t} A(s₁)ds₁ + ∫_{0<s₁<s₂<t} A(s₂)A(s₁)ds₁ds₂ + ... + ∫_{0<s₁<...<s_k<t} A(s_k)···A(s₁)ds₁···ds_k.

Continuing in this way to infinity, one can prove (analogously to the proof of convergence of the matrix exponential) that this series converges, is well defined (i.e. unique), and is the solution.

Returning to controls: x(t) = ϕ(t, 0)x(0) + ∫₀ᵗ ϕ(t, s)B(s)u(s)ds and y = C(t)x(t) + D(t)u(t).

Given a purported ϕ, what are the conditions which verify that it is a fundamental matrix? det(ϕ(t)) can never vanish (it starts at det(Id) = 1 and stays positive), and conversely any ϕ with positive determinant can be a fundamental matrix of a linear time-varying system ϕ̇(t) = A(t)ϕ(t). For LTI, for example, if ϕ(1) = ( 2 0 ; 0 3 ), then A = ( log(2) 0 ; 0 log(3) ) gives e^A = ϕ(1). But for LTI a positive determinant is not enough: if e.g. ϕ(1) = ( 0 −2 ; 3 0 ), then we have a system which both expands and rotates, which cannot be done time-independently with a single real diagonal generator — but which a time-varying A(t) achieves easily.

Let J = ( 0 −1 ; 1 0 ), the operator of rotation by 90°, and compute e^{tJ}. Since J² = −Id,

e^{tJ} = ( cos(t) −sin(t) ; sin(t) cos(t) ).

So take A(t) = J on [0, π] and A(t) = ( log(2) 0 ; 0 log(3) ) on [π, π + 1]: the resulting fundamental matrix first rotates, then expands. For such systems, x(t) = ϕ(t)x(0) solves ẋ = A(t)x, and a scalar output y(t) = l·x(t) = l·ϕ(t)x(0).
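The rotation formula e^{tJ} can be verified with a truncated power series (adequate for small matrices and small t, though not a production method for computing matrix exponentials):

```python
import math

def mat_exp(A, terms=30):
    # Truncated power series I + A + A^2/2! + ... for a small square
    # matrix given as nested lists. Illustrative only.
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]   # holds A^k / k!
    for k in range(1, terms):
        term = [[sum(term[i][l] * A[l][j] for l in range(n)) / k
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

t = 0.7
J = [[0.0, -1.0], [1.0, 0.0]]
E = mat_exp([[t * J[i][j] for j in range(2)] for i in range(2)])
# e^{tJ} should be the rotation matrix [[cos t, -sin t], [sin t, cos t]].
print(E[0][0] - math.cos(t), E[1][0] - math.sin(t))
```

With 30 terms the series agrees with the trigonometric entries to machine precision for |t| of order 1.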
7 Lecture 7

Consider the LTV system

ẋ(t) = A(t)x(t) (32)

where x ∈ V = Rⁿ and x(s) = x_s. The solution is x(t) = ϕ(t, s)x(s), so a scalar output y(t) = l(t)·x(t) = l(t)ϕ(t, s)x(s) can be written y(t) = l(s)·x(s), where l(s) = l(t)ϕ(t, s). Here ϕ(t, s) : V_s → V_t, while l ∈ V_t* = L(V_t, R), so the adjoint acts backwards: ϕ* : V_t* → V_s*. The Peano-Baker series reads

ϕ(t, s) = Id + ∫_{s≤u₁≤t} A(u₁)du₁ + ∫_{s≤u₁≤u₂≤t} A(u₂)A(u₁)du₁du₂ + ... + ∫_{s≤u₁≤...≤uₙ≤t} A(uₙ)···A(u₁)du₁···duₙ + ...

Differentiating l(s) = l(t)ϕ(t, s) with respect to s gives the adjoint equation dl/ds = −l(s)A(s).

Example. ẋ = v, v̇ = a, ȧ = 0. This system can be represented by ẋ = Ax where

A = ( 0 1 0 ; 0 0 1 ; 0 0 0 ).

Then ϕ(t, s) = e^{(t−s)A}; with initial conditions x(s) = x₀, v(s) = v₀, a(s) = a₀,

( x(t) ; v(t) ; a(t) ) = ( 1 t−s (t−s)²/2 ; 0 1 t−s ; 0 0 1 ) ( x₀ ; v₀ ; a₀ ) = ( x₀ + (t−s)v₀ + (t−s)²/2·a₀ ; v₀ + (t−s)a₀ ; a₀ ).

Say y(t) = x₁(t) = x(s) + (t−s)v(s) + (t−s)²/2·a(s). Then l(s) = (1, t−s, (t−s)²/2), and d/ds l(s) = (0, −1, −(t−s)) = −l(s)A, as the adjoint equation predicts.

7.1 Stability

Consider the system ẋ₁ = x₂, ẋ₂ = 2x₁ + x₂ + u, with output y = x₂ − 2x₁. Applying the Laplace transform: sX₁ = X₂, sX₂ = 2X₁ + X₂ + U, Y = X₂ − 2X₁, so s²X₁ = 2X₁ + sX₁ + U, and solving we obtain X₁ = U/(s² − s − 2), X₂ = sU/(s² − s − 2), and Y = (s − 2)U/((s − 2)(s + 1)) = U/(s + 1). In matrix form, ẋ = ( 0 1 ; 2 1 )x + ( 0 ; 1 )u.

Definition 13 (Lyapunov Stability). x₀ is Lyapunov stable if for any ball B(x₀, δ) one can find ɛ > 0 such that |x(0) − x₀| < ɛ implies |x(t) − x₀| ≤ δ for all t ≥ 0. A more familiar way to say it is that the mapping from initial condition to trajectory is continuous (with the sup norm on trajectories).

Definition 14. An equilibrium point x₀ is asymptotically stable if it is Lyapunov stable and there is an ɛ > 0 such that every trajectory satisfying |x₀ − x(0)| < ɛ has x(t) → x₀ as t → ∞.
8 Lecture 8

Let

ẋ = f(x) (33)

and x₀ = 0 an equilibrium point. Then x₀ is Lyapunov stable if for every ɛ > 0 there is a δ > 0 such that |x(0) − x₀| < δ guarantees |x(t) − x₀| < ɛ for all t ≥ 0. x₀ is asymptotically stable if it is Lyapunov stable and, for some δ > 0, every |x(0) − x₀| < δ gives x(t) → x₀. Global asymptotic stability is asymptotic stability with no condition on the initial condition. Exponential stability is stability with a negative exponential bound, |x(t)| ≤ Ce^{−αt}|x(0)| for some α > 0.

8.1 Stability for LTI Systems

Theorem 3. The linear time-invariant system

ẋ = Ax (34)

is Lyapunov stable iff every λ ∈ σ(A) = {λ ∈ C : Ax = λx for some x ≠ 0} satisfies Re(λ) ≤ 0, and the Jordan cells of the λ ∈ σ(A) with Re(λ) = 0 have size 1.

Recall that any operator A : Rⁿ → Rⁿ can be decomposed (i.e. represented as a matrix over some basis) as

A = ( J₁ 0 ; ⋱ ; 0 J_k ) (35)

where J_j has the form λI_{n_j} + N_{n_j}, with I_{n_j} the identity matrix in M_{n_j}(R) and N_{n_j} the canonical nilpotent matrix (N_{n_j}^{n_j} = 0) in M_{n_j}(R). The sizes of the Jordan blocks are constrained by the multiplicity of the eigenvalue. Note: this does not mean that an eigenvalue of multiplicity 3 has a Jordan block of size 3 — the Jordan form for this eigenvalue can have one block of size 3, or blocks of sizes 2 and 1, or three blocks of size 1.

The reason for the size-1 condition is that a nontrivial Jordan block, e.g. J = ( λ 1 ; 0 λ ) with λ = 0, gives

ẋ = ( 0 1 ; 0 0 ) x (36)

which has solution

x(t) = e^{tA}x(0) = ( 1 t ; 0 1 ) x(0), (37)

which grows linearly in t.

Theorem 4. A linear time-invariant system is asymptotically stable iff it is globally exponentially stable.
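The effect of a nontrivial Jordan cell on the imaginary axis can be seen directly: for the 2×2 block above with λ = 0, A² = 0, so e^{tA} = I + tA exactly, and solutions grow linearly:

```python
# For the Jordan block A = [[0, 1], [0, 0]] (eigenvalue 0, cell size 2),
# A^2 = 0, so e^{tA} = I + tA exactly and the flow is:
def flow(t, x):
    return [x[0] + t * x[1], x[1]]

x0 = [0.0, 1.0]
norms = [abs(flow(t, x0)[0]) + abs(flow(t, x0)[1]) for t in (1, 10, 100)]
print(norms)  # [2.0, 11.0, 101.0] -- unbounded, so not Lyapunov stable
```

By contrast, two size-1 blocks with eigenvalue 0 give e^{tA} = I, whose solutions are constant and hence Lyapunov stable, which is exactly the distinction Theorem 3 draws.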
Proof. (⇐): Immediate from the definitions. (⇒): First of all, asymptotic stability implies global asymptotic stability for linear systems: linearity guarantees that if everything which starts close approaches the origin, then by (linear) scaling, everything else does too (e.g. if x̂ = cx, then dx̂/dt = c·dx/dt = cAx = A(cx) = Ax̂). This all happens iff Re(σ(A)) < 0. (If Re(λ) < 0 for all λ ∈ σ(A), then A is defined to be Hurwitz.)

8.2 Lyapunov Theory

Given ẋ = f(x) for x ∈ V = R^d, let U : V → R be a continuously differentiable indicator, with x₀ an equilibrium point, and suppose the following are satisfied on some open neighborhood Ω of x₀:

1. U(x₀) = 0;
2. U > 0 elsewhere (positive definite);
3. dU(x(t))/dt ≤ 0 along trajectories.

If a function U satisfies these conditions, U is called a Lyapunov function.

Theorem 5. If U is a Lyapunov function for ẋ = f(x) at x₀ ∈ Ω, then x₀ is Lyapunov stable.

Proof. Given some B(x₀, δ) ⊆ Ω, take U₀ = inf_{x∈S_δ} U(x) over the sphere S_δ, which is certainly attained since S_δ is compact, and U₀ > 0. Consider the level set {U(x) = U₀/2}; on it, take the point closest to x₀, i.e. realize r = inf_{x : U(x)=U₀/2} |x − x₀| > 0. Within the ball B(x₀, r), all values of U are smaller than U₀/2, and since U does not increase along trajectories, no trajectory starting there can reach S_δ.

Calculation: dU(x(t))/dt = ∇U · ẋ(t) = (∂U/∂x₁, ..., ∂U/∂xₙ) · f(x).

Example. Let

ẋ₁ = −x₂ + x₁(−ɛ + x₁² + x₂²) (38)
ẋ₂ = x₁ + x₂(−ɛ + x₁² + x₂²) (39)

and let U(x₁, x₂) = x₁² + x₂². We verify that this function is indeed a Lyapunov function: first, U(0, 0) = 0 and U is clearly positive definite. Then ∇U = (2x₁, 2x₂), so, writing r² = x₁² + x₂²,

dU(x(t))/dt = ∇U · f(x) = 2x₁(−x₂ − ɛx₁ + x₁r²) + 2x₂(x₁ − ɛx₂ + x₂r²) = −2ɛr² + 2r⁴,

which is negative for 0 < r² < ɛ. So choose the neighborhood Ω = {x₁² + x₂² < ɛ}.
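The Lyapunov calculation can be checked numerically, assuming the vector field ẋ₁ = −x₂ + x₁(−ε + r²), ẋ₂ = x₁ + x₂(−ε + r²); the value ε = 0.25 is an arbitrary illustrative choice:

```python
import random

eps = 0.25   # illustrative value of the parameter
random.seed(0)

def f(x1, x2):
    r2 = x1 * x1 + x2 * x2
    return (-x2 + x1 * (-eps + r2), x1 + x2 * (-eps + r2))

def Udot(x1, x2):
    # grad U . f with U = x1^2 + x2^2
    f1, f2 = f(x1, x2)
    return 2 * x1 * f1 + 2 * x2 * f2

# Inside the ball r^2 < eps, Udot = -2 eps r^2 + 2 r^4 < 0 (except at 0).
for _ in range(1000):
    x1 = random.uniform(-0.4, 0.4)
    x2 = random.uniform(-0.4, 0.4)
    r2 = x1 * x1 + x2 * x2
    expected = -2 * eps * r2 + 2 * r2 * r2
    assert abs(Udot(x1, x2) - expected) < 1e-12
    if 0 < r2 < eps:
        assert Udot(x1, x2) < 0
print("Udot matches -2*eps*r^2 + 2*r^4")
```

Note that outside the ball r² > ε the sign flips, so U certifies only local stability, consistent with the choice of neighborhood Ω.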
8.3 Relation between Linear and Nonlinear

Given

ẋ = f(x) (40)

with f(x₀) = 0, for x near x₀ we can write f(x) = f(x₀) + Jf(x₀)(x − x₀) + ..., where the dots encapsulate nonlinear terms (Taylor). I.e., taking x₀ = 0,

ẋ = Ax + g(x) (41)

with |g(x)| ≤ C|x|² for small x. If U is a good Lyapunov function for the linear part Ax (meaning, additionally, that ∇U · Ax is negative definite and U is homogeneous), then U is also a Lyapunov function for ẋ = Ax + g(x).

Definition 15. U is homogeneous of degree k if U(λx) = λ^k U(x).
9 Lecture 9

9.1 Lyapunov Method

Given ẋ = f(x) with x ∈ V = R^d and f(0) = 0, let U : V → R be positive definite with ∇U · f ≤ 0 in some open neighborhood Ω containing 0, and write f(x) = Ax + g(x) with |g(x)| ≤ C|x|² in Ω.

Examples.

1. Homogeneous polynomials of degree k are those of the form Σ_{|J|=k} a_J x^J, e.g. Σ aⱼxⱼ (degree 1), Σ aᵢⱼxᵢxⱼ (degree 2), or sums of terms aⱼ xᵢ^i xₙ^n with i + n = k. A homogeneous U(x) can be positive definite only in even degree 2k > 0.

2. U homogeneous of degree 2k > 0 is positive definite iff U|_{S(0,1)} > 0 (where S(0, 1) denotes the unit sphere): for take x = λs with s ∈ S(0, 1) and λ > 0; then U(x) = λ^{2k} U(s).

3. Let Q(x) = ∇U · Ax = (∂U/∂x₁, ..., ∂U/∂x_d) A x. Each ∂U/∂xⱼ is a polynomial of degree 2k − 1, so the result Q(x) is a homogeneous polynomial of degree 2k. This implies that if Q|_{S(0,1)} < 0, then ∇U · f(x) < 0 in some open neighborhood of 0. Indeed, for small perturbations, ∇U · f(x) = Q(x) + ∇U · g(x), with the first term homogeneous of degree 2k and |∇U · g(x)| ≤ |∇U||g(x)|. Recall that |∂U/∂xⱼ| ≤ Cⱼ|x|^{2k−1}, which implies that |∇U| is bounded by, e.g., d·maxⱼ{C₁, ..., C_d}·|x|^{2k−1}, and therefore |∇U · g(x)| ≤ B|x|^{2k+1} (since |g(x)| ≤ C|x|²). Now if Q(x) ≤ −a (a > 0) for |x| = 1, then Q(x) ≤ −a|x|^{2k} by homogeneity: x = |x| · x/|x|, so Q(x) = |x|^{2k} Q(x/|x|). Hence Q(x) + ∇U · g(x) ≤ −a|x|^{2k} + B|x|^{2k+1}, and for |x| ≤ a/(2B) we have ∇U · f(x) ≤ −(a/2)|x|^{2k}, which is negative definite and so satisfies the conditions for Lyapunov stability.

The point of this last example is that stability of nonlinear systems can be determined, at least locally, by inspection of the stability of the linear part.

9.2 Quadratic Forms

Let V = R^d and Q : V → R; we define the polarization Q(x, y) = ½(Q(x + y) − Q(x) − Q(y)). Q is quadratic if this polarization is bilinear, i.e. Q(ax + x′, by + y′) = abQ(x, y) + aQ(x, y′) + bQ(x′, y) + Q(x′, y′). More concretely, fix a basis B; then the quadratic functions are those of the form Q(x) = Σ_{1≤i,j≤d} aᵢⱼxᵢxⱼ. They are important because Lyapunov functions for linear time-invariant systems will generally be quadratic forms. The matrix representing the form can be taken symmetric: Q(x) = xᵀAx.
There is a natural quadratic form: C(x) = Σ xⱼ² = xᵀx = |x|². Note: these computations depend on the system of coordinates (choice of basis). So far we have two quadratic forms: one defined by A, and the unit matrix, which gives the norm of a vector.

Theorem 6 (Sylvester's law of inertia). For any quadratic form Q : V → R, there exists P ∈ GLₙ(R) such that Q(Py) = Σ ±yᵢ². Thus the quadratic form is characterized by its signature (n₊, n₋, n₀), the numbers of +yᵢ², −yᵢ², and 0 terms, respectively. For example, in d = 2 one can have positive definite (2, 0, 0), negative definite (0, 2, 0), positive semidefinite (1, 0, 1), or nothing (0, 0, 2).

Example. Let Q(x) = 9x₁² + x₂², which is not in canonical form. Therefore, define x̃₁ = 3x₁ and x̃₂ = x₂; then Q(x) = x̃₁² + x̃₂². (Here we are stretching the space in one dimension.) If, however, we want to preserve the metric, it won't be as easy to transform to the canonical form:

Theorem 7. If P is distance preserving, i.e. |x| = |Px| for all x ∈ V, or equivalently PᵀP = Id_V (P orthogonal), then any quadratic form Q can be transformed by such an orthonormal P to Q(Py) = Σ λⱼyⱼ².

Theorem 8. With conditions as in the previous theorem, each λⱼ ∈ R.

Proof. Let Ax = λx with x possibly complex, x = u + vi (i = √−1). Then x̄ᵀAx = Σ_{k,j} a_{kj}(u_k − v_k i)(u_j + v_j i), and pairing the (k, j) and (j, k) terms (using a_{kj} = a_{jk}), after staring at this equation for a while it becomes clear that all imaginary parts cancel, so x̄ᵀAx is real. Furthermore, x̄ᵀx = Σ (uⱼ² + vⱼ²) > 0 is real, so λ = x̄ᵀAx / x̄ᵀx is real.

Proposition 3 (Rayleigh's Min-Max Characterization of the λᵢ). To characterize the largest λ, look on the unit sphere: it is the direction of steepest growth of Q. In other words, λ_max = max_{|x|=1} Q(x), and λ_min = min_{|x|=1} Q(x). More generally, order λ₁ ≤ ... ≤ λ_d; then

λ_k = min_{L_k a linear subspace of dimension k} max_{|x|=1, x∈L_k} Q(x).
Theorem. ẋ = Ax is globally exponentially stable (GES) iff A is Hurwitz (i.e., max Re(σ(A)) < 0).
10 Lecture 10

Proposition. A symmetric A ∈ Mₙ(R) is positive definite iff all nested (leading principal) minors are positive.

Proof. (⇒): A is positive definite iff all λᵢ > 0, so det(A) = Πⱼ₌₁ⁿ λⱼ > 0; the same applies to each nested minor, since the restriction of a positive definite form to a subspace is positive definite.
/88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix
More informationBASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x
BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,
More informationLinear Algebra Lecture Notes-II
Linear Algebra Lecture Notes-II Vikas Bist Department of Mathematics Panjab University, Chandigarh-64 email: bistvikas@gmail.com Last revised on March 5, 8 This text is based on the lectures delivered
More informationLINEAR ALGEBRA MICHAEL PENKAVA
LINEAR ALGEBRA MICHAEL PENKAVA 1. Linear Maps Definition 1.1. If V and W are vector spaces over the same field K, then a map λ : V W is called a linear map if it satisfies the two conditions below: (1)
More informationELEC 3035, Lecture 3: Autonomous systems Ivan Markovsky
ELEC 3035, Lecture 3: Autonomous systems Ivan Markovsky Equilibrium points and linearization Eigenvalue decomposition and modal form State transition matrix and matrix exponential Stability ELEC 3035 (Part
More informationIr O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )
Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O
More informationNotes on the matrix exponential
Notes on the matrix exponential Erik Wahlén erik.wahlen@math.lu.se February 14, 212 1 Introduction The purpose of these notes is to describe how one can compute the matrix exponential e A when A is not
More information6 Inner Product Spaces
Lectures 16,17,18 6 Inner Product Spaces 6.1 Basic Definition Parallelogram law, the ability to measure angle between two vectors and in particular, the concept of perpendicularity make the euclidean space
More informationMATH 583A REVIEW SESSION #1
MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),
More informationLinear Algebra 1. M.T.Nair Department of Mathematics, IIT Madras. and in that case x is called an eigenvector of T corresponding to the eigenvalue λ.
Linear Algebra 1 M.T.Nair Department of Mathematics, IIT Madras 1 Eigenvalues and Eigenvectors 1.1 Definition and Examples Definition 1.1. Let V be a vector space (over a field F) and T : V V be a linear
More informationREPRESENTATION THEORY WEEK 7
REPRESENTATION THEORY WEEK 7 1. Characters of L k and S n A character of an irreducible representation of L k is a polynomial function constant on every conjugacy class. Since the set of diagonalizable
More informationRepresentation Theory
Representation Theory Representations Let G be a group and V a vector space over a field k. A representation of G on V is a group homomorphism ρ : G Aut(V ). The degree (or dimension) of ρ is just dim
More informationExamples include: (a) the Lorenz system for climate and weather modeling (b) the Hodgkin-Huxley system for neuron modeling
1 Introduction Many natural processes can be viewed as dynamical systems, where the system is represented by a set of state variables and its evolution governed by a set of differential equations. Examples
More information= A(λ, t)x. In this chapter we will focus on the case that the matrix A does not depend on time (so that the ODE is autonomous):
Chapter 2 Linear autonomous ODEs 2 Linearity Linear ODEs form an important class of ODEs They are characterized by the fact that the vector field f : R m R p R R m is linear at constant value of the parameters
More informationLecture 2: Linear Algebra Review
EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1
More information1. Find the solution of the following uncontrolled linear system. 2 α 1 1
Appendix B Revision Problems 1. Find the solution of the following uncontrolled linear system 0 1 1 ẋ = x, x(0) =. 2 3 1 Class test, August 1998 2. Given the linear system described by 2 α 1 1 ẋ = x +
More information1 Lyapunov theory of stability
M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability
More information7 Planar systems of linear ODE
7 Planar systems of linear ODE Here I restrict my attention to a very special class of autonomous ODE: linear ODE with constant coefficients This is arguably the only class of ODE for which explicit solution
More informationLecture 4: Numerical solution of ordinary differential equations
Lecture 4: Numerical solution of ordinary differential equations Department of Mathematics, ETH Zürich General explicit one-step method: Consistency; Stability; Convergence. High-order methods: Taylor
More informationAn introduction to Birkhoff normal form
An introduction to Birkhoff normal form Dario Bambusi Dipartimento di Matematica, Universitá di Milano via Saldini 50, 0133 Milano (Italy) 19.11.14 1 Introduction The aim of this note is to present an
More informationTOPOLOGICAL EQUIVALENCE OF LINEAR ORDINARY DIFFERENTIAL EQUATIONS
TOPOLOGICAL EQUIVALENCE OF LINEAR ORDINARY DIFFERENTIAL EQUATIONS ALEX HUMMELS Abstract. This paper proves a theorem that gives conditions for the topological equivalence of linear ordinary differential
More informationBalanced Truncation 1
Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.242, Fall 2004: MODEL REDUCTION Balanced Truncation This lecture introduces balanced truncation for LTI
More informationLecture 4. Chapter 4: Lyapunov Stability. Eugenio Schuster. Mechanical Engineering and Mechanics Lehigh University.
Lecture 4 Chapter 4: Lyapunov Stability Eugenio Schuster schuster@lehigh.edu Mechanical Engineering and Mechanics Lehigh University Lecture 4 p. 1/86 Autonomous Systems Consider the autonomous system ẋ
More information0.1 Rational Canonical Forms
We have already seen that it is useful and simpler to study linear systems using matrices. But matrices are themselves cumbersome, as they are stuffed with many entries, and it turns out that it s best
More informationMath 108b: Notes on the Spectral Theorem
Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator
More informationLinear algebra 2. Yoav Zemel. March 1, 2012
Linear algebra 2 Yoav Zemel March 1, 2012 These notes were written by Yoav Zemel. The lecturer, Shmuel Berger, should not be held responsible for any mistake. Any comments are welcome at zamsh7@gmail.com.
More informationECEN 605 LINEAR SYSTEMS. Lecture 7 Solution of State Equations 1/77
1/77 ECEN 605 LINEAR SYSTEMS Lecture 7 Solution of State Equations Solution of State Space Equations Recall from the previous Lecture note, for a system: ẋ(t) = A x(t) + B u(t) y(t) = C x(t) + D u(t),
More informationVectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1. x 2. x =
Linear Algebra Review Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1 x x = 2. x n Vectors of up to three dimensions are easy to diagram.
More informationChap. 3. Controlled Systems, Controllability
Chap. 3. Controlled Systems, Controllability 1. Controllability of Linear Systems 1.1. Kalman s Criterion Consider the linear system ẋ = Ax + Bu where x R n : state vector and u R m : input vector. A :
More informationME Fall 2001, Fall 2002, Spring I/O Stability. Preliminaries: Vector and function norms
I/O Stability Preliminaries: Vector and function norms 1. Sup norms are used for vectors for simplicity: x = max i x i. Other norms are also okay 2. Induced matrix norms: let A R n n, (i stands for induced)
More informationTHE MINIMAL POLYNOMIAL AND SOME APPLICATIONS
THE MINIMAL POLYNOMIAL AND SOME APPLICATIONS KEITH CONRAD. Introduction The easiest matrices to compute with are the diagonal ones. The sum and product of diagonal matrices can be computed componentwise
More informationMATHEMATICS 217 NOTES
MATHEMATICS 27 NOTES PART I THE JORDAN CANONICAL FORM The characteristic polynomial of an n n matrix A is the polynomial χ A (λ) = det(λi A), a monic polynomial of degree n; a monic polynomial in the variable
More informationChapter 5 Eigenvalues and Eigenvectors
Chapter 5 Eigenvalues and Eigenvectors Outline 5.1 Eigenvalues and Eigenvectors 5.2 Diagonalization 5.3 Complex Vector Spaces 2 5.1 Eigenvalues and Eigenvectors Eigenvalue and Eigenvector If A is a n n
More informationMath 110 Linear Algebra Midterm 2 Review October 28, 2017
Math 11 Linear Algebra Midterm Review October 8, 17 Material Material covered on the midterm includes: All lectures from Thursday, Sept. 1st to Tuesday, Oct. 4th Homeworks 9 to 17 Quizzes 5 to 9 Sections
More informationLinear System Theory
Linear System Theory Wonhee Kim Lecture 4 Apr. 4, 2018 1 / 40 Recap Vector space, linear space, linear vector space Subspace Linearly independence and dependence Dimension, Basis, Change of Basis 2 / 40
More informationQuadratic forms. Here. Thus symmetric matrices are diagonalizable, and the diagonalization can be performed by means of an orthogonal matrix.
Quadratic forms 1. Symmetric matrices An n n matrix (a ij ) n ij=1 with entries on R is called symmetric if A T, that is, if a ij = a ji for all 1 i, j n. We denote by S n (R) the set of all n n symmetric
More informationWeek Quadratic forms. Principal axes theorem. Text reference: this material corresponds to parts of sections 5.5, 8.2,
Math 051 W008 Margo Kondratieva Week 10-11 Quadratic forms Principal axes theorem Text reference: this material corresponds to parts of sections 55, 8, 83 89 Section 41 Motivation and introduction Consider
More informationLinear Algebra: Matrix Eigenvalue Problems
CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given
More informationMath 443 Differential Geometry Spring Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook.
Math 443 Differential Geometry Spring 2013 Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook. Endomorphisms of a Vector Space This handout discusses
More informationPutzer s Algorithm. Norman Lebovitz. September 8, 2016
Putzer s Algorithm Norman Lebovitz September 8, 2016 1 Putzer s algorithm The differential equation dx = Ax, (1) dt where A is an n n matrix of constants, possesses the fundamental matrix solution exp(at),
More informationDS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.
DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1
More informationLINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM
LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator
More informationEigenvalues, Eigenvectors. Eigenvalues and eigenvector will be fundamentally related to the nature of the solutions of state space systems.
Chapter 3 Linear Algebra In this Chapter we provide a review of some basic concepts from Linear Algebra which will be required in order to compute solutions of LTI systems in state space form, discuss
More informationSPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS
SPRING 006 PRELIMINARY EXAMINATION SOLUTIONS 1A. Let G be the subgroup of the free abelian group Z 4 consisting of all integer vectors (x, y, z, w) such that x + 3y + 5z + 7w = 0. (a) Determine a linearly
More informationAlgebra I Fall 2007
MIT OpenCourseWare http://ocw.mit.edu 18.701 Algebra I Fall 007 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 18.701 007 Geometry of the Special Unitary
More informationJordan Normal Form. Chapter Minimal Polynomials
Chapter 8 Jordan Normal Form 81 Minimal Polynomials Recall p A (x) =det(xi A) is called the characteristic polynomial of the matrix A Theorem 811 Let A M n Then there exists a unique monic polynomial q
More informationJORDAN NORMAL FORM NOTES
18.700 JORDAN NORMAL FORM NOTES These are some supplementary notes on how to find the Jordan normal form of a small matrix. First we recall some of the facts from lecture, next we give the general algorithm
More informationGQE ALGEBRA PROBLEMS
GQE ALGEBRA PROBLEMS JAKOB STREIPEL Contents. Eigenthings 2. Norms, Inner Products, Orthogonality, and Such 6 3. Determinants, Inverses, and Linear (In)dependence 4. (Invariant) Subspaces 3 Throughout
More informationAN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES
AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim
More information1. Nonlinear Equations. This lecture note excerpted parts from Michael Heath and Max Gunzburger. f(x) = 0
Numerical Analysis 1 1. Nonlinear Equations This lecture note excerpted parts from Michael Heath and Max Gunzburger. Given function f, we seek value x for which where f : D R n R n is nonlinear. f(x) =
More informationLecture 13 The Fundamental Forms of a Surface
Lecture 13 The Fundamental Forms of a Surface In the following we denote by F : O R 3 a parametric surface in R 3, F(u, v) = (x(u, v), y(u, v), z(u, v)). We denote partial derivatives with respect to the
More informationThe following definition is fundamental.
1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic
More informationNORMS ON SPACE OF MATRICES
NORMS ON SPACE OF MATRICES. Operator Norms on Space of linear maps Let A be an n n real matrix and x 0 be a vector in R n. We would like to use the Picard iteration method to solve for the following system
More informationEXERCISES ON DETERMINANTS, EIGENVALUES AND EIGENVECTORS. 1. Determinants
EXERCISES ON DETERMINANTS, EIGENVALUES AND EIGENVECTORS. Determinants Ex... Let A = 0 4 4 2 0 and B = 0 3 0. (a) Compute 0 0 0 0 A. (b) Compute det(2a 2 B), det(4a + B), det(2(a 3 B 2 )). 0 t Ex..2. For
More informationfy (X(g)) Y (f)x(g) gy (X(f)) Y (g)x(f)) = fx(y (g)) + gx(y (f)) fy (X(g)) gy (X(f))
1. Basic algebra of vector fields Let V be a finite dimensional vector space over R. Recall that V = {L : V R} is defined to be the set of all linear maps to R. V is isomorphic to V, but there is no canonical
More informationA Brief Outline of Math 355
A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting
More informationKnowledge Discovery and Data Mining 1 (VO) ( )
Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory
More informationFinal Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2
Final Review Sheet The final will cover Sections Chapters 1,2,3 and 4, as well as sections 5.1-5.4, 6.1-6.2 and 7.1-7.3 from chapters 5,6 and 7. This is essentially all material covered this term. Watch
More informationCS 143 Linear Algebra Review
CS 143 Linear Algebra Review Stefan Roth September 29, 2003 Introductory Remarks This review does not aim at mathematical rigor very much, but instead at ease of understanding and conciseness. Please see
More informationLMI Methods in Optimal and Robust Control
LMI Methods in Optimal and Robust Control Matthew M. Peet Arizona State University Lecture 15: Nonlinear Systems and Lyapunov Functions Overview Our next goal is to extend LMI s and optimization to nonlinear
More informationChapter III. Stability of Linear Systems
1 Chapter III Stability of Linear Systems 1. Stability and state transition matrix 2. Time-varying (non-autonomous) systems 3. Time-invariant systems 1 STABILITY AND STATE TRANSITION MATRIX 2 In this chapter,
More informationCS 246 Review of Linear Algebra 01/17/19
1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector
More informationLecture 19: Isometries, Positive operators, Polar and singular value decompositions; Unitary matrices and classical groups; Previews (1)
Lecture 19: Isometries, Positive operators, Polar and singular value decompositions; Unitary matrices and classical groups; Previews (1) Travis Schedler Thurs, Nov 18, 2010 (version: Wed, Nov 17, 2:15
More informationChapter 7. Canonical Forms. 7.1 Eigenvalues and Eigenvectors
Chapter 7 Canonical Forms 7.1 Eigenvalues and Eigenvectors Definition 7.1.1. Let V be a vector space over the field F and let T be a linear operator on V. An eigenvalue of T is a scalar λ F such that there
More informationEcon 204 Supplement to Section 3.6 Diagonalization and Quadratic Forms. 1 Diagonalization and Change of Basis
Econ 204 Supplement to Section 3.6 Diagonalization and Quadratic Forms De La Fuente notes that, if an n n matrix has n distinct eigenvalues, it can be diagonalized. In this supplement, we will provide
More informationLinear algebra and applications to graphs Part 1
Linear algebra and applications to graphs Part 1 Written up by Mikhail Belkin and Moon Duchin Instructor: Laszlo Babai June 17, 2001 1 Basic Linear Algebra Exercise 1.1 Let V and W be linear subspaces
More information8. Diagonalization.
8. Diagonalization 8.1. Matrix Representations of Linear Transformations Matrix of A Linear Operator with Respect to A Basis We know that every linear transformation T: R n R m has an associated standard
More informationLinear algebra II Homework #1 solutions A = This means that every eigenvector with eigenvalue λ = 1 must have the form
Linear algebra II Homework # solutions. Find the eigenvalues and the eigenvectors of the matrix 4 6 A =. 5 Since tra = 9 and deta = = 8, the characteristic polynomial is f(λ) = λ (tra)λ+deta = λ 9λ+8 =
More informationLinear Algebra Notes. Lecture Notes, University of Toronto, Fall 2016
Linear Algebra Notes Lecture Notes, University of Toronto, Fall 2016 (Ctd ) 11 Isomorphisms 1 Linear maps Definition 11 An invertible linear map T : V W is called a linear isomorphism from V to W Etymology:
More informationLinear Algebra. Workbook
Linear Algebra Workbook Paul Yiu Department of Mathematics Florida Atlantic University Last Update: November 21 Student: Fall 2011 Checklist Name: A B C D E F F G H I J 1 2 3 4 5 6 7 8 9 10 xxx xxx xxx
More informationNOTES ON LINEAR ODES
NOTES ON LINEAR ODES JONATHAN LUK We can now use all the discussions we had on linear algebra to study linear ODEs Most of this material appears in the textbook in 21, 22, 23, 26 As always, this is a preliminary
More informationEigenvalues and Eigenvectors A =
Eigenvalues and Eigenvectors Definition 0 Let A R n n be an n n real matrix A number λ R is a real eigenvalue of A if there exists a nonzero vector v R n such that A v = λ v The vector v is called an eigenvector
More informationFirst we introduce the sets that are going to serve as the generalizations of the scalars.
Contents 1 Fields...................................... 2 2 Vector spaces.................................. 4 3 Matrices..................................... 7 4 Linear systems and matrices..........................
More informationMath 113 Homework 5. Bowei Liu, Chao Li. Fall 2013
Math 113 Homework 5 Bowei Liu, Chao Li Fall 2013 This homework is due Thursday November 7th at the start of class. Remember to write clearly, and justify your solutions. Please make sure to put your name
More informationTutorials in Optimization. Richard Socher
Tutorials in Optimization Richard Socher July 20, 2008 CONTENTS 1 Contents 1 Linear Algebra: Bilinear Form - A Simple Optimization Problem 2 1.1 Definitions........................................ 2 1.2
More informationOrdinary Differential Equations II
Ordinary Differential Equations II February 9 217 Linearization of an autonomous system We consider the system (1) x = f(x) near a fixed point x. As usual f C 1. Without loss of generality we assume x
More informationMath 312 Final Exam Jerry L. Kazdan May 5, :00 2:00
Math 32 Final Exam Jerry L. Kazdan May, 204 2:00 2:00 Directions This exam has three parts. Part A has shorter questions, (6 points each), Part B has 6 True/False questions ( points each), and Part C has
More informationTopic # /31 Feedback Control Systems. Analysis of Nonlinear Systems Lyapunov Stability Analysis
Topic # 16.30/31 Feedback Control Systems Analysis of Nonlinear Systems Lyapunov Stability Analysis Fall 010 16.30/31 Lyapunov Stability Analysis Very general method to prove (or disprove) stability of
More informationSPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS
SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS G. RAMESH Contents Introduction 1 1. Bounded Operators 1 1.3. Examples 3 2. Compact Operators 5 2.1. Properties 6 3. The Spectral Theorem 9 3.3. Self-adjoint
More information5 More on Linear Algebra
14.102, Math for Economists Fall 2004 Lecture Notes, 9/23/2004 These notes are primarily based on those written by George Marios Angeletos for the Harvard Math Camp in 1999 and 2000, and updated by Stavros
More informationA linear algebra proof of the fundamental theorem of algebra
A linear algebra proof of the fundamental theorem of algebra Andrés E. Caicedo May 18, 2010 Abstract We present a recent proof due to Harm Derksen, that any linear operator in a complex finite dimensional
More informationMatrices A brief introduction
Matrices A brief introduction Basilio Bona DAUIN Politecnico di Torino Semester 1, 2014-15 B. Bona (DAUIN) Matrices Semester 1, 2014-15 1 / 44 Definitions Definition A matrix is a set of N real or complex
More informationNATIONAL BOARD FOR HIGHER MATHEMATICS. Research Scholarships Screening Test. Saturday, January 20, Time Allowed: 150 Minutes Maximum Marks: 40
NATIONAL BOARD FOR HIGHER MATHEMATICS Research Scholarships Screening Test Saturday, January 2, 218 Time Allowed: 15 Minutes Maximum Marks: 4 Please read, carefully, the instructions that follow. INSTRUCTIONS
More informationA brief introduction to ordinary differential equations
Chapter 1 A brief introduction to ordinary differential equations 1.1 Introduction An ordinary differential equation (ode) is an equation that relates a function of one variable, y(t), with its derivative(s)
More informationInner product spaces. Layers of structure:
Inner product spaces Layers of structure: vector space normed linear space inner product space The abstract definition of an inner product, which we will see very shortly, is simple (and by itself is pretty
More information