DIAGONALIZABLE LINEAR SYSTEMS AND STABILITY

1. Algebraic facts.

We first recall two descriptions of matrix multiplication. Let A be n × n and let P be n × r, given by its columns: P = [v_1 v_2 ... v_r], where the v_i ∈ R^n. Then the n × r matrix AP, given by columns, is:

AP = [Av_1 Av_2 ... Av_r],  Av_i ∈ R^n.

We used the fact that A acts as a linear operator on R^n (via left multiplication of column vectors). So we can say: left multiplication by A corresponds to letting A act on each column vector.

Now let P = [v_1 ... v_r] be n × r (given by its column vectors v_i ∈ R^n), and let B be r × r. Then the n × r matrix PB is obtained in the following way: the i-th column of PB (a vector in R^n) is the linear combination of the v_i, with coefficients given by the entries in the i-th column of B:

column_i(B) = (c_1, c_2, ..., c_r)  implies  column_i(PB) = c_1 v_1 + c_2 v_2 + ... + c_r v_r ∈ R^n.

Briefly: right multiplication by a square matrix takes linear combinations of columns.

Direct sums of subspaces. Let E_1, E_2, ..., E_r be subspaces of R^n with the property that any two intersect only at the zero vector: E_i ∩ E_j = {0} for i ≠ j. The subspace of R^n consisting of all linear combinations

c_1 v_1 + ... + c_r v_r,  with c_i ∈ R, v_i ∈ E_i,

is called the direct sum of E_1, ..., E_r, denoted E_1 ⊕ ... ⊕ E_r. In particular, if we have E_1 ⊕ ... ⊕ E_r = R^n, and B_i is a basis of E_i for each i = 1, ..., r, the union B = B_1 ∪ ... ∪ B_r is a basis of R^n.

2. Matrices diagonalizable over R.

Recall that a real number λ is an eigenvalue of the n × n matrix A if the vector equation Av = λv has nonzero solutions v ∈ R^n, v ≠ 0. Equivalently, the subspace of R^n

E(λ) = {v ∈ R^n : Av = λv}

is not the trivial subspace {0}. In this case E(λ) is the eigenspace for λ. We say the matrix A is diagonalizable over R if R^n admits a basis consisting of eigenvectors of A. Equivalently, if we let E(λ_i), i = 1, ..., r, be the eigenspaces of A (with λ_i the eigenvalues), we have:

E(λ_1) ⊕ ... ⊕ E(λ_r) = R^n.
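The two column descriptions of matrix multiplication, and the diagonalizability criterion, are easy to check numerically. A minimal sketch in NumPy (the matrices A and B below are illustrative choices, not from the text):

```python
import numpy as np

# Illustrative 2x2 matrix (an assumption for this sketch); its eigenvalues
# are 3 and 1, so it is diagonalizable over R.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lams, P = np.linalg.eig(A)   # columns of P are eigenvectors v_i

# Left multiplication acts on each column: AP = [Av_1 Av_2],
# and each column is an eigenvector: A v_i = lam_i v_i.
AP = A @ P
for i in range(2):
    assert np.allclose(AP[:, i], A @ P[:, i])
    assert np.allclose(A @ P[:, i], lams[i] * P[:, i])

# Right multiplication takes linear combinations of columns:
# column i of P @ B is  B[0, i] v_1 + B[1, i] v_2.
B = np.array([[1.0, -2.0],
              [3.0,  4.0]])
PB = P @ B
for i in range(2):
    assert np.allclose(PB[:, i], B[0, i] * P[:, 0] + B[1, i] * P[:, 1])

# The eigenvectors form a basis of R^2 (P invertible): A is diagonalizable.
assert abs(np.linalg.det(P)) > 1e-12
```

Both column facts together give AP = PΛ, the identity used in the next section.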

Let B = {e_1, ..., e_n} be a basis of R^n consisting of eigenvectors of A: Ae_i = λ_i e_i, where i = 1, ..., n (so the λ_i are not necessarily all distinct). Form the n × n matrices:

P = [e_1 e_2 ... e_n],  Λ = diag(λ_1, ..., λ_n)

(that is, Λ is the diagonal matrix with the given diagonal entries). We have:

AP = [Ae_1 ... Ae_n] = [λ_1 e_1 ... λ_n e_n] = PΛ

(use both descriptions of matrix multiplication recalled above). The matrix P is invertible, and this matrix equation has the equivalent forms:

A = PΛP^{-1},  Λ = P^{-1}AP.

This implies, for powers and the exponential of A:

A^k = PΛ^k P^{-1},  e^A = Pe^Λ P^{-1},  where e^Λ = diag(e^{λ_1}, ..., e^{λ_n}).

And for any t ∈ R:

e^{tA} = Pe^{tΛ}P^{-1} = P diag(e^{tλ_1}, ..., e^{tλ_n}) P^{-1}.

Solution of the ODE system. Any v ∈ R^n admits an expression as a linear combination

v = c_1 e_1 + c_2 e_2 + ... + c_n e_n,  c_i ∈ R,

and then the solution of the ODE system x' = Ax with initial condition x(0) = v is:

x(t) = c_1 e^{tλ_1} e_1 + ... + c_n e^{tλ_n} e_n,  t ∈ R.

We use this to give an interpretation of the matrix e^{tA} = [x_1(t) x_2(t) ... x_n(t)] (by columns). With the e_i as above, let:

P = [e_1 ... e_n],  P^{-1} = [w_1 ... w_n],  w_i ∈ R^n.

Since PP^{-1} = I_n (the n × n identity matrix), writing w_r = (w^1_r, ..., w^n_r) for the r-th column vector of P^{-1}, we have:

w^1_r e_1 + w^2_r e_2 + ... + w^n_r e_n = e^0_r,  r = 1, ..., n,

the r-th vector of the standard basis of R^n. Since e^{tΛ} is a diagonal matrix, the r-th column of the matrix e^{tΛ}P^{-1} is:

(e^{tλ_1} w^1_r, e^{tλ_2} w^2_r, ..., e^{tλ_n} w^n_r).

Since e^{tA} = Pe^{tΛ}P^{-1}, the r-th column of e^{tA} is:

e^{tλ_1} w^1_r e_1 + e^{tλ_2} w^2_r e_2 + ... + e^{tλ_n} w^n_r e_n

(just use the second matrix-multiplication fact). Thus we find that the r-th column of e^{tA} is the solution x_r(t) of the ODE system x' = Ax with initial condition x(0) = e^0_r, the r-th vector of the standard basis of R^n.

If v = (v^1, v^2, ..., v^n) is the expression of v in the standard basis, by definition this means v = Σ_{r=1}^n v^r e^0_r. Then e^{tA}v = Σ_{r=1}^n v^r (e^{tA} e^0_r), and e^{tA} e^0_r is (when expressed in the standard basis) the r-th column of e^{tA}, which as seen above is x_r(t), the solution with initial condition e^0_r. But this implies e^{tA}v = Σ_{r=1}^n v^r x_r(t), the solution with initial condition Σ_{r=1}^n v^r e^0_r = v. This justifies the claim that e^{tA}v is the solution with initial condition v.

Change of coordinates. The invertible matrix P = [e_1 ... e_n] defines a linear change of coordinates in R^n: x(t) = Py(t), y(t) = P^{-1}x(t). Then:

y' = P^{-1}x' = P^{-1}Ax = P^{-1}APy = Λy,  y(0) = P^{-1}x(0) = P^{-1}v.

Thus the change of coordinates takes the original ODE system to a diagonal system (decoupled, hence easily solved).

Stable, neutral and unstable subspaces. We define the stable subspace E^s as the direct sum of all the eigenspaces E(λ) with λ < 0; the unstable subspace E^u as the direct sum of all eigenspaces E(λ) with λ > 0; and the neutral subspace E^0 as the eigenspace E(0). From the preceding discussion it is easy to see that:

e^{tA}v → 0 as t → +∞ if v ∈ E^s;
|e^{tA}v| → ∞ as t → +∞ if v ∈ E^u;
e^{tA}v ≡ v (constant solution) if v ∈ E^0.

Thus we have:

E^s ⊕ E^0 ⊕ E^u = R^n.

4. Complex eigenvalues.

Let A be an n × n matrix with real entries. A complex number μ = a + ib (with b ≠ 0) is an eigenvalue of A if there exists a nonzero complex vector v = e + if (e, f ∈ R^n) so that Av = μv. The eigenspace of μ is denoted E_c(μ). (The subscript c reminds us that these are vectors with complex entries.) It follows that the complex conjugate μ̄ = a - ib is also an eigenvalue, and in fact the spaces E_c(μ) and E_c(μ̄) have the same dimension. We're interested in interpreting this in terms of real numbers and vectors in R^n.

Considering the real and imaginary parts of the defining equation

A(e + if) = (a + ib)(e + if),

we obtain the system:

Ae = ae - bf,  Af = be + af.

This may be written in matrix form as:

A[e f] = [e f]M,  M = [(a, -b) (b, a)] (by columns).

This implies (exercise) we also have:

A^k[e f] = [e f]M^k for all k ≥ 1,  e^{tA}[e f] = [e f]e^{tM}.

Using the definition of the matrix exponential and the Taylor expansions of sin(bt), cos(bt), one shows that for M as above:

e^{tM} = [e^{at}(cos(bt), -sin(bt))  e^{at}(sin(bt), cos(bt))]

(matrices given by their column vectors). This implies:

[e f]e^{tM} = [e^{at}(cos(bt)e - sin(bt)f)  e^{at}(sin(bt)e + cos(bt)f)].

This suggests the solutions of the system x' = Ax with initial conditions e and f (vectors in R^n, a real eigenpair for μ) are given by (respectively):

x_e(t) = e^{at}(cos(bt)e - sin(bt)f),  x_f(t) = e^{at}(sin(bt)e + cos(bt)f).

Exercise: Verify this directly.

In the special case Re(μ) = a = 0 we have:

x_e(t) = cos(bt)e - sin(bt)f,  x_f(t) = sin(bt)e + cos(bt)f.

In general, if μ = a + ib is a complex eigenvalue, we may find a subspace E_r(μ) = E_r(μ̄) ("r" for "real") of R^n of even dimension 2q, invariant under A, with a basis of the form {e_1, f_1, e_2, f_2, ..., e_q, f_q}, so that for r = 1, ..., q we have, as above:

Ae_r = ae_r - bf_r,  Af_r = be_r + af_r.

And for the solutions of x' = Ax with initial conditions e_r, f_r (respectively):

x_{e_r}(t) = e^{at}(cos(bt)e_r - sin(bt)f_r),  x_{f_r}(t) = e^{at}(sin(bt)e_r + cos(bt)f_r).

If a = 0, all solutions in E_r(μ) = E_r(ib) are periodic, with period T = 2π/b (we may assume b > 0).

5. Matrices diagonalizable over C.

Definition. An n × n matrix A with real entries is diagonalizable over C if R^n admits a basis

B = {e_1, e_2, ..., e_p, v_1, w_1, v_2, w_2, ..., v_q, w_q},

where each e_i is an eigenvector for a real eigenvalue of A, and each pair v_j, w_j is a real eigenpair for a complex (non-real) eigenvalue of A. (Of course, p + 2q = n.) Equivalently, the direct sum of the eigenspaces E(λ_i) over all real eigenvalues λ_i of A and the real eigenspaces E_r(μ_j) over all complex (non-real) eigenvalues μ_j of A is R^n:

E(λ_1) ⊕ ... ⊕ E(λ_N) ⊕ E_r(μ_1) ⊕ ... ⊕ E_r(μ_M) = R^n.

If this is the case, letting P = [e_1 ... e_p v_1 w_1 ... v_q w_q] (by columns), P is an invertible n × n matrix satisfying:

P^{-1}AP = Λ,  A = PΛP^{-1},

where Λ is the block-diagonal n × n matrix:

Λ = diag(λ_1, λ_2, ..., λ_p, M_1, M_2, ..., M_q).

Here M_j is the 2 × 2 block associated to the complex (non-real) eigenvalue μ_j = a_j + ib_j:

M_j = [(a_j, -b_j) (b_j, a_j)] (by columns).

It follows that e^{tA} = Pe^{tΛ}P^{-1}, where:

e^{tΛ} = diag(e^{tλ_1}, e^{tλ_2}, ..., e^{tλ_p}, e^{tM_1}, ..., e^{tM_q}),

with e^{tM_j} the 2 × 2 block:

e^{tM_j} = e^{a_j t}[(cos(b_j t), -sin(b_j t)) (sin(b_j t), cos(b_j t))] (by columns),  j = 1, ..., q.

We already know how to write down the solutions x(t) of x' = Ax for the initial conditions e_i, v_j, w_j, and the solution for general initial conditions follows by taking linear combinations.

Stable, unstable and neutral subspaces. We define the stable subspace E^s as the direct sum of all eigenspaces E(λ) for λ ∈ R, λ < 0, and E_r(μ) for μ complex (non-real) with Re(μ) < 0; the unstable subspace E^u as the direct sum of all eigenspaces E(λ) for λ ∈ R, λ > 0, and E_r(μ) for μ complex (non-real) with Re(μ) > 0; and the neutral subspace E^0 as the direct sum of the eigenspace E(0) and the eigenspaces E_r(μ) for μ complex (non-real) with Re(μ) = 0. From the earlier discussion, we see that:

e^{tA}v → 0 as t → +∞ if v ∈ E^s;
|e^{tA}v| → ∞ as t → +∞ if v ∈ E^u;
e^{tA}v is bounded for all t ∈ R (possibly constant) if v ∈ E^0.

Thus we also have, in the complex-diagonalizable case:

E^s ⊕ E^0 ⊕ E^u = R^n.

Remark. Note that in the case v ∈ E^0 we may not conclude that the solution x(t) is periodic, as discussed in the next example.

6. An example: two coupled harmonic oscillators.

Reference: [Waltman, Section 12.]

A simple harmonic oscillator with frequency ω is the mechanical system with one degree of freedom described by the equation

q'' = -ω²q,  q(t) ∈ R,

or as a first-order Hamiltonian system:

(q', p') = (p, -ω²q),  (q(t), p(t)) ∈ R².

The Hamiltonian is E(q, p) = (1/2)(p² + ω²q²).
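The simple harmonic oscillator just described makes a quick numerical check; in the sketch below, ω = 2.0 and the initial data are illustrative choices (assumptions, not from the text). The explicit solution satisfies the first-order system and conserves the Hamiltonian:

```python
import math
import numpy as np

# Simple harmonic oscillator (q', p') = (p, -omega^2 q), with solution
#   q(t) = cos(omega t) q0 + sin(omega t) p0 / omega,
#   p(t) = -omega sin(omega t) q0 + cos(omega t) p0.
omega = 2.0
q0, p0 = 1.0, 0.5   # illustrative initial data

def state(t):
    c, s = math.cos(omega * t), math.sin(omega * t)
    return np.array([c * q0 + s * p0 / omega,
                     -omega * s * q0 + c * p0])

def H(q, p):
    # Hamiltonian E(q, p) = (1/2)(p^2 + omega^2 q^2)
    return 0.5 * (p**2 + omega**2 * q**2)

# The Hamiltonian is conserved along the solution...
E0 = H(q0, p0)
for t in np.linspace(0.0, 10.0, 40):
    q, p = state(t)
    assert abs(H(q, p) - E0) < 1e-9

# ...and the flow satisfies the ODE (central finite-difference check).
t, h = 1.3, 1e-6
dq, dp = (state(t + h) - state(t - h)) / (2 * h)
q, p = state(t)
assert abs(dq - p) < 1e-4 and abs(dp + omega**2 * q) < 1e-4
```

This is exactly the a = 0 rotation block from Section 4, written in the (q, p) coordinates; the level sets of H are the ellipses the solutions travel on.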

Consider two such oscillators coupled by a spring with constant k > 0. The second-order equations of motion are:

q_1'' = -ω²q_1 - k(q_1 - q_2),  q_2'' = -ω²q_2 + k(q_1 - q_2),  (q_1(t), q_2(t)) ∈ R².

As a first-order system for x(t) = (q_1, p_1, q_2, p_2) ∈ R⁴:

q_1' = p_1,  p_1' = -ω²q_1 - k(q_1 - q_2),  q_2' = p_2,  p_2' = -ω²q_2 + k(q_1 - q_2).

This is also a Hamiltonian system, with Hamiltonian given by:

E(q_1, p_1, q_2, p_2) = (1/2)[p_1² + p_2² + ω²(q_1² + q_2²) + k(q_1 - q_2)²].

Exercise: Verify this.

In matrix form the system can be written as x' = Ax, where A is 4 × 4 and x = (q_1, p_1, q_2, p_2) ∈ R⁴. The eigenvalues of A are ±iω and ±iα, where α := √(ω² + 2k), with eigenvectors (1, iω, 1, iω) and (1, iα, -1, -iα) (see [Waltman] for the computation). Thus we may take as a basis for E_r(iω) ⊂ R⁴:

e_1 = (1, 0, 1, 0),  f_1 = (0, ω, 0, ω),

and as a basis for E_r(iα):

e_2 = (1, 0, -1, 0),  f_2 = (0, α, 0, -α).

With P = [e_1 f_1 e_2 f_2], we have P^{-1}AP = Λ, where:

Λ = diag(M_1, M_2),  M_1 = [(0, -ω) (ω, 0)],  M_2 = [(0, -α) (α, 0)] (by columns).

The solutions in R⁴ with initial conditions e_1, f_1, e_2, f_2 are:

x_{e_1}(t) = cos(ωt)e_1 - sin(ωt)f_1,  x_{f_1}(t) = sin(ωt)e_1 + cos(ωt)f_1;
x_{e_2}(t) = cos(αt)e_2 - sin(αt)f_2,  x_{f_2}(t) = sin(αt)e_2 + cos(αt)f_2.

All other solutions can be obtained from these four by taking linear combinations. Note that each of these solutions is periodic (with least period 2π/ω or 2π/α). But consider the solution x_v(t) with initial condition v = (1, 0, 0, 0) = (1/2)(e_1 + e_2). Its first component is

x_v^1(t) = (1/2)(cos(ωt) + cos(αt));

and this function is not periodic, unless we have a relation of the form nω = mα for some natural numbers m, n. In particular this requires the ratio ω/α to be a rational number, which won't be the case for randomly chosen ω and k. For instance, if k = ω² (all spring constants equal) we have α = √3 ω, so the solution with initial condition corresponding to one body released from rest (p_1 = 0 at q_1 = 1), the other released from rest at its equilibrium position (p_2 = q_2 = 0), is not periodic. All we can say is that x(t) is bounded for all t ∈ R.

A second conserved quantity. There is a different way to change coordinates so as to decouple the system, which is physically more meaningful. Note that adding the equations for the q_i and for the p_i we find:

(q_1 + q_2)' = p_1 + p_2,  (p_1 + p_2)' = -ω²(q_1 + q_2).

Thus q_1 + q_2 describes a simple harmonic motion with frequency ω. On the other hand, taking the difference of the equations of motion we find:

(q_1 - q_2)' = p_1 - p_2,  (p_1 - p_2)' = -(ω² + 2k)(q_1 - q_2).

This means q_1 - q_2 describes a simple harmonic motion with frequency α = √(ω² + 2k). So we may set:

q_+ = q_1 + q_2,  p_+ = q_+',  q_- = q_1 - q_2,  p_- = q_-'.

This corresponds to making the change of variable x = Py, y = P^{-1}x, where y = (q_+, p_+, q_-, p_-) and P^{-1} is the 4 × 4 symmetric matrix with orthogonal columns:

P^{-1} = [(1, 0, 1, 0) (0, 1, 0, 1) (1, 0, -1, 0) (0, 1, 0, -1)] (given by columns).

The matrix Λ_± = P^{-1}AP has the decoupled form:

Λ_± = diag(M_+, M_-),  M_+ = [(0, -ω²) (1, 0)],  M_- = [(0, -α²) (1, 0)] (by columns).

It follows that the quantities:

E_+(q_1, q_2, p_1, p_2) = (1/2)[(p_1 + p_2)² + ω²(q_1 + q_2)²],
E_-(q_1, q_2, p_1, p_2) = (1/2)[(p_1 - p_2)² + α²(q_1 - q_2)²]

are conserved by the system. (Exercise: check this directly.) The conserved quantities E, E_+, E_- are not all independent (for instance, E_+ + E_- = 2E), but any two of them are. (Exercise: Verify this.) (Note: two conserved quantities E, F are independent if their gradient vectors ∇E, ∇F are linearly independent at every point.)

Geometric interpretation ("advanced"). For each E_0 > 0, the level set

{(q_1, p_1, q_2, p_2) ∈ R⁴ : E(q_1, p_1, q_2, p_2) = E_0}

is a three-dimensional surface in R⁴. It is a bounded surface, since E is a proper function on R⁴. It is also regular (with a well-defined tangent 3-plane at each point), since the gradient ∇E vanishes only at the origin of R⁴ (check this), which is not a point of the level set if E_0 > 0. Since E is conserved, any orbit that starts on a particular level set of E stays on that level set. Hence orbits stay on three-dimensional surfaces.

Using the coordinates y = (q_+, p_+, q_-, p_-) in R⁴ we can say more. The three-dimensional level sets of E_+ = (1/2)(p_+² + ω²q_+²) have the form (ellipse) × R², and similarly for the level sets of E_- = (1/2)(p_-² + α²q_-²). The joint level set

M(c_1, c_2) = {y ∈ R⁴ : (1/2)(p_+² + ω²q_+²) = c_1, (1/2)(p_-² + α²q_-²) = c_2},  c_1, c_2 > 0,

is a two-dimensional surface of the type (ellipse) × (ellipse), i.e. a torus. The solutions stay on the same torus as the initial condition, and on this torus the system can be described as the flow along parallel lines of slope α/ω.

Exercise (challenging): Generalize this discussion to the case of N oscillators connected by springs, assuming all the spring constants are equal (so ω² = k). What are the eigenvalues in this case? (This is a first-order Hamiltonian system in R^{2N}.)
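The claims in this example can be checked numerically. In the sketch below, ω = 1 and k = 1 are illustrative values (an assumption, not from the text); we verify the eigenvalues ±iω, ±iα, the real eigenpairs, and the conservation of E, E_+ and E_- along the explicit solution with initial condition v = (1, 0, 0, 0):

```python
import math
import numpy as np

# Coupled-oscillator matrix acting on x = (q1, p1, q2, p2);
# omega and k are illustrative values.
omega, k = 1.0, 1.0
alpha = math.sqrt(omega**2 + 2 * k)   # = sqrt(3) omega when k = omega^2
A = np.array([
    [0.0,             1.0, 0.0,             0.0],
    [-(omega**2 + k), 0.0, k,               0.0],
    [0.0,             0.0, 0.0,             1.0],
    [k,               0.0, -(omega**2 + k), 0.0],
])

# The eigenvalues are +/- i omega and +/- i alpha.
eigs = np.linalg.eigvals(A)
for mu in (1j * omega, -1j * omega, 1j * alpha, -1j * alpha):
    assert np.abs(eigs - mu).min() < 1e-8

# Real eigenpairs: A e = -b f, A f = b e  (b = omega resp. alpha).
e1, f1 = np.array([1.0, 0, 1, 0]), np.array([0, omega, 0, omega])
e2, f2 = np.array([1.0, 0, -1, 0]), np.array([0, alpha, 0, -alpha])
assert np.allclose(A @ e1, -omega * f1) and np.allclose(A @ f1, omega * e1)
assert np.allclose(A @ e2, -alpha * f2) and np.allclose(A @ f2, alpha * e2)

# Explicit solution with x(0) = (1, 0, 0, 0) = (e1 + e2)/2.
def x(t):
    return 0.5 * (math.cos(omega * t) * e1 - math.sin(omega * t) * f1
                  + math.cos(alpha * t) * e2 - math.sin(alpha * t) * f2)

def energies(state):
    q1, p1, q2, p2 = state
    E = 0.5 * (p1**2 + p2**2 + omega**2 * (q1**2 + q2**2) + k * (q1 - q2)**2)
    Ep = 0.5 * ((p1 + p2)**2 + omega**2 * (q1 + q2)**2)
    Em = 0.5 * ((p1 - p2)**2 + alpha**2 * (q1 - q2)**2)
    return E, Ep, Em

E0, Ep0, Em0 = energies(x(0.0))
for t in np.linspace(0.0, 20.0, 50):
    E, Ep, Em = energies(x(t))
    # E, E+ and E- are constant in t, with E+ + E- = 2E.
    assert abs(E - E0) < 1e-9 and abs(Ep - Ep0) < 1e-9 and abs(Em - Em0) < 1e-9
    assert abs(Ep + Em - 2 * E) < 1e-9
```

With ω = k = 1 we have α = √3, so ω/α is irrational and this solution is bounded but not periodic, consistent with the torus picture above.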


More information

Math Ordinary Differential Equations

Math Ordinary Differential Equations Math 411 - Ordinary Differential Equations Review Notes - 1 1 - Basic Theory A first order ordinary differential equation has the form x = f(t, x) (11) Here x = dx/dt Given an initial data x(t 0 ) = x

More information

In these chapter 2A notes write vectors in boldface to reduce the ambiguity of the notation.

In these chapter 2A notes write vectors in boldface to reduce the ambiguity of the notation. 1 2 Linear Systems In these chapter 2A notes write vectors in boldface to reduce the ambiguity of the notation 21 Matrix ODEs Let and is a scalar A linear function satisfies Linear superposition ) Linear

More information

GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory.

GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory. GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory. Linear Algebra Standard matrix manipulation to compute the kernel, intersection of subspaces, column spaces,

More information

DM554 Linear and Integer Programming. Lecture 9. Diagonalization. Marco Chiarandini

DM554 Linear and Integer Programming. Lecture 9. Diagonalization. Marco Chiarandini DM554 Linear and Integer Programming Lecture 9 Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark Outline 1. More on 2. 3. 2 Resume Linear transformations and

More information

Solutions Chapter 9. u. (c) u(t) = 1 e t + c 2 e 3 t! c 1 e t 3c 2 e 3 t. (v) (a) u(t) = c 1 e t cos 3t + c 2 e t sin 3t. (b) du

Solutions Chapter 9. u. (c) u(t) = 1 e t + c 2 e 3 t! c 1 e t 3c 2 e 3 t. (v) (a) u(t) = c 1 e t cos 3t + c 2 e t sin 3t. (b) du Solutions hapter 9 dode 9 asic Solution Techniques 9 hoose one or more of the following differential equations, and then: (a) Solve the equation directly (b) Write down its phase plane equivalent, and

More information

MTH50 Spring 07 HW Assignment 7 {From [FIS0]}: Sec 44 #4a h 6; Sec 5 #ad ac 4ae 4 7 The due date for this assignment is 04/05/7 Sec 44 #4a h Evaluate the erminant of the following matrices by any legitimate

More information

Chapter 5. Eigenvalues and Eigenvectors

Chapter 5. Eigenvalues and Eigenvectors Chapter 5 Eigenvalues and Eigenvectors Section 5. Eigenvectors and Eigenvalues Motivation: Difference equations A Biology Question How to predict a population of rabbits with given dynamics:. half of the

More information

22m:033 Notes: 7.1 Diagonalization of Symmetric Matrices

22m:033 Notes: 7.1 Diagonalization of Symmetric Matrices m:33 Notes: 7. Diagonalization of Symmetric Matrices Dennis Roseman University of Iowa Iowa City, IA http://www.math.uiowa.edu/ roseman May 3, Symmetric matrices Definition. A symmetric matrix is a matrix

More information

Identification Methods for Structural Systems

Identification Methods for Structural Systems Prof. Dr. Eleni Chatzi System Stability Fundamentals Overview System Stability Assume given a dynamic system with input u(t) and output x(t). The stability property of a dynamic system can be defined from

More information

1. In this problem, if the statement is always true, circle T; otherwise, circle F.

1. In this problem, if the statement is always true, circle T; otherwise, circle F. Math 1553, Extra Practice for Midterm 3 (sections 45-65) Solutions 1 In this problem, if the statement is always true, circle T; otherwise, circle F a) T F If A is a square matrix and the homogeneous equation

More information

Section 9.3 Phase Plane Portraits (for Planar Systems)

Section 9.3 Phase Plane Portraits (for Planar Systems) Section 9.3 Phase Plane Portraits (for Planar Systems) Key Terms: Equilibrium point of planer system yꞌ = Ay o Equilibrium solution Exponential solutions o Half-line solutions Unstable solution Stable

More information

Math 315: Linear Algebra Solutions to Assignment 7

Math 315: Linear Algebra Solutions to Assignment 7 Math 5: Linear Algebra s to Assignment 7 # Find the eigenvalues of the following matrices. (a.) 4 0 0 0 (b.) 0 0 9 5 4. (a.) The characteristic polynomial det(λi A) = (λ )(λ )(λ ), so the eigenvalues are

More information

Homework sheet 4: EIGENVALUES AND EIGENVECTORS. DIAGONALIZATION (with solutions) Year ? Why or why not? 6 9

Homework sheet 4: EIGENVALUES AND EIGENVECTORS. DIAGONALIZATION (with solutions) Year ? Why or why not? 6 9 Bachelor in Statistics and Business Universidad Carlos III de Madrid Mathematical Methods II María Barbero Liñán Homework sheet 4: EIGENVALUES AND EIGENVECTORS DIAGONALIZATION (with solutions) Year - Is

More information

Linearization of Differential Equation Models

Linearization of Differential Equation Models Linearization of Differential Equation Models 1 Motivation We cannot solve most nonlinear models, so we often instead try to get an overall feel for the way the model behaves: we sometimes talk about looking

More information

ft-uiowa-math2550 Assignment OptionalFinalExamReviewMultChoiceMEDIUMlengthForm due 12/31/2014 at 10:36pm CST

ft-uiowa-math2550 Assignment OptionalFinalExamReviewMultChoiceMEDIUMlengthForm due 12/31/2014 at 10:36pm CST me me ft-uiowa-math255 Assignment OptionalFinalExamReviewMultChoiceMEDIUMlengthForm due 2/3/2 at :3pm CST. ( pt) Library/TCNJ/TCNJ LinearSystems/problem3.pg Give a geometric description of the following

More information

9.1 Eigenanalysis I Eigenanalysis II Advanced Topics in Linear Algebra Kepler s laws

9.1 Eigenanalysis I Eigenanalysis II Advanced Topics in Linear Algebra Kepler s laws Chapter 9 Eigenanalysis Contents 9. Eigenanalysis I.................. 49 9.2 Eigenanalysis II................. 5 9.3 Advanced Topics in Linear Algebra..... 522 9.4 Kepler s laws................... 537

More information

EE5120 Linear Algebra: Tutorial 6, July-Dec Covers sec 4.2, 5.1, 5.2 of GS

EE5120 Linear Algebra: Tutorial 6, July-Dec Covers sec 4.2, 5.1, 5.2 of GS EE0 Linear Algebra: Tutorial 6, July-Dec 07-8 Covers sec 4.,.,. of GS. State True or False with proper explanation: (a) All vectors are eigenvectors of the Identity matrix. (b) Any matrix can be diagonalized.

More information

Name: Final Exam MATH 3320

Name: Final Exam MATH 3320 Name: Final Exam MATH 3320 Directions: Make sure to show all necessary work to receive full credit. If you need extra space please use the back of the sheet with appropriate labeling. (1) State the following

More information

Quadratic forms. Here. Thus symmetric matrices are diagonalizable, and the diagonalization can be performed by means of an orthogonal matrix.

Quadratic forms. Here. Thus symmetric matrices are diagonalizable, and the diagonalization can be performed by means of an orthogonal matrix. Quadratic forms 1. Symmetric matrices An n n matrix (a ij ) n ij=1 with entries on R is called symmetric if A T, that is, if a ij = a ji for all 1 i, j n. We denote by S n (R) the set of all n n symmetric

More information

Computationally, diagonal matrices are the easiest to work with. With this idea in mind, we introduce similarity:

Computationally, diagonal matrices are the easiest to work with. With this idea in mind, we introduce similarity: Diagonalization We have seen that diagonal and triangular matrices are much easier to work with than are most matrices For example, determinants and eigenvalues are easy to compute, and multiplication

More information

First Midterm Exam Name: Practice Problems September 19, x = ax + sin x.

First Midterm Exam Name: Practice Problems September 19, x = ax + sin x. Math 54 Treibergs First Midterm Exam Name: Practice Problems September 9, 24 Consider the family of differential equations for the parameter a: (a Sketch the phase line when a x ax + sin x (b Use the graphs

More information

MATH. 20F SAMPLE FINAL (WINTER 2010)

MATH. 20F SAMPLE FINAL (WINTER 2010) MATH. 20F SAMPLE FINAL (WINTER 2010) You have 3 hours for this exam. Please write legibly and show all working. No calculators are allowed. Write your name, ID number and your TA s name below. The total

More information

Linear Systems. Class 27. c 2008 Ron Buckmire. TITLE Projection Matrices and Orthogonal Diagonalization CURRENT READING Poole 5.4

Linear Systems. Class 27. c 2008 Ron Buckmire. TITLE Projection Matrices and Orthogonal Diagonalization CURRENT READING Poole 5.4 Linear Systems Math Spring 8 c 8 Ron Buckmire Fowler 9 MWF 9: am - :5 am http://faculty.oxy.edu/ron/math//8/ Class 7 TITLE Projection Matrices and Orthogonal Diagonalization CURRENT READING Poole 5. Summary

More information

Chapter 5 Eigenvalues and Eigenvectors

Chapter 5 Eigenvalues and Eigenvectors Chapter 5 Eigenvalues and Eigenvectors Outline 5.1 Eigenvalues and Eigenvectors 5.2 Diagonalization 5.3 Complex Vector Spaces 2 5.1 Eigenvalues and Eigenvectors Eigenvalue and Eigenvector If A is a n n

More information

MATH 304 Linear Algebra Lecture 34: Review for Test 2.

MATH 304 Linear Algebra Lecture 34: Review for Test 2. MATH 304 Linear Algebra Lecture 34: Review for Test 2. Topics for Test 2 Linear transformations (Leon 4.1 4.3) Matrix transformations Matrix of a linear mapping Similar matrices Orthogonality (Leon 5.1

More information

Therefore, A and B have the same characteristic polynomial and hence, the same eigenvalues.

Therefore, A and B have the same characteristic polynomial and hence, the same eigenvalues. Similar Matrices and Diagonalization Page 1 Theorem If A and B are n n matrices, which are similar, then they have the same characteristic equation and hence the same eigenvalues. Proof Let A and B be

More information

Definition: An n x n matrix, "A", is said to be diagonalizable if there exists a nonsingular matrix "X" and a diagonal matrix "D" such that X 1 A X

Definition: An n x n matrix, A, is said to be diagonalizable if there exists a nonsingular matrix X and a diagonal matrix D such that X 1 A X DIGONLIZTION Definition: n n x n matrix, "", is said to be diagonalizable if there exists a nonsingular matrix "X" and a diagonal matrix "D" such that X X D. Theorem: n n x n matrix, "", is diagonalizable

More information

Diagonalization. Hung-yi Lee

Diagonalization. Hung-yi Lee Diagonalization Hung-yi Lee Review If Av = λv (v is a vector, λ is a scalar) v is an eigenvector of A excluding zero vector λ is an eigenvalue of A that corresponds to v Eigenvectors corresponding to λ

More information

Linear Algebra 2 Spectral Notes

Linear Algebra 2 Spectral Notes Linear Algebra 2 Spectral Notes In what follows, V is an inner product vector space over F, where F = R or C. We will use results seen so far; in particular that every linear operator T L(V ) has a complex

More information

5.) For each of the given sets of vectors, determine whether or not the set spans R 3. Give reasons for your answers.

5.) For each of the given sets of vectors, determine whether or not the set spans R 3. Give reasons for your answers. Linear Algebra - Test File - Spring Test # For problems - consider the following system of equations. x + y - z = x + y + 4z = x + y + 6z =.) Solve the system without using your calculator..) Find the

More information

Linear Algebra and ODEs review

Linear Algebra and ODEs review Linear Algebra and ODEs review Ania A Baetica September 9, 015 1 Linear Algebra 11 Eigenvalues and eigenvectors Consider the square matrix A R n n (v, λ are an (eigenvector, eigenvalue pair of matrix A

More information

AMS10 HW7 Solutions. All credit is given for effort. (-5 pts for any missing sections) Problem 1 (20 pts) Consider the following matrix 2 A =

AMS10 HW7 Solutions. All credit is given for effort. (-5 pts for any missing sections) Problem 1 (20 pts) Consider the following matrix 2 A = AMS1 HW Solutions All credit is given for effort. (- pts for any missing sections) Problem 1 ( pts) Consider the following matrix 1 1 9 a. Calculate the eigenvalues of A. Eigenvalues are 1 1.1, 9.81,.1

More information

MATH 221, Spring Homework 10 Solutions

MATH 221, Spring Homework 10 Solutions MATH 22, Spring 28 - Homework Solutions Due Tuesday, May Section 52 Page 279, Problem 2: 4 λ A λi = and the characteristic polynomial is det(a λi) = ( 4 λ)( λ) ( )(6) = λ 6 λ 2 +λ+2 The solutions to the

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors Definition 0 Let A R n n be an n n real matrix A number λ R is a real eigenvalue of A if there exists a nonzero vector v R n such that A v = λ v The vector v is called an eigenvector

More information

Autonomous system = system without inputs

Autonomous system = system without inputs Autonomous system = system without inputs State space representation B(A,C) = {y there is x, such that σx = Ax, y = Cx } x is the state, n := dim(x) is the state dimension, y is the output Polynomial representation

More information

Solutions to Final Exam

Solutions to Final Exam Solutions to Final Exam. Let A be a 3 5 matrix. Let b be a nonzero 5-vector. Assume that the nullity of A is. (a) What is the rank of A? 3 (b) Are the rows of A linearly independent? (c) Are the columns

More information

Eigenspaces and Diagonalizable Transformations

Eigenspaces and Diagonalizable Transformations Chapter 2 Eigenspaces and Diagonalizable Transformations As we explored how heat states evolve under the action of a diffusion transformation E, we found that some heat states will only change in amplitude.

More information

Math 314/ Exam 2 Blue Exam Solutions December 4, 2008 Instructor: Dr. S. Cooper. Name:

Math 314/ Exam 2 Blue Exam Solutions December 4, 2008 Instructor: Dr. S. Cooper. Name: Math 34/84 - Exam Blue Exam Solutions December 4, 8 Instructor: Dr. S. Cooper Name: Read each question carefully. Be sure to show all of your work and not just your final conclusion. You may not use your

More information

Notes on the matrix exponential

Notes on the matrix exponential Notes on the matrix exponential Erik Wahlén erik.wahlen@math.lu.se February 14, 212 1 Introduction The purpose of these notes is to describe how one can compute the matrix exponential e A when A is not

More information

YORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #1. July 11, 2013 Solutions

YORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #1. July 11, 2013 Solutions YORK UNIVERSITY Faculty of Science Department of Mathematics and Statistics MATH 222 3. M Test # July, 23 Solutions. For each statement indicate whether it is always TRUE or sometimes FALSE. Note: For

More information

Quizzes for Math 304

Quizzes for Math 304 Quizzes for Math 304 QUIZ. A system of linear equations has augmented matrix 2 4 4 A = 2 0 2 4 3 5 2 a) Write down this system of equations; b) Find the reduced row-echelon form of A; c) What are the pivot

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

Understanding the Matrix Exponential

Understanding the Matrix Exponential Transformations Understanding the Matrix Exponential Lecture 8 Math 634 9/17/99 Now that we have a representation of the solution of constant-coefficient initial-value problems, we should ask ourselves:

More information

Math 1553, Introduction to Linear Algebra

Math 1553, Introduction to Linear Algebra Learning goals articulate what students are expected to be able to do in a course that can be measured. This course has course-level learning goals that pertain to the entire course, and section-level

More information