The Eigenvalue Problem: Perturbation Theory

Jim Lambers
MAT 610 Summer Session
Lecture 13 Notes

These notes correspond to Sections 7.2 and 8.1 in the text.

The Unsymmetric Eigenvalue Problem

Just as the problem of solving a system of linear equations $Ax = b$ can be sensitive to perturbations in the data, the problem of computing the eigenvalues of a matrix can also be sensitive to perturbations in the matrix. We will now obtain some results concerning the extent of this sensitivity.

Suppose that $A$ is obtained by perturbing a diagonal matrix $D$ by a matrix $F$ whose diagonal entries are zero; that is, $A = D + F$. If $\lambda$ is an eigenvalue of $A$ with corresponding eigenvector $x$, then we have

$$(D - \lambda I)x + Fx = 0.$$

If $\lambda$ is not equal to any of the diagonal entries of $A$, then $D - \lambda I$ is nonsingular and we have

$$x = -(D - \lambda I)^{-1} Fx.$$

Taking $\infty$-norms of both sides, we obtain

$$\|x\|_\infty = \|(D - \lambda I)^{-1} Fx\|_\infty \leq \|(D - \lambda I)^{-1} F\|_\infty \|x\|_\infty,$$

which yields

$$\|(D - \lambda I)^{-1} F\|_\infty = \max_{1 \leq i \leq n} \frac{1}{|d_{ii} - \lambda|} \sum_{j=1, j \neq i}^n |f_{ij}| \geq 1.$$

It follows that for some $i$, $1 \leq i \leq n$, $\lambda$ satisfies

$$|d_{ii} - \lambda| \leq \sum_{j=1, j \neq i}^n |f_{ij}|.$$

That is, $\lambda$ lies within one of the Gerschgorin circles in the complex plane, which has center $a_{ii}$ and radius

$$r_i = \sum_{j=1, j \neq i}^n |a_{ij}|.$$

This result is known as the Gerschgorin Circle Theorem.
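The theorem is straightforward to check numerically. The following is a minimal numpy sketch; because the entries of the example matrix below did not survive transcription, the matrix used here is a hypothetical one chosen to reproduce the same disks (diagonal entries 7, 2, -5 and off-diagonal row sums 4, 3, 2).

    import numpy as np

    def gerschgorin_disks(A):
        # Centers are the diagonal entries; radii are the off-diagonal
        # absolute row sums.
        centers = np.diag(A)
        radii = np.abs(A).sum(axis=1) - np.abs(centers)
        return list(zip(centers, radii))

    # Hypothetical matrix matching the disks of the example below.
    A = np.array([[ 7.0,  3.0,  1.0],
                  [ 1.0,  2.0,  2.0],
                  [-1.0,  1.0, -5.0]])

    disks = gerschgorin_disks(A)
    for lam in np.linalg.eigvals(A):
        # Every eigenvalue must lie in the union of the disks.
        assert any(abs(lam - c) <= r + 1e-12 for c, r in disks)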

Example. The eigenvalues of the matrix $A$ are

$$\lambda(A) = \{6.4971, \ldots\}.$$

The Gerschgorin disks are

$$D_1 = \{z \in \mathbb{C} : |z - 7| \leq 4\}, \quad D_2 = \{z \in \mathbb{C} : |z - 2| \leq 3\}, \quad D_3 = \{z \in \mathbb{C} : |z + 5| \leq 2\}.$$

We see that each disk contains one eigenvalue. It is important to note that while there are $n$ eigenvalues and $n$ Gerschgorin disks, it is not necessarily true that each disk contains an eigenvalue. The Gerschgorin Circle Theorem only states that all of the eigenvalues are contained within the union of the disks.

Another useful sensitivity result that applies to diagonalizable matrices is the Bauer-Fike Theorem, which states that if

$$X^{-1} A X = \mathrm{diag}(\lambda_1, \ldots, \lambda_n),$$

and $\mu$ is an eigenvalue of a perturbed matrix $A + E$, then

$$\min_{\lambda \in \lambda(A)} |\lambda - \mu| \leq \kappa_p(X) \|E\|_p.$$

That is, $\mu$ is within $\kappa_p(X) \|E\|_p$ of an eigenvalue of $A$. It follows that if $A$ is nearly nondiagonalizable, which can be the case if its eigenvectors are nearly linearly dependent, then a small perturbation in $A$ could still cause a large change in the eigenvalues.
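A minimal numerical check of the Bauer-Fike bound with $p = 2$, assuming a random test matrix (which is diagonalizable with probability 1):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 4
    A = rng.standard_normal((n, n))     # diagonalizable with probability 1
    w, X = np.linalg.eig(A)             # X^{-1} A X = diag(w)
    kappa = np.linalg.cond(X, 2)        # kappa_2(X)

    E = 1e-6 * rng.standard_normal((n, n))
    for mu in np.linalg.eigvals(A + E):
        # Bauer-Fike: mu lies within kappa_2(X)*||E||_2 of an eigenvalue of A.
        assert np.min(np.abs(w - mu)) <= kappa * np.linalg.norm(E, 2) + 1e-12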

It would be desirable to have a concrete measure of the sensitivity of an eigenvalue, just as we have the condition number that measures the sensitivity of a system of linear equations. To that end, we assume that $\lambda$ is a simple eigenvalue of an $n \times n$ matrix $A$ that has Jordan canonical form $J = X^{-1} A X$. Then $\lambda = J_{ii}$ for some $i$, and $x_i$, the $i$th column of $X$, is a corresponding right eigenvector. If we define $Y = X^{-H} = (X^{-1})^H$, then $y_i$, the $i$th column of $Y$, is a left eigenvector of $A$ corresponding to $\lambda$. From $Y^H X = I$, it follows that $y^H x = 1$, where we write $x = x_i$ and $y = y_i$.

We now let $A$, $\lambda$, and $x$ be functions of a parameter $\varepsilon$ that satisfy

$$A(\varepsilon)x(\varepsilon) = \lambda(\varepsilon)x(\varepsilon), \quad A(\varepsilon) = A + \varepsilon F, \quad \|F\|_2 = 1.$$

Differentiating with respect to $\varepsilon$, and evaluating at $\varepsilon = 0$, yields

$$Fx + Ax'(0) = \lambda x'(0) + \lambda'(0)x.$$

Taking the inner product of both sides with $y$ yields

$$y^H Fx + y^H Ax'(0) = \lambda y^H x'(0) + \lambda'(0) y^H x.$$

Because $y$ is a left eigenvector corresponding to $\lambda$ (so that $y^H A = \lambda y^H$), and $y^H x = 1$, we have

$$y^H Fx + \lambda y^H x'(0) = \lambda y^H x'(0) + \lambda'(0).$$

We conclude that

$$|\lambda'(0)| = |y^H Fx| \leq \|y\|_2 \|F\|_2 \|x\|_2 = \|y\|_2 \|x\|_2.$$

However, because $\theta$, the angle between $x$ and $y$, is given by

$$\cos\theta = \frac{y^H x}{\|y\|_2 \|x\|_2} = \frac{1}{\|y\|_2 \|x\|_2},$$

it follows that

$$|\lambda'(0)| \leq \frac{1}{|\cos\theta|}.$$

We define $1/|\cos\theta|$ to be the condition number of the simple eigenvalue $\lambda$. We require $\lambda$ to be simple because otherwise the angle between the left and right eigenvectors is not unique, as the eigenvectors themselves are not unique. It should be noted that the condition number is also defined by $1/|y^H x|$, where $x$ and $y$ are normalized so that $\|x\|_2 = \|y\|_2 = 1$; either way, the condition number is equal to $1/|\cos\theta|$.

The interpretation of the condition number is that an $O(\varepsilon)$ perturbation in $A$ can cause an $O(\varepsilon/|\cos\theta|)$ perturbation in the eigenvalue $\lambda$. Therefore, if $x$ and $y$ are nearly orthogonal, a large change in the eigenvalue can occur. Furthermore, if the condition number is large, then $A$ is close to a matrix with a multiple eigenvalue.

Example. The matrix $A$ has a simple eigenvalue $\lambda$, with left and right eigenvectors $x$ and $y$ such that $y^H x = 1$, for which the condition number $\|x\|_2 \|y\|_2$ is large. In fact, the nearby matrix $B$ has a double eigenvalue that is equal to 2.
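Since the entries of $A$ and $B$ above were lost, the sketch below substitutes a hypothetical stand-in: a small perturbation of $B = \begin{bmatrix} 2 & 1 \\ 0 & 2 \end{bmatrix}$, which has a double eigenvalue 2. The left and right eigenvectors are computed with scipy, and the resulting condition numbers $1/|\cos\theta|$ are large, reflecting the nearness of a double eigenvalue.

    import numpy as np
    from scipy.linalg import eig

    # Hypothetical stand-in: a small perturbation of [[2, 1], [0, 2]],
    # which has a defective double eigenvalue 2.
    A = np.array([[2.0,  1.0],
                  [1e-6, 2.0]])

    # eig returns unit-norm left (vl) and right (vr) eigenvectors.
    w, vl, vr = eig(A, left=True, right=True)
    for i in range(len(w)):
        # Condition number of the simple eigenvalue w[i]: 1/|y^H x|.
        cond = 1.0 / abs(vl[:, i].conj() @ vr[:, i])
        print(f"lambda = {w[i]:.6f}, condition number = {cond:.2e}")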

We now consider the sensitivity of repeated eigenvalues. First, it is important to note that while the eigenvalues of a matrix $A$ are continuous functions of the entries of $A$, they are not necessarily differentiable functions of the entries. To see this, we consider the matrix

$$A = \begin{bmatrix} 1 & a \\ \varepsilon & 1 \end{bmatrix},$$

where $a > 0$. Computing its characteristic polynomial

$$\det(A - \lambda I) = \lambda^2 - 2\lambda + 1 - a\varepsilon$$

and computing its roots yields the eigenvalues

$$\lambda = 1 \pm \sqrt{a\varepsilon}.$$

Differentiating these eigenvalues with respect to $\varepsilon$ yields

$$\frac{d\lambda}{d\varepsilon} = \pm\frac{1}{2}\sqrt{\frac{a}{\varepsilon}},$$

which is undefined at $\varepsilon = 0$. In general, an $O(\varepsilon)$ perturbation in $A$ causes an $O(\varepsilon^{1/p})$ perturbation in an eigenvalue associated with a $p \times p$ Jordan block, meaning that the more defective an eigenvalue is, the more sensitive it is.

We now consider the sensitivity of eigenvectors, or, more generally, invariant subspaces of a matrix $A$, such as a subspace spanned by the first $k$ Schur vectors, which are the first $k$ columns in a matrix $Q$ such that $Q^H A Q$ is upper triangular. Suppose that an $n \times n$ matrix $A$ has the Schur decomposition

$$A = QTQ^H, \quad Q = \begin{bmatrix} Q_1 & Q_2 \end{bmatrix}, \quad T = \begin{bmatrix} T_{11} & T_{12} \\ 0 & T_{22} \end{bmatrix},$$

where $Q_1$ is $n \times r$ and $T_{11}$ is $r \times r$. We define the separation between the matrices $T_{11}$ and $T_{22}$ by

$$\mathrm{sep}(T_{11}, T_{22}) = \min_{X \neq 0} \frac{\|T_{11}X - XT_{22}\|_F}{\|X\|_F}.$$

It can be shown that an $O(\varepsilon)$ perturbation in $A$ causes an $O(\varepsilon/\mathrm{sep}(T_{11}, T_{22}))$ perturbation in the invariant subspace spanned by the columns of $Q_1$.

We now consider the case where $r = 1$, meaning that $Q_1$ is actually a vector $q_1$, that is also an eigenvector, and $T_{11}$ is the corresponding eigenvalue, $\lambda$. Then, we have

$$\mathrm{sep}(\lambda, T_{22}) = \min_{X \neq 0} \frac{\|\lambda X - XT_{22}\|_F}{\|X\|_F} = \min_{\|y\|_2 = 1} \|y^H(T_{22} - \lambda I)\|_2 = \min_{\|y\|_2 = 1} \|(T_{22} - \lambda I)^H y\|_2 = \sigma_{\min}((T_{22} - \lambda I)^H) = \sigma_{\min}(T_{22} - \lambda I),$$

where the $1 \times (n-1)$ matrix $X$ has been written as $y^H$, and we have used the fact that the Frobenius norm of a vector is equivalent to the vector 2-norm.
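For small matrices, $\mathrm{sep}(T_{11}, T_{22})$ can be computed directly as the smallest singular value of the Sylvester operator $X \mapsto T_{11}X - XT_{22}$, written out with Kronecker products. The sketch below is a minimal illustration (not an efficient method) and also confirms the $r = 1$ reduction derived above on a small example of our own choosing.

    import numpy as np

    def sep(T11, T22):
        # Smallest singular value of the operator X -> T11 X - X T22,
        # using vec(T11 X - X T22) = (I kron T11 - T22^T kron I) vec(X).
        r, s = T11.shape[0], T22.shape[0]
        K = np.kron(np.eye(s), T11) - np.kron(T22.T, np.eye(r))
        return np.linalg.svd(K, compute_uv=False).min()

    # For r = 1, sep(lambda, T22) = sigma_min(T22 - lambda*I).
    lam = 1.0
    T22 = np.array([[2.0, 5.0],
                    [0.0, 3.0]])
    s1 = sep(np.array([[lam]]), T22)
    s2 = np.linalg.svd(T22 - lam * np.eye(2), compute_uv=False).min()
    assert np.isclose(s1, s2)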

Because the smallest singular value indicates the distance to a singular matrix, $\mathrm{sep}(\lambda, T_{22})$ provides a measure of the separation of $\lambda$ from the other eigenvalues of $A$. It follows that eigenvectors are more sensitive to perturbation if the corresponding eigenvalues are clustered near one another; that is, eigenvectors associated with nearby eigenvalues are "wobbly."

It should be emphasized that there is no direct relationship between the sensitivity of an eigenvalue and the sensitivity of its corresponding invariant subspace. The sensitivity of a simple eigenvalue depends on the angle between its left and right eigenvectors, while the sensitivity of an invariant subspace depends on the clustering of the eigenvalues. Therefore, a sensitive eigenvalue, one that is nearly defective, can be associated with an insensitive invariant subspace if it is distant from other eigenvalues, while an insensitive eigenvalue can have a sensitive invariant subspace if it is very close to other eigenvalues.

The Symmetric Eigenvalue Problem

In the symmetric case, the Gerschgorin circles become Gerschgorin intervals, because the eigenvalues of a symmetric matrix are real.

Example. The eigenvalues of the $3 \times 3$ symmetric matrix $A$ are $\lambda(A) = \{\ldots\}$. The Gerschgorin intervals are

$$D_1 = \{x \in \mathbb{R} : |x - 14| \leq 4\}, \quad D_2 = \{x \in \mathbb{R} : |x - 4| \leq 5\}, \quad D_3 = \{x \in \mathbb{R} : \ldots\}.$$

We see that each interval contains one eigenvalue.

The characterization of the eigenvalues of a symmetric matrix as constrained maxima of the Rayleigh quotient leads to the following results about the eigenvalues of a perturbed symmetric matrix. As the eigenvalues are real, and therefore can be ordered, we denote by $\lambda_i(A)$ the $i$th largest eigenvalue of $A$.

Theorem (Wielandt-Hoffman) If $A$ and $A + E$ are $n \times n$ symmetric matrices, then

$$\sum_{i=1}^n (\lambda_i(A + E) - \lambda_i(A))^2 \leq \|E\|_F^2.$$
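A quick numerical sanity check of the Wielandt-Hoffman bound, using a random symmetric matrix and a random symmetric perturbation:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 6
    A = rng.standard_normal((n, n)); A = (A + A.T) / 2           # symmetric A
    E = 1e-3 * rng.standard_normal((n, n)); E = (E + E.T) / 2    # symmetric E

    lam  = np.linalg.eigvalsh(A)        # ordered eigenvalues of A
    lamE = np.linalg.eigvalsh(A + E)    # ordered eigenvalues of A + E
    # Wielandt-Hoffman: total squared eigenvalue movement is at most ||E||_F^2.
    assert np.sum((lamE - lam)**2) <= np.linalg.norm(E, 'fro')**2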

It is also possible to bound the distance between individual eigenvalues of $A$ and $A + E$.

Theorem If $A$ and $A + E$ are $n \times n$ symmetric matrices, then

$$\lambda_n(E) \leq \lambda_k(A + E) - \lambda_k(A) \leq \lambda_1(E).$$

Furthermore,

$$|\lambda_k(A + E) - \lambda_k(A)| \leq \|E\|_2.$$

The second inequality in the above theorem follows directly from the first, as the 2-norm of the symmetric matrix $E$, being equal to its spectral radius, must be equal to the larger of $|\lambda_1(E)|$ or $|\lambda_n(E)|$.

Theorem (Interlacing Property) If $A$ is an $n \times n$ symmetric matrix, and $A_r$ is the $r \times r$ leading principal submatrix of $A$, then, for $r = 1, 2, \ldots, n - 1$,

$$\lambda_{r+1}(A_{r+1}) \leq \lambda_r(A_r) \leq \lambda_r(A_{r+1}) \leq \cdots \leq \lambda_2(A_{r+1}) \leq \lambda_1(A_r) \leq \lambda_1(A_{r+1}).$$

For a symmetric matrix, or even a more general normal matrix, the left eigenvectors and right eigenvectors are the same, from which it follows that every simple eigenvalue is "perfectly conditioned"; that is, the condition number $1/|\cos\theta|$ is equal to 1 because $\theta = 0$ in this case. However, the same results concerning the sensitivity of invariant subspaces from the nonsymmetric case apply in the symmetric case as well: such sensitivity increases as the eigenvalues become more clustered, even though there is no chance of a defective eigenvalue. This is because for a nondefective, repeated eigenvalue, there are infinitely many possible bases of the corresponding invariant subspace. Therefore, as the eigenvalues approach one another, the eigenvectors become more sensitive to small perturbations, for any matrix.

Let $Q_1$ be an $n \times r$ matrix with orthonormal columns, meaning that $Q_1^T Q_1 = I_r$. If it spans an invariant subspace of an $n \times n$ symmetric matrix $A$, then $AQ_1 = Q_1 S$, where $S = Q_1^T A Q_1$. On the other hand, if $\mathrm{range}(Q_1)$ is not an invariant subspace, but the matrix

$$E_1 = AQ_1 - Q_1 S$$

is small for some given $r \times r$ symmetric matrix $S$, then the columns of $Q_1$ define an approximate invariant subspace. It turns out that $\|E_1\|_F$ is minimized by choosing $S = Q_1^T A Q_1$. Furthermore, for this choice we have

$$\|AQ_1 - Q_1 S\|_F = \|P_1^\perp A Q_1\|_F,$$

where $P_1^\perp = I - Q_1 Q_1^T$ is the orthogonal projection onto $\mathrm{range}(Q_1)^\perp$, and there exist eigenvalues $\mu_1, \ldots, \mu_r \in \lambda(A)$ such that

$$|\mu_k - \lambda_k(S)| \leq \sqrt{2}\,\|E_1\|_2, \quad k = 1, \ldots, r.$$

That is, $r$ eigenvalues of $A$ are close to the eigenvalues of $S$, which are known as Ritz values, while the corresponding eigenvectors are called Ritz vectors.
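Both claims above — that $S = Q_1^T A Q_1$ minimizes $\|E_1\|_F$, and that each eigenvalue of $S$ lies within $\sqrt{2}\,\|E_1\|_2$ of an eigenvalue of $A$ — can be verified numerically. A minimal sketch with random data:

    import numpy as np

    rng = np.random.default_rng(3)
    n, r = 8, 3
    A = rng.standard_normal((n, n)); A = (A + A.T) / 2    # symmetric A
    Q1 = np.linalg.qr(rng.standard_normal((n, r)))[0]     # orthonormal columns

    S_opt = Q1.T @ A @ Q1
    E1 = A @ Q1 - Q1 @ S_opt
    best = np.linalg.norm(E1, 'fro')
    for _ in range(100):
        # No symmetric competitor S should beat S_opt = Q1^T A Q1.
        S = rng.standard_normal((r, r)); S = (S + S.T) / 2
        assert best <= np.linalg.norm(A @ Q1 - Q1 @ S, 'fro') + 1e-12

    # Each eigenvalue of S_opt lies within sqrt(2)*||E1||_2 of one of A's.
    mu = np.linalg.eigvalsh(A)
    for theta in np.linalg.eigvalsh(S_opt):
        assert np.min(np.abs(mu - theta)) <= np.sqrt(2) * np.linalg.norm(E1, 2)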

If $(\theta_k, y_k)$ is an eigenvalue-eigenvector pair, or an eigenpair, of $S$, then, because $S$ is defined by $S = Q_1^T A Q_1$, it is also known as a Ritz pair. Furthermore, as $\theta_k$ is an approximate eigenvalue of $A$, $Q_1 y_k$ is an approximate corresponding eigenvector. To see this, let $\sigma_k$ (not to be confused with a singular value) be an eigenvalue of $S$, with eigenvector $y_k$. We multiply both sides of the equation $Sy_k = \sigma_k y_k$ by $Q_1$:

$$Q_1 S y_k = \sigma_k Q_1 y_k.$$

Then, we use the relation $AQ_1 - Q_1 S = E_1$ to obtain

$$(AQ_1 - E_1) y_k = \sigma_k Q_1 y_k.$$

Rearranging yields

$$A(Q_1 y_k) = \sigma_k (Q_1 y_k) + E_1 y_k.$$

If we let $x_k = Q_1 y_k$, then we conclude

$$Ax_k = \sigma_k x_k + E_1 y_k.$$

Therefore, if $E_1$ is small in some norm, $Q_1 y_k$ is nearly an eigenvector.
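The identity $Ax_k = \sigma_k x_k + E_1 y_k$ gives a computable residual: $\|Ax_k - \sigma_k x_k\|_2 = \|E_1 y_k\|_2 \leq \|E_1\|_2$. A minimal sketch with random data:

    import numpy as np

    rng = np.random.default_rng(2)
    n, r = 8, 3
    A = rng.standard_normal((n, n)); A = (A + A.T) / 2    # symmetric A
    Q1 = np.linalg.qr(rng.standard_normal((n, r)))[0]     # orthonormal columns

    S = Q1.T @ A @ Q1                  # optimal choice of S
    E1 = A @ Q1 - Q1 @ S               # residual matrix
    sigma, Y = np.linalg.eigh(S)       # Ritz pairs (sigma_k, y_k) of S
    X = Q1 @ Y                         # Ritz vectors x_k = Q1 y_k
    for k in range(r):
        resid = np.linalg.norm(A @ X[:, k] - sigma[k] * X[:, k])
        assert np.isclose(resid, np.linalg.norm(E1 @ Y[:, k]))
        assert resid <= np.linalg.norm(E1, 2) + 1e-12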
