Linear Algebra: Matrix Eigenvalue Problems



8.0 Linear Algebra: Matrix Eigenvalue Problems

A matrix eigenvalue problem considers the vector equation

(1) Ax = λx.

Here A is a given square matrix, λ an unknown scalar, and x an unknown vector. The task is to determine λ's and x's that satisfy (1). Since x = 0 is always a solution for any λ and thus not interesting, we admit only solutions with x ≠ 0. The solutions to (1) are given the following names: the λ's that satisfy (1) are called eigenvalues of A, and the corresponding nonzero x's that also satisfy (1) are called eigenvectors of A.

8.1 The Matrix Eigenvalue Problem. Determining Eigenvalues and Eigenvectors

We formalize our observation. Let A = [a_jk] be a given nonzero square n × n matrix. Consider the vector equation

(1) Ax = λx.

The problem of finding nonzero x's and λ's that satisfy equation (1) is called an eigenvalue problem.

A value of λ for which (1) has a solution x ≠ 0 is called an eigenvalue or characteristic value of the matrix A. The corresponding solutions x ≠ 0 of (1) are called the eigenvectors or characteristic vectors of A corresponding to that eigenvalue λ. The set of all the eigenvalues of A is called the spectrum of A. We shall see that the spectrum consists of at least one eigenvalue and at most n numerically different eigenvalues. The largest of the absolute values of the eigenvalues of A is called the spectral radius of A, a name to be motivated later.

How to Find Eigenvalues and Eigenvectors

EXAMPLE 1. Determination of Eigenvalues and Eigenvectors

We illustrate all the steps in terms of the matrix

A = [ -5   2 ]
    [  2  -2 ]

Solution. (a) Eigenvalues. These must be determined first. Equation (1) in components is

-5x1 + 2x2 = λx1
 2x1 - 2x2 = λx2.

Transferring the terms on the right to the left, we get

(2*) (-5 - λ)x1 + 2x2 = 0
      2x1 + (-2 - λ)x2 = 0.

This can be written in matrix notation

(3*) (A - λI)x = 0,

because (1) is Ax - λx = Ax - λIx = (A - λI)x = 0, which gives (3*).

We see that this is a homogeneous linear system. It has a nontrivial solution (an eigenvector of A we are looking for) if and only if its coefficient determinant is zero, that is,

(4*) D(λ) = det(A - λI) = | -5-λ    2   |
                          |   2   -2-λ  |
     = (-5 - λ)(-2 - λ) - 4 = λ² + 7λ + 6 = 0.

We call D(λ) the characteristic determinant or, if expanded, the characteristic polynomial, and D(λ) = 0 the characteristic equation of A. The solutions of this quadratic equation are λ1 = -1 and λ2 = -6. These are the eigenvalues of A.

(b1) Eigenvector of A corresponding to λ1. This vector is obtained from (2*) with λ = λ1 = -1, that is,

-4x1 + 2x2 = 0
 2x1 -  x2 = 0.

A solution is x2 = 2x1, as we see from either of the two equations, so that we need only one of them. This determines an eigenvector corresponding to λ1 = -1 up to a scalar multiple. If we choose x1 = 1, we obtain the eigenvector

x1 = [1  2]^T.   Check: Ax1 = [-1  -2]^T = (-1)x1 = λ1x1.

(b2) Eigenvector of A corresponding to λ2. For λ = λ2 = -6, equation (2*) becomes

 x1 + 2x2 = 0
2x1 + 4x2 = 0.

A solution is x2 = -x1/2 with arbitrary x1. If we choose x1 = 2, we get x2 = -1. Thus an eigenvector of A corresponding to λ2 = -6 is

x2 = [2  -1]^T.   Check: Ax2 = [-12  6]^T = (-6)x2 = λ2x2.
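The hand computation of Example 1 can be verified numerically. The following sketch (an addition to the text, assuming NumPy is available) computes the eigenvalues and eigenvectors of the matrix of Example 1:

```python
import numpy as np

# Matrix of Example 1
A = np.array([[-5.0, 2.0],
              [2.0, -2.0]])

# np.linalg.eig returns eigenvalues and unit eigenvectors (as columns)
eigvals, eigvecs = np.linalg.eig(A)
order = np.argsort(eigvals)      # sort so that -6 comes first, then -1
eigvals = eigvals[order]
eigvecs = eigvecs[:, order]

# Each column v satisfies A v = lambda v, as in the checks above
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)

print(eigvals)  # approximately [-6., -1.]
```

Note that np.linalg.eig returns unit eigenvectors, so they are scalar multiples of the hand-computed [1 2]^T and [2 -1]^T, consistent with eigenvectors being determined only up to a nonzero factor.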

This example illustrates the general case as follows. Equation (1) written in components is

a11x1 + a12x2 + … + a1nxn = λx1
a21x1 + a22x2 + … + a2nxn = λx2
…
an1x1 + an2x2 + … + annxn = λxn.

Transferring the terms on the right side to the left side, we have

(2) (a11 - λ)x1 + a12x2 + … + a1nxn = 0
    a21x1 + (a22 - λ)x2 + … + a2nxn = 0
    …
    an1x1 + an2x2 + … + (ann - λ)xn = 0.

In matrix notation,

(3) (A - λI)x = 0.

By Cramer's theorem in Sec. 7.7, this homogeneous linear system of equations has a nontrivial solution if and only if the corresponding determinant of the coefficients is zero:

(4) D(λ) = det(A - λI) = | a11-λ   a12    …   a1n   |
                         | a21     a22-λ  …   a2n   |
                         | …                        |
                         | an1     an2    …   ann-λ | = 0.

A - λI is called the characteristic matrix and D(λ) the characteristic determinant of A. Equation (4) is called the characteristic equation of A. By developing D(λ) we obtain a polynomial of nth degree in λ. This is called the characteristic polynomial of A.

Theorem 1. Eigenvalues

The eigenvalues of a square matrix A are the roots of the characteristic equation (4) of A. Hence an n × n matrix has at least one eigenvalue and at most n numerically different eigenvalues.
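Theorem 1 can be illustrated numerically: np.poly gives the coefficients of the characteristic polynomial det(λI - A), and its roots are exactly the eigenvalues. A sketch (an addition, assuming NumPy; the matrix is the one from Example 1):

```python
import numpy as np

A = np.array([[-5.0, 2.0],
              [2.0, -2.0]])

# Coefficients of det(lambda*I - A) = lambda^2 + 7*lambda + 6
coeffs = np.poly(A)

# By Theorem 1, the roots of the characteristic polynomial
# are the eigenvalues of A
roots = np.sort(np.roots(coeffs))

assert np.allclose(coeffs, [1.0, 7.0, 6.0])
assert np.allclose(roots, np.sort(np.linalg.eigvals(A)))
```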

The eigenvalues must be determined first. Once these are known, corresponding eigenvectors are obtained from the system (2), for instance by Gauss elimination, where λ is the eigenvalue for which an eigenvector is wanted.

Theorem 2. Eigenvectors, Eigenspace

If w and x are eigenvectors of a matrix A corresponding to the same eigenvalue λ, so are w + x (provided x ≠ -w) and kx for any k ≠ 0. Hence the eigenvectors corresponding to one and the same eigenvalue λ of A, together with 0, form a vector space, called the eigenspace of A corresponding to that λ.
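A quick numerical illustration of Theorem 2 (an added sketch, assuming NumPy; the matrix and eigenvectors are those of Example 1): sums and nonzero scalar multiples of eigenvectors for the same eigenvalue are again eigenvectors.

```python
import numpy as np

A = np.array([[-5.0, 2.0],
              [2.0, -2.0]])
lam = -1.0                   # eigenvalue of A from Example 1

w = np.array([1.0, 2.0])     # eigenvector for lam
x = np.array([3.0, 6.0])     # another eigenvector for the same lam (a multiple of w)

# w + x and k*x (k != 0) are again eigenvectors for lam
assert np.allclose(A @ (w + x), lam * (w + x))
assert np.allclose(A @ (5.0 * x), lam * (5.0 * x))
```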

In particular, an eigenvector x is determined only up to a constant factor. Hence we can normalize x, that is, multiply it by a scalar to get a unit vector.

EXAMPLE 2. Multiple Eigenvalues

Find the eigenvalues and eigenvectors of

A = [ -2   2  -3 ]
    [  2   1  -6 ]
    [ -1  -2   0 ]

Solution. For our matrix, the characteristic determinant gives the characteristic equation

-λ³ - λ² + 21λ + 45 = 0.

The roots (eigenvalues of A) are λ1 = 5 and λ2 = λ3 = -3.

To find eigenvectors, we apply Gauss elimination to the system (A - λI)x = 0, first with λ = 5 and then with λ = -3. For λ = 5 the characteristic matrix is

A - λI = A - 5I = [ -7   2  -3 ]
                  [  2  -4  -6 ]
                  [ -1  -2  -5 ].

It row-reduces to

[ -7     2      -3    ]
[  0  -24/7  -48/7    ]
[  0     0       0    ].

Hence it has rank 2. Choosing x3 = -1 we have x2 = 2 from -(24/7)x2 - (48/7)x3 = 0, and then x1 = 1 from -7x1 + 2x2 - 3x3 = 0. Hence an eigenvector of A corresponding to λ = 5 is x1 = [1  2  -1]^T. For λ = -3 the characteristic matrix

A - λI = A + 3I = [  1   2  -3 ]
                  [  2   4  -6 ]
                  [ -1  -2   3 ]

row-reduces to

[ 1  2  -3 ]
[ 0  0   0 ]
[ 0  0   0 ].

Hence it has rank 1, and there are 2 free variables. From x1 + 2x2 - 3x3 = 0 we have x1 = -2x2 + 3x3. Choosing (1) x2 = 1, x3 = 0 and (2) x2 = 0, x3 = 1, we obtain two linearly independent eigenvectors of A corresponding to λ = -3:

x2 = [ -2  1  0 ]^T   and   x3 = [ 3  0  1 ]^T.

The order M_λ of an eigenvalue λ as a root of the characteristic polynomial is called the algebraic multiplicity of λ. The number m_λ of linearly independent eigenvectors corresponding to λ is called the geometric multiplicity of λ. Thus m_λ is the dimension of the eigenspace corresponding to this λ. Since the characteristic polynomial has degree n, the sum of all the algebraic multiplicities must equal n. In Example 2 for λ = -3 we have m_λ = M_λ = 2. In general, m_λ ≤ M_λ, as can be shown. The difference Δ_λ = M_λ - m_λ is called the defect of λ. Thus Δ_λ = 0 for λ = -3 in Example 2, but positive defects Δ_λ can easily occur.
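For the matrix of Example 2 the geometric multiplicity can be computed directly via rank, since m_λ = n - rank(A - λI). A sketch (an addition, assuming NumPy):

```python
import numpy as np

# Matrix of Example 2
A = np.array([[-2.0, 2.0, -3.0],
              [2.0, 1.0, -6.0],
              [-1.0, -2.0, 0.0]])

def geometric_multiplicity(A, lam):
    # dimension of the null space of A - lambda*I
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n))

m5 = geometric_multiplicity(A, 5.0)    # simple eigenvalue: rank 2, m = 1
m3 = geometric_multiplicity(A, -3.0)   # double eigenvalue: rank 1, m = 2
print(m5, m3)  # 1 2
```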

Theorem 3. Eigenvalues of the Transpose

The transpose A^T of a square matrix A has the same eigenvalues as A.

8.3 Symmetric, Skew-Symmetric, and Orthogonal Matrices

Definitions. Symmetric, Skew-Symmetric, and Orthogonal Matrices

A real square matrix A = [a_jk] is called

symmetric if transposition leaves it unchanged,
(1) A^T = A, thus a_kj = a_jk,

skew-symmetric if transposition gives the negative of A,
(2) A^T = -A, thus a_kj = -a_jk,

orthogonal if transposition gives the inverse of A,
(3) A^T = A^(-1).

Any real square matrix A may be written as the sum of a symmetric matrix R and a skew-symmetric matrix S, where

(4) R = (1/2)(A + A^T)   and   S = (1/2)(A - A^T).
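The decomposition (4) is easy to verify numerically. A sketch (an addition, assuming NumPy; the example matrix is arbitrary, not taken from the text):

```python
import numpy as np

# Arbitrary real square matrix, chosen for illustration
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])

R = 0.5 * (A + A.T)   # symmetric part
S = 0.5 * (A - A.T)   # skew-symmetric part

assert np.allclose(R, R.T)     # R is symmetric
assert np.allclose(S, -S.T)    # S is skew-symmetric
assert np.allclose(R + S, A)   # they sum to A
```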

Theorem 1. Eigenvalues of Symmetric and Skew-Symmetric Matrices

(a) The eigenvalues of a symmetric matrix are real.
(b) The eigenvalues of a skew-symmetric matrix are pure imaginary or zero.

Orthogonal Transformations and Orthogonal Matrices

Orthogonal transformations are transformations

(5) y = Ax   where A is an orthogonal matrix.

With each vector x in R^n such a transformation assigns a vector y in R^n. For instance, the plane rotation through an angle θ,

(6) [ y1 ] = [ cos θ  -sin θ ] [ x1 ]
    [ y2 ]   [ sin θ   cos θ ] [ x2 ],

is an orthogonal transformation.

Theorem 2. Invariance of Inner Product

An orthogonal transformation preserves the value of the inner product of vectors a and b in R^n, defined by

(7) a • b = a^T b = [a1 … an] [ b1 ]
                              [ …  ]
                              [ bn ].

That is, for any a and b in R^n, orthogonal n × n matrix A, and u = Aa, v = Ab, we have u • v = a • b. Hence the transformation also preserves the length or norm of any vector a in R^n, given by

(8) ||a|| = √(a • a) = √(a^T a).
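Theorem 2 can be checked with the plane rotation (6). A sketch (an addition, assuming NumPy; the angle and vectors are arbitrary illustrations):

```python
import numpy as np

theta = 0.7   # arbitrary rotation angle
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta), np.cos(theta)]])   # orthogonal rotation matrix

a = np.array([3.0, -1.0])
b = np.array([2.0, 5.0])
u, v = A @ a, A @ b

# Inner product and norm are preserved under the orthogonal transformation
assert np.allclose(u @ v, a @ b)
assert np.allclose(np.linalg.norm(u), np.linalg.norm(a))
```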

Theorem 3. Orthonormality of Column and Row Vectors

A real square matrix is orthogonal if and only if its column vectors a1, …, an (and also its row vectors) form an orthonormal system, that is,

(10) a_j • a_k = a_j^T a_k = {0 if j ≠ k; 1 if j = k}.

Theorem 4. Determinant of an Orthogonal Matrix

The determinant of an orthogonal matrix has the value +1 or -1.

Theorem 5. Eigenvalues of an Orthogonal Matrix

The eigenvalues of an orthogonal matrix A are real or complex conjugates in pairs and have absolute value 1.

8.4 Eigenbases. Diagonalization. Quadratic Forms

Eigenvectors of an n × n matrix A may (or may not!) form a basis for R^n. If we are interested in a transformation y = Ax, such an eigenbasis (basis of eigenvectors), if it exists, is of great advantage, because then we can represent any x in R^n uniquely as a linear combination of the eigenvectors x1, …, xn, say,

x = c1x1 + c2x2 + … + cnxn.

And, denoting the corresponding (not necessarily distinct) eigenvalues of the matrix A by λ1, …, λn, we have Axj = λjxj, so that we simply obtain

(1) y = Ax = A(c1x1 + … + cnxn) = c1Ax1 + … + cnAxn = c1λ1x1 + … + cnλnxn.

This shows that we have decomposed the complicated action of A on an arbitrary vector x into a sum of simple actions along the eigenvectors of A. This is the point of an eigenbasis.

Theorem 1. Basis of Eigenvectors

If an n × n matrix A has n distinct eigenvalues, then A has a basis of eigenvectors x1, …, xn for R^n.

Theorem 2. Symmetric Matrices

A symmetric matrix has an orthonormal basis of eigenvectors for R^n.

Similarity of Matrices. Diagonalization

DEFINITION. Similar Matrices. Similarity Transformation

An n × n matrix Â is called similar to an n × n matrix A if

(4) Â = P^(-1)AP

for some (nonsingular!) n × n matrix P. This transformation, which gives Â from A, is called a similarity transformation.

Theorem 3. Eigenvalues and Eigenvectors of Similar Matrices

If Â is similar to A, then Â has the same eigenvalues as A. Furthermore, if x is an eigenvector of A, then y = P^(-1)x is an eigenvector of Â corresponding to the same eigenvalue.

Theorem 4. Diagonalization of a Matrix

If an n × n matrix A has a basis of eigenvectors, then

(5) D = X^(-1)AX

is diagonal, with the eigenvalues of A as the entries on the main diagonal. Here X is the matrix with these eigenvectors as column vectors. Also,

(5*) D^m = X^(-1)A^mX   (m = 2, 3, …).

EXAMPLE 4. Diagonalization

Diagonalize

A = [   7.3   0.2  -3.7 ]
    [ -11.5   1.0   5.5 ]
    [  17.7   1.8  -9.3 ].

Solution. The characteristic determinant gives the characteristic equation -λ³ - λ² + 12λ = 0. The roots (eigenvalues of A) are λ1 = 3, λ2 = -4, λ3 = 0. By Gauss elimination applied to (A - λI)x = 0 with λ = λ1, λ2, λ3 we find eigenvectors to form X, and then find X^(-1) by Gauss-Jordan elimination.

Solution (continued). The results are

x1 = [ -1 ]    x2 = [  1 ]    x3 = [ 2 ]
     [  3 ],        [ -1 ],        [ 1 ],
     [ -1 ]         [  3 ]         [ 4 ]

X = [ -1   1   2 ]        X^(-1) = [ -0.7   0.2   0.3 ]
    [  3  -1   1 ]                 [ -1.3  -0.2   0.7 ]
    [ -1   3   4 ],                [  0.8   0.2  -0.2 ].

Calculating AX and multiplying by X^(-1) from the left, we thus obtain

D = X^(-1)AX = X^(-1)[Ax1  Ax2  Ax3] = X^(-1)[3x1  -4x2  0·x3] = [ 3   0   0 ]
                                                                 [ 0  -4   0 ]
                                                                 [ 0   0   0 ].
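The diagonalization of Example 4 can be verified directly (an added check, assuming NumPy):

```python
import numpy as np

# Matrix of Example 4
A = np.array([[7.3, 0.2, -3.7],
              [-11.5, 1.0, 5.5],
              [17.7, 1.8, -9.3]])

# Columns of X are the eigenvectors found above
X = np.array([[-1.0, 1.0, 2.0],
              [3.0, -1.0, 1.0],
              [-1.0, 3.0, 4.0]])

# D = X^(-1) A X should be diagonal with the eigenvalues 3, -4, 0
D = np.linalg.inv(X) @ A @ X
assert np.allclose(D, np.diag([3.0, -4.0, 0.0]))
```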

Quadratic Forms. Transformation to Principal Axes

By definition, a quadratic form Q in the components x1, …, xn of a vector x is a sum of n² terms, namely

(7) Q = x^T Ax = Σ_{j=1}^n Σ_{k=1}^n a_jk x_j x_k
      = a11x1²  + a12x1x2 + … + a1nx1xn
      + a21x2x1 + a22x2²  + … + a2nx2xn
      + …
      + an1xnx1 + an2xnx2 + … + annxn².

A = [a_jk] is called the coefficient matrix of the form. We may assume that A is symmetric, because we can take off-diagonal terms together in pairs and write the result as a sum of two equal terms.

EXAMPLE 5. Quadratic Form. Symmetric Coefficient Matrix

Let

x^T Ax = [x1  x2] [ 3  4 ] [ x1 ]
                  [ 6  2 ] [ x2 ]
       = 3x1² + 4x1x2 + 6x2x1 + 2x2² = 3x1² + 10x1x2 + 2x2².

From the corresponding symmetric matrix C = [c_jk], where c_jk = ½(a_jk + a_kj), thus c11 = 3, c12 = c21 = 5, c22 = 2, we get the same result; indeed,

x^T Cx = [x1  x2] [ 3  5 ] [ x1 ]
                  [ 5  2 ] [ x2 ]
       = 3x1² + 5x1x2 + 5x2x1 + 2x2² = 3x1² + 10x1x2 + 2x2².

By Theorem 2, the symmetric coefficient matrix A of (7) has an orthonormal basis of eigenvectors. Hence if we take these as column vectors, we obtain a matrix X that is orthogonal, so that X^(-1) = X^T. From (5) we thus have A = XDX^(-1) = XDX^T. Substitution into (7) gives

(8) Q = x^T XDX^T x.

If we set X^T x = y, then, since X^(-1) = X^T, we have X^(-1)x = y and thus obtain

(9) x = Xy.

Furthermore, in (8) we have x^T X = (X^T x)^T = y^T and X^T x = y, so that Q becomes simply

(10) Q = y^T Dy = λ1y1² + λ2y2² + … + λnyn².

Theorem 5. Principal Axes Theorem

The substitution (9) transforms a quadratic form

Q = x^T Ax = Σ_{j=1}^n Σ_{k=1}^n a_jk x_j x_k   (a_kj = a_jk)

to the principal axes form or canonical form (10), where λ1, …, λn are the (not necessarily distinct) eigenvalues of the (symmetric!) matrix A, and X is an orthogonal matrix with corresponding eigenvectors x1, …, xn, respectively, as column vectors.

EXAMPLE 6. Transformation to Principal Axes. Conic Sections

Find out what type of conic section the following quadratic form represents and transform it to principal axes:

Q = 17x1² - 30x1x2 + 17x2² = 128.

Solution. We have Q = x^T Ax, where

A = [  17  -15 ]        x = [ x1 ]
    [ -15   17 ],           [ x2 ].

This gives the characteristic equation (17 - λ)² - 15² = 0. It has the roots λ1 = 2, λ2 = 32. Hence (10) becomes

Q = 2y1² + 32y2².

We see that Q = 128 represents the ellipse 2y1² + 32y2² = 128, that is,

y1²/8² + y2²/2² = 1.

If we want to know the direction of the principal axes in the x1x2-coordinates, we have to determine normalized eigenvectors from (A - λI)x = 0 with λ = λ1 = 2 and λ = λ2 = 32 and then use (9). We get

[ 1/√2 ]        [ -1/√2 ]
[ 1/√2 ]  and   [  1/√2 ],

hence

x = Xy = [ 1/√2  -1/√2 ] [ y1 ]        x1 = y1/√2 - y2/√2,
         [ 1/√2   1/√2 ] [ y2 ],       x2 = y1/√2 + y2/√2.

This is a 45° rotation.
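The principal-axes transformation of Example 6 can be confirmed numerically (an added sketch, assuming NumPy; the test vector y is an arbitrary illustration):

```python
import numpy as np

# Symmetric coefficient matrix of Example 6
A = np.array([[17.0, -15.0],
              [-15.0, 17.0]])

s = 1.0 / np.sqrt(2.0)
# Orthonormal eigenvectors as columns: the 45-degree rotation of the text
X = np.array([[s, -s],
              [s, s]])

# X^T A X is diagonal with the eigenvalues 2 and 32
assert np.allclose(X.T @ A @ X, np.diag([2.0, 32.0]))

# Q(x) = x^T A x equals 2*y1^2 + 32*y2^2 under the substitution x = X y
y = np.array([1.5, -0.4])
x = X @ y
assert np.allclose(x @ A @ x, 2.0 * y[0]**2 + 32.0 * y[1]**2)
```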

8.5 Complex Matrices and Forms. Optional

Notations

Ā = [ā_jk] is obtained from A = [a_jk] by replacing each entry a_jk = α + iβ (α, β real) with its complex conjugate ā_jk = α - iβ. Also, Ā^T = [ā_kj] is the transpose of Ā, hence the conjugate transpose of A.

DEFINITION. Hermitian, Skew-Hermitian, and Unitary Matrices

A square matrix A = [a_jk] is called

Hermitian if Ā^T = A, that is, ā_kj = a_jk,
skew-Hermitian if Ā^T = -A, that is, ā_kj = -a_jk,
unitary if Ā^T = A^(-1).

Eigenvalues

It is quite remarkable that the matrices under consideration have spectra (sets of eigenvalues; see Sec. 8.1) that can be characterized in a general way as follows (see Fig. 163).

Fig. 163. Location of the eigenvalues of Hermitian, skew-Hermitian, and unitary matrices in the complex λ-plane

Theorem 1. Eigenvalues

(a) The eigenvalues of a Hermitian matrix (and thus of a symmetric matrix) are real.
(b) The eigenvalues of a skew-Hermitian matrix (and thus of a skew-symmetric matrix) are pure imaginary or zero.
(c) The eigenvalues of a unitary matrix (and thus of an orthogonal matrix) have absolute value 1.
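Theorem 1 can be illustrated with small example matrices (an added sketch, assuming NumPy; the matrices are chosen for illustration, not taken from the text):

```python
import numpy as np

H = np.array([[2.0, 1 + 1j],
              [1 - 1j, 3.0]])            # Hermitian: conj(H).T == H
S = np.array([[1j, 1.0],
              [-1.0, 2j]])               # skew-Hermitian: conj(S).T == -S
U = np.array([[1.0, 1.0],
              [1j, -1j]]) / np.sqrt(2)   # unitary: conj(U).T == inverse of U

assert np.allclose(np.conj(H).T, H)
assert np.allclose(np.conj(S).T, -S)
assert np.allclose(np.conj(U).T @ U, np.eye(2))

# (a) real, (b) pure imaginary or zero, (c) absolute value 1
assert np.allclose(np.linalg.eigvals(H).imag, 0)
assert np.allclose(np.linalg.eigvals(S).real, 0)
assert np.allclose(np.abs(np.linalg.eigvals(U)), 1.0)
```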

Theorem 2. Invariance of Inner Product

A unitary transformation, that is, y = Ax with a unitary matrix A, preserves the value of the inner product (4), hence also the norm (5).

DEFINITION. Unitary System

A unitary system is a set of complex vectors satisfying the relationships

(6) a_j • a_k = ā_j^T a_k = {0 if j ≠ k; 1 if j = k}.

Theorem 4. Determinant of a Unitary Matrix

Let A be a unitary matrix. Then its determinant has absolute value one, that is, |det A| = 1.

Theorem 5. Basis of Eigenvectors

A Hermitian, skew-Hermitian, or unitary matrix has a basis of eigenvectors for C^n that is a unitary system.

Hermitian and Skew-Hermitian Forms

The concept of a quadratic form (Sec. 8.4) can be extended to complex. We call x̄^T Ax a form in the components x1, …, xn of x, which may now be complex. This form is again a sum of n² terms:

(7) x̄^T Ax = Σ_{j=1}^n Σ_{k=1}^n a_jk x̄_j x_k
           = a11x̄1x1 + … + a1nx̄1xn
           + a21x̄2x1 + … + a2nx̄2xn
           + …
           + an1x̄nx1 + … + annx̄nxn.

A is called its coefficient matrix. The form is called a Hermitian or skew-Hermitian form if A is Hermitian or skew-Hermitian, respectively. The value of a Hermitian form is real, and that of a skew-Hermitian form is pure imaginary or zero.

SUMMARY OF CHAPTER 8. Linear Algebra: Matrix Eigenvalue Problems

The practical importance of matrix eigenvalue problems can hardly be overrated. The problems are defined by the vector equation

(1) Ax = λx.

A is a given square matrix. All matrices in this chapter are square. λ is a scalar. To solve the problem (1) means to determine values of λ, called eigenvalues (or characteristic values) of A, such that (1) has a nontrivial solution x (that is, x ≠ 0), called an eigenvector of A corresponding to that λ. An n × n matrix has at least one and at most n numerically different eigenvalues.

These are the solutions of the characteristic equation (Sec. 8.1)

(2) D(λ) = det(A - λI) = | a11-λ   a12    …   a1n   |
                         | a21     a22-λ  …   a2n   |
                         | …                        |
                         | an1     an2    …   ann-λ | = 0.

D(λ) is called the characteristic determinant of A. By expanding it we get the characteristic polynomial of A, which is of degree n in λ. Some typical applications are shown in Sec. 8.2.

Section 8.3 is devoted to eigenvalue problems for symmetric (A^T = A), skew-symmetric (A^T = -A), and orthogonal matrices (A^T = A^(-1)). Section 8.4 concerns the diagonalization of matrices and the transformation of quadratic forms to principal axes and its relation to eigenvalues. Section 8.5 extends Sec. 8.3 to the complex analogs of those real matrices, called Hermitian (Ā^T = A), skew-Hermitian (Ā^T = -A), and unitary matrices (Ā^T = A^(-1)). All the eigenvalues of a Hermitian matrix (and a symmetric one) are real. For a skew-Hermitian (and a skew-symmetric) matrix they are pure imaginary or zero. For a unitary (and an orthogonal) matrix they have absolute value 1.


LU Factorization. A m x n matrix A admits an LU factorization if it can be written in the form of A = LU

LU Factorization A m n matri A admits an LU factorization if it can be written in the form of Where, A = LU L : is a m m lower triangular matri with s on the diagonal. The matri L is invertible and is

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 16: Eigenvalue Problems; Similarity Transformations Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 18 Eigenvalue

Linear Algebra. Workbook

Linear Algebra Workbook Paul Yiu Department of Mathematics Florida Atlantic University Last Update: November 21 Student: Fall 2011 Checklist Name: A B C D E F F G H I J 1 2 3 4 5 6 7 8 9 10 xxx xxx xxx

235 Final exam review questions

5 Final exam review questions Paul Hacking December 4, 0 () Let A be an n n matrix and T : R n R n, T (x) = Ax the linear transformation with matrix A. What does it mean to say that a vector v R n is an

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

Math 108b: Notes on the Spectral Theorem

Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator

Definitions for Quizzes

Definitions for Quizzes Italicized text (or something close to it) will be given to you. Plain text is (an example of) what you should write as a definition. [Bracketed text will not be given, nor does

Knowledge Discovery and Data Mining 1 (VO) ( )

Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB

Glossary of Linear Algebra Terms Basis (for a subspace) A linearly independent set of vectors that spans the space Basic Variable A variable in a linear system that corresponds to a pivot column in the

Math Matrix Algebra

Math 44 - Matrix Algebra Review notes - (Alberto Bressan, Spring 7) sec: Orthogonal diagonalization of symmetric matrices When we seek to diagonalize a general n n matrix A, two difficulties may arise:

Chapter 5. Linear Algebra. A linear (algebraic) equation in. unknowns, x 1, x 2,..., x n, is. an equation of the form

Chapter 5. Linear Algebra A linear (algebraic) equation in n unknowns, x 1, x 2,..., x n, is an equation of the form a 1 x 1 + a 2 x 2 + + a n x n = b where a 1, a 2,..., a n and b are real numbers. 1

Introduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz

Introduction to Mobile Robotics Compact Course on Linear Algebra Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz Vectors Arrays of numbers Vectors represent a point in a n dimensional space

Mathematical Methods wk 2: Linear Operators

John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2018 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing

Linear Algebra Highlights

Linear Algebra Highlights Chapter 1 A linear equation in n variables is of the form a 1 x 1 + a 2 x 2 + + a n x n. We can have m equations in n variables, a system of linear equations, which we want to

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

CS 246 Review of Linear Algebra 01/17/19

1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector

Properties of Linear Transformations from R n to R m

Properties of Linear Transformations from R n to R m MATH 322, Linear Algebra I J. Robert Buchanan Department of Mathematics Spring 2015 Topic Overview Relationship between the properties of a matrix transformation

Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

6 Inner Product Spaces

Lectures 16,17,18 6 Inner Product Spaces 6.1 Basic Definition Parallelogram law, the ability to measure angle between two vectors and in particular, the concept of perpendicularity make the euclidean space

Mathematical Methods for Engineers and Scientists 1

K.T. Tang Mathematical Methods for Engineers and Scientists 1 Complex Analysis, Determinants and Matrices With 49 Figures and 2 Tables fyj Springer Part I Complex Analysis 1 Complex Numbers 3 1.1 Our Number

4. Determinants.

4. Determinants 4.1. Determinants; Cofactor Expansion Determinants of 2 2 and 3 3 Matrices 2 2 determinant 4.1. Determinants; Cofactor Expansion Determinants of 2 2 and 3 3 Matrices 3 3 determinant 4.1.

Mobile Robotics 1. A Compact Course on Linear Algebra. Giorgio Grisetti

Mobile Robotics 1 A Compact Course on Linear Algebra Giorgio Grisetti SA-1 Vectors Arrays of numbers They represent a point in a n dimensional space 2 Vectors: Scalar Product Scalar-Vector Product Changes

Maths for Signals and Systems Linear Algebra in Engineering

Maths for Signals and Systems Linear Algebra in Engineering Lectures 13 15, Tuesday 8 th and Friday 11 th November 016 DR TANIA STATHAKI READER (ASSOCIATE PROFFESOR) IN SIGNAL PROCESSING IMPERIAL COLLEGE

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm Name: The proctor will let you read the following conditions

Math 315: Linear Algebra Solutions to Assignment 7

Math 5: Linear Algebra s to Assignment 7 # Find the eigenvalues of the following matrices. (a.) 4 0 0 0 (b.) 0 0 9 5 4. (a.) The characteristic polynomial det(λi A) = (λ )(λ )(λ ), so the eigenvalues are

The matrix will only be consistent if the last entry of row three is 0, meaning 2b 3 + b 2 b 1 = 0.

) Find all solutions of the linear system. Express the answer in vector form. x + 2x + x + x 5 = 2 2x 2 + 2x + 2x + x 5 = 8 x + 2x + x + 9x 5 = 2 2 Solution: Reduce the augmented matrix [ 2 2 2 8 ] to

Linear Algebra Primer

Linear Algebra Primer D.S. Stutts November 8, 995 Introduction This primer was written to provide a brief overview of the main concepts and methods in elementary linear algebra. It was not intended to

APPLICATIONS The eigenvalues are λ = 5, 5. An orthonormal basis of eigenvectors consists of

CHAPTER III APPLICATIONS The eigenvalues are λ =, An orthonormal basis of eigenvectors consists of, The eigenvalues are λ =, A basis of eigenvectors consists of, 4 which are not perpendicular However,

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic

Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.2: Fundamentals 2 / 31 Eigenvalues and Eigenvectors Eigenvalues and eigenvectors of

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same.

Introduction Matrix Operations Matrix: An m n matrix A is an m-by-n array of scalars from a field (for example real numbers) of the form a a a n a a a n A a m a m a mn The order (or size) of A is m n (read

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

Exercise Sheet 1.

Exercise Sheet 1 You can download my lecture and exercise sheets at the address http://sami.hust.edu.vn/giang-vien/?name=huynt 1) Let A, B be sets. What does the statement "A is not a subset of B " mean?

Linear Algebra. Min Yan

Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

Lecture 15, 16: Diagonalization

Lecture 15, 16: Diagonalization Motivation: Eigenvalues and Eigenvectors are easy to compute for diagonal matrices. Hence, we would like (if possible) to convert matrix A into a diagonal matrix. Suppose

Linear Algebra Primer

Introduction Linear Algebra Primer Daniel S. Stutts, Ph.D. Original Edition: 2/99 Current Edition: 4//4 This primer was written to provide a brief overview of the main concepts and methods in elementary

Math 21b. Review for Final Exam

Math 21b. Review for Final Exam Thomas W. Judson Spring 2003 General Information The exam is on Thursday, May 15 from 2:15 am to 5:15 pm in Jefferson 250. Please check with the registrar if you have a

c Igor Zelenko, Fall

c Igor Zelenko, Fall 2017 1 18: Repeated Eigenvalues: algebraic and geometric multiplicities of eigenvalues, generalized eigenvectors, and solution for systems of differential equation with repeated eigenvalues

MA 265 FINAL EXAM Fall 2012

MA 265 FINAL EXAM Fall 22 NAME: INSTRUCTOR S NAME:. There are a total of 25 problems. You should show work on the exam sheet, and pencil in the correct answer on the scantron. 2. No books, notes, or calculators

1 9/5 Matrices, vectors, and their applications

1 9/5 Matrices, vectors, and their applications Algebra: study of objects and operations on them. Linear algebra: object: matrices and vectors. operations: addition, multiplication etc. Algorithms/Geometric

6 EIGENVALUES AND EIGENVECTORS

6 EIGENVALUES AND EIGENVECTORS INTRODUCTION TO EIGENVALUES 61 Linear equations Ax = b come from steady state problems Eigenvalues have their greatest importance in dynamic problems The solution of du/dt

Jordan Canonical Form

Jordan Canonical Form Massoud Malek Jordan normal form or Jordan canonical form (named in honor of Camille Jordan) shows that by changing the basis, a given square matrix M can be transformed into a certain

MATHEMATICS 217 NOTES

MATHEMATICS 27 NOTES PART I THE JORDAN CANONICAL FORM The characteristic polynomial of an n n matrix A is the polynomial χ A (λ) = det(λi A), a monic polynomial of degree n; a monic polynomial in the variable

The Eigenvalue Problem: Perturbation Theory

Jim Lambers MAT 610 Summer Session 2009-10 Lecture 13 Notes These notes correspond to Sections 7.2 and 8.1 in the text. The Eigenvalue Problem: Perturbation Theory The Unsymmetric Eigenvalue Problem Just

Conceptual Questions for Review

Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.

and let s calculate the image of some vectors under the transformation T.

Chapter 5 Eigenvalues and Eigenvectors 5. Eigenvalues and Eigenvectors Let T : R n R n be a linear transformation. Then T can be represented by a matrix (the standard matrix), and we can write T ( v) =

EXERCISES ON DETERMINANTS, EIGENVALUES AND EIGENVECTORS. 1. Determinants

EXERCISES ON DETERMINANTS, EIGENVALUES AND EIGENVECTORS. Determinants Ex... Let A = 0 4 4 2 0 and B = 0 3 0. (a) Compute 0 0 0 0 A. (b) Compute det(2a 2 B), det(4a + B), det(2(a 3 B 2 )). 0 t Ex..2. For

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

Linear algebra and applications to graphs Part 1

Linear algebra and applications to graphs Part 1 Written up by Mikhail Belkin and Moon Duchin Instructor: Laszlo Babai June 17, 2001 1 Basic Linear Algebra Exercise 1.1 Let V and W be linear subspaces

A Brief Outline of Math 355

A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP)

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T KEMP) Definition 01 If T (x) = Ax is a linear transformation from R n to R m then Nul (T ) = {x R n : T (x) = 0} = Nul (A) Ran (T ) = {Ax R m : x R n } = {b R m

LinGloss. A glossary of linear algebra

LinGloss A glossary of linear algebra Contents: Decompositions Types of Matrices Theorems Other objects? Quasi-triangular A matrix A is quasi-triangular iff it is a triangular matrix except its diagonal

Eigenvalue and Eigenvector Problems

Eigenvalue and Eigenvector Problems An attempt to introduce eigenproblems Radu Trîmbiţaş Babeş-Bolyai University April 8, 2009 Radu Trîmbiţaş ( Babeş-Bolyai University) Eigenvalue and Eigenvector Problems

Chapter 7. Canonical Forms. 7.1 Eigenvalues and Eigenvectors

Chapter 7 Canonical Forms 7.1 Eigenvalues and Eigenvectors Definition 7.1.1. Let V be a vector space over the field F and let T be a linear operator on V. An eigenvalue of T is a scalar λ F such that there

1. Foundations of Numerics from Advanced Mathematics. Linear Algebra

Foundations of Numerics from Advanced Mathematics Linear Algebra Linear Algebra, October 23, 22 Linear Algebra Mathematical Structures a mathematical structure consists of one or several sets and one or

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra

1.1. Introduction SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear algebra is a specific branch of mathematics dealing with the study of vectors, vector spaces with functions that

Math Linear Algebra Final Exam Review Sheet

Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of