Lecture Note 13: Eigenvalue Problem for Symmetric Matrices

MATH 5330: Computational Methods of Linear Algebra
Xianyi Zeng, Department of Mathematical Sciences, UTEP

1 The Jacobi Algorithm

Let $A$ be real symmetric; then its eigenvalue decomposition is given by:
$$ A = Q\Lambda Q^t, \qquad (1.1) $$
where $Q$ is orthogonal. If we could find this decomposition exactly (at least in exact arithmetic), all the eigenvalues and eigenvectors would be obtained. Unfortunately, we cannot construct an algorithm that accomplishes this task in reasonable time, say cubic in the size of the matrix. The idea of the Jacobi eigenvalue algorithm is instead to find a factorization:
$$ A = QDQ^t, \qquad (1.2) $$
where $Q$ is orthogonal and $D$ is close to diagonal; the hope is that we are able to quantify the difference between the true eigenvalues of $A$ and the diagonal elements of $D$, as well as between the eigenvectors of $A$ and the column vectors of $Q$. Since the target is an approximation to (1.1), the Jacobi algorithm is a combination of the factorization methods and the iterative methods we have seen so far.

Let us begin by finding an orthogonal matrix $Q$ that turns one off-diagonal element of $A$ into zero. Let $A = QBQ^t$, or equivalently $B = Q^tAQ$, such that $b_{ij} = b_{ji} = 0$ for some $i < j$. A good tool of choice is the Givens rotation, and particularly here $Q = G_{ij}(\theta)$ for some angle $\theta$. Let $c = \cos\theta$ and $s = \sin\theta$; then $Q$ and $Q^t$ agree with the identity except in rows and columns $i$ and $j$:
$$ Q = \begin{pmatrix} 1 & & & & \\ & c & & -s & \\ & & \ddots & & \\ & s & & c & \\ & & & & 1 \end{pmatrix}, \qquad Q^t = \begin{pmatrix} 1 & & & & \\ & c & & s & \\ & & \ddots & & \\ & -s & & c & \\ & & & & 1 \end{pmatrix}, $$
where the entries $c$ sit in positions $(i,i)$ and $(j,j)$ and the entries $\pm s$ sit in positions $(i,j)$ and $(j,i)$.

Hence it is not difficult to see that in $A \to B = Q^tAQ$ only the $i$-th and $j$-th rows and the $i$-th and $j$-th columns are changed; in particular, this transformation reads:
$$
\begin{pmatrix}
 & a_{1i} & & a_{1j} & \\
a_{i1} & a_{ii} & \cdots & a_{ij} & a_{in} \\
 & \vdots & & \vdots & \\
a_{j1} & a_{ji} & \cdots & a_{jj} & a_{jn} \\
 & a_{ni} & & a_{nj} &
\end{pmatrix}
\longrightarrow
\begin{pmatrix}
 & ca_{1i}+sa_{1j} & & -sa_{1i}+ca_{1j} & \\
ca_{i1}+sa_{j1} & c^2a_{ii}+cs(a_{ij}+a_{ji})+s^2a_{jj} & \cdots & cs(a_{jj}-a_{ii})-s^2a_{ji}+c^2a_{ij} & ca_{in}+sa_{jn} \\
 & \vdots & & \vdots & \\
-sa_{i1}+ca_{j1} & cs(a_{jj}-a_{ii})-s^2a_{ij}+c^2a_{ji} & \cdots & s^2a_{ii}-cs(a_{ij}+a_{ji})+c^2a_{jj} & -sa_{in}+ca_{jn} \\
 & ca_{ni}+sa_{nj} & & -sa_{ni}+ca_{nj} &
\end{pmatrix}. \qquad (1.3)
$$
In particular:
$$ b_{ij} = cs(a_{jj}-a_{ii}) + (c^2-s^2)a_{ij}. $$
The objective is to have $b_{ij} = 0$ while satisfying $c^2 + s^2 = 1$. It is not difficult to solve this system to obtain:
$$ (4+\beta^2)c^4 - (4+\beta^2)c^2 + 1 = 0, \qquad \beta = (a_{jj}-a_{ii})/a_{ij}, $$
assuming that $a_{ij} \neq 0$ (when $a_{ij} = 0$ we do not need to do anything and simply set $Q = I$). Hence
$$ c^2 = \frac{(4+\beta^2) \pm \sqrt{\beta^2(4+\beta^2)}}{2(4+\beta^2)} = \frac{1}{2} \pm \frac{\beta}{2\sqrt{4+\beta^2}}, \qquad s^2 = \frac{(4+\beta^2) \mp \sqrt{\beta^2(4+\beta^2)}}{2(4+\beta^2)} = \frac{1}{2} \mp \frac{\beta}{2\sqrt{4+\beta^2}}. $$
This leads to two pairs of acceptable solutions, one of which is given by:
$$ c = \sqrt{\frac{1}{2} - \frac{\beta}{2\sqrt{4+\beta^2}}}, \qquad s = \sqrt{\frac{1}{2} + \frac{\beta}{2\sqrt{4+\beta^2}}}. \qquad (1.4) $$
If we were able to use a chain of such Givens rotations to eliminate all the off-diagonal elements one by one, the result would be perfect, since we would reduce $A$ to a similar diagonal matrix at polynomial cost. Unfortunately, a Givens rotation modifies not only the four elements at the intersections of the two rows and two columns, but all the other elements of those rows and columns as well. Thus it is nearly impossible to use such operations alone to find the eigenvalue decomposition of $A$: later Givens rotations will make previously zeroed off-diagonal elements non-zero again!
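
To make (1.4) concrete, here is a minimal sketch (assuming NumPy; the function name jacobi_rotation is ours, not part of the lecture) that computes $c$ and $s$ from the three entries involved and checks that the rotated entry $b_{ij}$ indeed vanishes.

```python
import numpy as np

def jacobi_rotation(a_ii, a_jj, a_ij):
    """Return (c, s) of the Givens rotation that zeroes the (i, j) entry,
    following (1.4); assumes a_ij != 0 (otherwise take c = 1, s = 0)."""
    beta = (a_jj - a_ii) / a_ij
    t = beta / (2.0 * np.sqrt(4.0 + beta**2))
    return np.sqrt(0.5 - t), np.sqrt(0.5 + t)

# Quick check on a random symmetric 2x2 block.
rng = np.random.default_rng(0)
a_ii, a_jj, a_ij = rng.standard_normal(3)
c, s = jacobi_rotation(a_ii, a_jj, a_ij)
print(c * s * (a_jj - a_ii) + (c**2 - s**2) * a_ij)  # b_ij, should be ~1e-16
```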

So how are the Givens rotations useful here? The answer is that every such transform moves some part of the Frobenius norm of $A$ onto the diagonal. Considering the previous example, we have $\|A\|_F = \|B\|_F$ and:
$$ b_{ii}^2 + b_{jj}^2 = a_{ii}^2 + a_{ij}^2 + a_{ji}^2 + a_{jj}^2 - b_{ij}^2 - b_{ji}^2 = a_{ii}^2 + a_{ij}^2 + a_{ji}^2 + a_{jj}^2. $$
Hence the sum of the squares of the diagonal elements of $A$ is increased by $a_{ij}^2 + a_{ji}^2$ after the transformation. This fact motivates us to develop an iterative method such that in each iteration we find the off-diagonal element with the largest (or at least larger-than-average) magnitude and use a Givens rotation to eliminate this element as shown before. In particular, we define a function $E(\cdot)$ on symmetric matrices as:
$$ E(A) = \sum_{i\neq j} a_{ij}^2, \qquad (1.5) $$
and in the Jacobi method we start with $A^{(0)} = A$; whenever $A^{(k)} = [a^{(k)}_{ij}]$, $k \geq 0$, is already constructed, a pair of indices $(i_k, j_k)$, $i_k < j_k$, is identified such that:
$$ \big(a^{(k)}_{i_kj_k}\big)^2 \geq \frac{1}{n(n-1)}E(A^{(k)}). $$
Next we construct the Givens matrix $G_k$ that eliminates $a^{(k)}_{i_kj_k}$ from $A^{(k)}$ and define:
$$ A^{(k+1)} = G_k^t A^{(k)} G_k. $$
This process continues until $E(A^{(k)})$ is smaller than a prescribed tolerance. By construction, we have:
$$ E(A^{(k+1)}) = E(A^{(k)}) - \big(a^{(k)}_{i_kj_k}\big)^2 - \big(a^{(k)}_{j_ki_k}\big)^2 \leq \rho E(A^{(k)}), \qquad \rho = 1 - \frac{2}{n(n-1)} < 1. $$
Thus an a priori estimate is given by:
$$ E(A^{(k)}) \leq \rho^k E(A). \qquad (1.6) $$
In order for this quantity to get below a small number $\varepsilon > 0$, at most
$$ \frac{\ln(\varepsilon/E(A))}{\ln\rho} $$
iterations are required. Noting that $-1/\ln\rho \approx n(n-1)/2 \leq n^2$ and that each iteration requires $O(n)$ flops, the total computational cost to achieve a given threshold for $E(\cdot)$ is $O(n^3)$.
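
The following is a minimal sketch of the resulting iteration, assuming NumPy. For simplicity it picks the off-diagonal entry of largest magnitude (which certainly satisfies the larger-than-average criterion) and forms the Givens matrix explicitly; the function names and interface are ours.

```python
import numpy as np

def off_diagonal_energy(A):
    """E(A): the sum of squares of the off-diagonal entries, as in (1.5)."""
    return np.sum(A**2) - np.sum(np.diag(A)**2)

def jacobi_eigen(A, tol=1e-12, max_iter=10000):
    """Jacobi iteration: returns (D, Q) with A = Q D Q^t (approximately),
    where D is nearly diagonal in the sense that E(D) <= tol."""
    A = np.array(A, dtype=float)    # work on a copy
    n = A.shape[0]
    Q = np.eye(n)
    for _ in range(max_iter):
        if off_diagonal_energy(A) <= tol:
            break
        off = np.abs(A - np.diag(np.diag(A)))
        i, j = np.unravel_index(np.argmax(off), off.shape)
        beta = (A[j, j] - A[i, i]) / A[i, j]
        t = beta / (2.0 * np.sqrt(4.0 + beta**2))
        c, s = np.sqrt(0.5 - t), np.sqrt(0.5 + t)
        G = np.eye(n)               # Givens rotation G_{ij}(theta)
        G[i, i] = G[j, j] = c
        G[i, j], G[j, i] = -s, s
        A = G.T @ A @ G
        Q = Q @ G
    return A, Q

# Compare the diagonal of D with numpy's symmetric eigensolver.
rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
S = (M + M.T) / 2.0
D, Q = jacobi_eigen(S)
print(np.sort(np.diag(D)))
print(np.linalg.eigvalsh(S))
```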

Lastly, we need to relate the error function $E(D)$, where $D = Q^tAQ$, to the quality of the approximation of $D$ to the diagonal matrix $\Lambda$ of eigenvalues of $A$. Let $\hat\Lambda$ be the diagonal part of $D$; then the identity $\hat A = Q\hat\Lambda Q^t$ provides the eigenvalue decomposition of a matrix $\hat A$ that is close to $A$, in the sense that:
$$ \|A - \hat A\|_F = \|QDQ^t - Q\hat\Lambda Q^t\|_F = \|D - \hat\Lambda\|_F = \sqrt{E(D)} \leq \sqrt{\varepsilon}. $$
Our desired bounds are obtained from the next lemma in perturbation theory.

Lemma 1. Let $A$ and $\hat A$ be real symmetric matrices in $\mathbb{R}^{n\times n}$, and let their eigenvalues be $\lambda_1 \leq \cdots \leq \lambda_n$ and $\hat\lambda_1 \leq \cdots \leq \hat\lambda_n$, respectively. Then:
$$ \sum_{i=1}^n (\lambda_i - \hat\lambda_i)^2 \leq \|A - \hat A\|_F^2. \qquad (1.7) $$

The proof is not trivial, and for it we will need a theorem by Birkhoff.

Theorem 1.1 (Birkhoff). A non-negative matrix is called doubly stochastic if all its row sums and column sums are exactly one. A matrix $A \in \mathbb{R}^{n\times n}$ is doubly stochastic if and only if it can be written as:
$$ A = \alpha_1P_1 + \cdots + \alpha_NP_N, \qquad (1.8) $$
where the $\alpha_i \in \mathbb{R}$ are positive numbers such that $\sum_{i=1}^N\alpha_i = 1$; the $P_i$, $1 \leq i \leq N$, are permutation matrices; and $N \leq n^2 - n + 1$.

The sufficiency is obvious, and we omit the full detail of the proof of the necessity. Roughly speaking, the strategy is to remove from $A$ one scaled permutation matrix at a time (another lemma, with an elementary proof that requires the expansion of the characteristic polynomial of $A$, is needed to show this is doable for any doubly stochastic matrix), such that after each removal the remaining part is still a scaled doubly stochastic matrix and has at least one fewer non-zero entry than the matrix before the removal.

As a corollary of Birkhoff's theorem, the minimum of a concave real-valued function on the set of doubly stochastic matrices in $\mathbb{R}^{n\times n}$ is attained at a permutation matrix. Indeed, let $f(\cdot)$ be such a concave function and $A$ be doubly stochastic; then by the expansion (1.8) we see:
$$ f(A) \geq \alpha_1f(P_1) + \alpha_2f(P_2) + \cdots + \alpha_Nf(P_N) \geq \min_{P\ \text{a permutation matrix}} f(P). $$
Now we get back to Lemma 1.

Proof. Consider the eigenvalue decompositions of $A$ and $\hat A$: $A = U\Lambda U^t$ and $\hat A = V\hat\Lambda V^t$, where $U$ and $V$ are orthogonal matrices. Defining $W = V^tU = [w_{ij}]$, we have:
$$ \|A - \hat A\|_F = \|U\Lambda U^t - V\hat\Lambda V^t\|_F = \|W\Lambda - \hat\Lambda W\|_F = \sqrt{\sum_{i,j}(\hat\lambda_i - \lambda_j)^2w_{ij}^2}. $$

Let $H = [h_{ij}]$ be defined by $h_{ij} = w_{ij}^2$; it has the following row sums and column sums:
$$ \forall i:\quad \sum_j h_{ij} = \sum_j\Big(\sum_k v_{ki}u_{kj}\Big)^2 = \sum_j\sum_{k_1}\sum_{k_2} v_{k_1i}u_{k_1j}v_{k_2i}u_{k_2j} = \sum_{k_1}\sum_{k_2} v_{k_1i}v_{k_2i}\sum_j u_{k_1j}u_{k_2j} = \sum_k v_{ki}v_{ki} = 1\,; $$
$$ \forall j:\quad \sum_i h_{ij} = \sum_i\Big(\sum_k v_{ki}u_{kj}\Big)^2 = \sum_i\sum_{k_1}\sum_{k_2} v_{k_1i}u_{k_1j}v_{k_2i}u_{k_2j} = \sum_{k_1}\sum_{k_2} u_{k_1j}u_{k_2j}\sum_i v_{k_1i}v_{k_2i} = \sum_k u_{kj}u_{kj} = 1\,. $$
Thus $H$ is doubly stochastic. Clearly $f(M) = \sum_{i,j}(\hat\lambda_i - \lambda_j)^2m_{ij}$ is a linear function of $M$, hence it is also concave. By the corollary of Birkhoff's theorem we just showed, $f(H) \geq f(P)$ for some permutation matrix $P$. Suppose $P$ is defined by the permutation $\sigma$, i.e. $P = [p_{ij}]$ satisfies $p_{ij} = \delta_{i\sigma(j)}$; then:
$$ f(H) \geq f(P) = \sum_{i,j}(\hat\lambda_i - \lambda_j)^2p_{ij} = \sum_j(\hat\lambda_{\sigma(j)} - \lambda_j)^2. $$
To this end, we have shown that for this permutation $\sigma$:
$$ \sum_i(\hat\lambda_{\sigma(i)} - \lambda_i)^2 \leq \|A - \hat A\|_F^2. $$
The last step we need to show is clear: if $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n$ and $\hat\lambda_1 \leq \hat\lambda_2 \leq \cdots \leq \hat\lambda_n$, then:
$$ \sum_i(\hat\lambda_i - \lambda_i)^2 \leq \sum_i(\hat\lambda_{\sigma(i)} - \lambda_i)^2. $$
In fact, this inequality is true for any permutation $\sigma$, and the general case is proved by contradiction. That is, let $\sigma_0$ be a permutation such that $\sum_i(\hat\lambda_{\sigma(i)} - \lambda_i)^2$ achieves its minimum among all permutations $\sigma$; we want to show $\hat\lambda_{\sigma_0(i)} \leq \hat\lambda_{\sigma_0(j)}$ as long as $\lambda_i < \lambda_j$. Indeed, if this is not the case for some $i \neq j$ with $\lambda_i < \lambda_j$, we construct a permutation $\sigma_1$ such that:
$$ \sigma_1(k) = \begin{cases} \sigma_0(k) & k \neq i,j; \\ \sigma_0(j) & k = i; \\ \sigma_0(i) & k = j. \end{cases} $$
Then
$$ \sum_k(\hat\lambda_{\sigma_0(k)} - \lambda_k)^2 - \sum_k(\hat\lambda_{\sigma_1(k)} - \lambda_k)^2 = (\hat\lambda_{\sigma_0(i)} - \lambda_i)^2 + (\hat\lambda_{\sigma_0(j)} - \lambda_j)^2 - (\hat\lambda_{\sigma_1(i)} - \lambda_i)^2 - (\hat\lambda_{\sigma_1(j)} - \lambda_j)^2 $$
$$ = -2\hat\lambda_{\sigma_0(i)}\lambda_i - 2\hat\lambda_{\sigma_0(j)}\lambda_j + 2\hat\lambda_{\sigma_0(j)}\lambda_i + 2\hat\lambda_{\sigma_0(i)}\lambda_j = 2(\hat\lambda_{\sigma_0(j)} - \hat\lambda_{\sigma_0(i)})(\lambda_i - \lambda_j) > 0, $$
which contradicts the choice of $\sigma_0$.
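
Lemma 1 is easy to probe numerically. The snippet below (a small illustration assuming NumPy, not part of the original note) compares the sorted eigenvalues of a symmetric matrix and of a symmetric perturbation of it against the Frobenius norm of the difference, as in (1.7).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
M = rng.standard_normal((n, n))
A = (M + M.T) / 2.0
E = 1e-3 * rng.standard_normal((n, n))
A_hat = A + (E + E.T) / 2.0

lam = np.linalg.eigvalsh(A)          # sorted ascending: lambda_1 <= ... <= lambda_n
lam_hat = np.linalg.eigvalsh(A_hat)
lhs = np.sum((lam - lam_hat)**2)
rhs = np.linalg.norm(A - A_hat, 'fro')**2
print(lhs <= rhs, lhs, rhs)          # inequality (1.7) of Lemma 1
```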

Lemma 1 provides us confidence in the quality of the eigenvalue estimates of the Jacobi algorithm; it is actually a special case of the Hoffman-Wielandt theorem [1], which deals with the eigenvalues of any two normal complex matrices. We would furthermore like to have a similar result for the eigenvectors. However, expecting the column vectors of $Q$ in $A \approx QDQ^t$ to be good approximations to the ones in the true eigenvalue decomposition is not realistic. One reason is that the set of eigenvectors is not unique (even up to a multiplier of $\pm1$) if $A$ has eigenvalues of multiplicity larger than one. For this reason, perturbation theory deals with the eigenprojections instead: the eigenprojection $P_\lambda$ of an eigenvalue $\lambda$ is the projection onto its eigenspace. Using the resolvent theory of complex analysis, it has been shown that when two matrices are close to each other, the eigenprojections of close-by eigenvalues are also close to each other [2].

2 Rayleigh Quotient Iteration

A symmetric matrix $A \in \mathbb{R}^{n\times n}$ has eigenvalues $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n$, and its spectral norm ($L^2$-norm) is either $|\lambda_n|$ or $|\lambda_1|$. Let us assume for simplicity that $\lambda_n > \max(|\lambda_{n-1}|, |\lambda_1|)$. Suppose $v_i$, $1 \leq i \leq n$, is an orthonormal basis such that $v_i$ is an eigenvector of $\lambda_i$. Then for any vector $x_0$ such that:
$$ x_0 = \sum_i\alpha_iv_i, \qquad \alpha_n \neq 0, $$
and for any positive integer $m$ we have:
$$ A^mx_0 = \sum_i\alpha_i\lambda_i^mv_i \qquad\text{and}\qquad \frac{A^mx_0}{\|A^mx_0\|} \to v_n \quad\text{as } m\to\infty. $$
Indeed, let $y_m = A^mx_0$; then $\|y_m\|^2 = \sum_i\alpha_i^2\lambda_i^{2m}$ and we see:
$$ \frac{\alpha_j\lambda_j^m}{\sqrt{\sum_i\alpha_i^2\lambda_i^{2m}}} = \frac{\alpha_j}{\sqrt{\sum_i\alpha_i^2(\lambda_i/\lambda_j)^{2m}}} \longrightarrow \begin{cases} 0 & j \neq n, \\ 1 & j = n, \end{cases} $$
since in the denominator the term $(\lambda_n/\lambda_j)^{2m} \to \infty$ unless $j = n$. This is known as the power method.

Algorithm 2.1 The Power Method
1: Set $\varepsilon_0 > 0$ and $x_0$ such that $\|x_0\| = 1$
2: for $i = 1,2,\dots$ do
3:   Compute $y_i = Ax_{i-1}$
4:   Compute $x_i = y_i/\|y_i\|$
5:   if $\|x_i - x_{i-1}\| < \varepsilon_0$ then
6:     Break
7:   end if
8: end for
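
A direct transcription of Algorithm 2.1 in NumPy might look as follows; this is a sketch, and returning the Rayleigh quotient of the final iterate as the eigenvalue estimate is a convenience we added.

```python
import numpy as np

def power_method(A, x0, eps0=1e-10, max_iter=1000):
    """Algorithm 2.1: approximate the dominant eigenpair of symmetric A."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(max_iter):
        y = A @ x
        x_new = y / np.linalg.norm(y)
        if np.linalg.norm(x_new - x) < eps0:
            x = x_new
            break
        x = x_new
    return x @ A @ x, x            # Rayleigh quotient and eigenvector estimate

# Example with a matrix whose dominant eigenvalue is positive and simple.
rng = np.random.default_rng(3)
B = rng.standard_normal((5, 5))
S = B.T @ B
lam, v = power_method(S, rng.standard_normal(5))
print(lam, np.linalg.eigvalsh(S)[-1])   # the two values should agree
```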

The power method works for matrices with a simple dominant eigenvalue, and even in this case it does not work for all initial guesses. But if $x_0^tv_n \neq 0$, the iterates converge linearly to an eigenvector of the dominant eigenvalue, with a worst-case rate $\max(|\lambda_{n-1}/\lambda_n|, |\lambda_1/\lambda_n|)$.

If $A$ is non-singular, the dominant eigenvalue of $A^{-1}$ corresponds to the eigenvalue of $A$ that is closest to zero. The inverse iteration method is essentially the power method applied to $A^{-1}$, but without forming $A^{-1}$ explicitly.

Algorithm 2.2 The Inverse Iteration Method
1: Set $\varepsilon_0 > 0$ and $x_0$ such that $\|x_0\| = 1$
2: for $i = 1,2,\dots$ do
3:   Solve $Ay_i = x_{i-1}$
4:   Compute $x_i = y_i/\|y_i\|$
5:   if $\|x_i - x_{i-1}\| < \varepsilon_0$ then
6:     Break
7:   end if
8: end for

As in the power method, let $\lambda_1$ be the eigenvalue of $A$ that is closest to zero and $\lambda_2$ be the second closest; then if $x_0^tv_1 \neq 0$, where $v_1$ is an eigenvector of $\lambda_1$, the inverse iteration method converges linearly to an eigenvector of $\lambda_1$ at a rate no worse than $|\lambda_1/\lambda_2|$.

Both the power method and the inverse iteration method can be applied to a shifted matrix $A - \mu I$ for some real number $\mu$. In the case of the shifted power method, the iteration still converges to an eigenvector of either $\lambda_1$ or $\lambda_n$ of $A$ (that is, $\lambda_1 - \mu$ or $\lambda_n - \mu$ of $A - \mu I$), but at a different rate (Exercise 2). In the case of the shifted inverse iteration method, however, we can choose $\mu$ properly so that the method converges to an eigenvector of almost any eigenvalue of $A$. For example, if $\lambda_k < \lambda_{k+1}$ and we choose $\mu = \lambda_k + \epsilon$ for some small $0 < \epsilon < (\lambda_{k+1} - \lambda_k)/2$, the shifted inverse iteration method converges to an eigenvector of $\lambda_k$ linearly at the worst-case rate $\epsilon/(\lambda_{k+1} - \epsilon - \lambda_k)$ when $\lambda_k$ is a simple eigenvalue. Clearly, the smaller $\epsilon$ is, the better the guaranteed convergence rate.

The problem with the shifted inverse iteration method is that we do not know ahead of time what the eigenvalues are. In the Rayleigh Quotient Iteration (RQI) method, this estimate is given by the Rayleigh quotient $\rho(x) \stackrel{\rm def}{=} (x^tAx)/(x^tx)$. From Algorithm 2.3, we can see that the RQI method is a shifted inverse iteration method with varying shifts; there is actually no evidence that the method was ever used by Lord Rayleigh in his study of the principal eigenvalue of vibrating systems. The sequence $\{x_i\}$ generated by the method is called the Rayleigh sequence.
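
Before turning to Algorithm 2.3, here is a sketch of the shifted inverse iteration just described, assuming NumPy and a dense solve; the only deviation from Algorithm 2.2 is that the stopping test also compares against $-x_{i-1}$ to tolerate a sign flip of the iterate.

```python
import numpy as np

def shifted_inverse_iteration(A, x0, mu, eps0=1e-10, max_iter=500):
    """Algorithm 2.2 applied to A - mu*I: converges (for a reasonable x0)
    to an eigenvector of the eigenvalue of A closest to mu."""
    n = A.shape[0]
    B = A - mu * np.eye(n)
    x = x0 / np.linalg.norm(x0)
    for _ in range(max_iter):
        y = np.linalg.solve(B, x)            # solve (A - mu I) y = x
        x_new = y / np.linalg.norm(y)
        if min(np.linalg.norm(x_new - x), np.linalg.norm(x_new + x)) < eps0:
            x = x_new
            break
        x = x_new
    return x @ A @ x, x                       # Rayleigh quotient and eigenvector

rng = np.random.default_rng(4)
M = rng.standard_normal((6, 6))
S = (M + M.T) / 2.0
lam, v = shifted_inverse_iteration(S, rng.standard_normal(6), mu=0.3)
evals = np.linalg.eigvalsh(S)
print(lam, evals[np.argmin(np.abs(evals - 0.3))])   # eigenvalue of S closest to mu
```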

Algorithm 2.3 The Rayleigh Quotient Iteration Method
1: Set $\varepsilon_0 > 0$ and $x_0$ such that $\|x_0\| = 1$
2: for $i = 1,2,\dots$ do
3:   Compute $\rho_{i-1} = \rho(x_{i-1})$
4:   if $A - \rho_{i-1}I$ is singular then
5:     Solve $(A - \rho_{i-1}I)x_i = 0$ for a unit vector $x_i$
6:     Break
7:   else
8:     Solve $(A - \rho_{i-1}I)y_i = x_{i-1}$
9:   end if
10:  Compute $x_i = y_i/\|y_i\|$
11:  if $\|y_i\| > 1/\varepsilon_0$ then
12:    Break
13:  end if
14: end for

The analysis of Algorithm 2.3 is quite delicate, and the result is given by a theorem of Kahan [3]:

Theorem 2.1 (Kahan). Let $\{x_i\}$ be the Rayleigh sequence generated by any unit vector $x_0$. Then as $i \to \infty$:
1. $\{\rho_i\}$ converges, and either
2. $(\rho_i, x_i) \to (\lambda, x)$ cubically, where $Ax = \lambda x$, or
3. $x_{2i} \to x_+$ and $x_{2i+1} \to x_-$ linearly, where $x_+$ and $x_-$ are the bisectors of a pair of eigenvectors whose eigenvalues have mean $\rho = \lim_{i\to\infty}\rho_i$.

The situation (3) is not stable under perturbations of $x_i$.

The analysis of the RQI is difficult due to its non-stationary nature: the shift $\rho_i$ is different from iteration to iteration. But there is some preliminary analysis that we can do.
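
A minimal NumPy sketch of Algorithm 2.3 follows; instead of testing singularity of $A - \rho_{i-1}I$ explicitly, the sketch catches a failed solve and otherwise relies on the growth test on $\|y_i\|$ from lines 11-13.

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, eps0=1e-12, max_iter=50):
    """Algorithm 2.3: Rayleigh Quotient Iteration for symmetric A."""
    n = A.shape[0]
    x = x0 / np.linalg.norm(x0)
    for _ in range(max_iter):
        rho = x @ A @ x                          # Rayleigh quotient rho(x_{i-1})
        try:
            y = np.linalg.solve(A - rho * np.eye(n), x)
        except np.linalg.LinAlgError:
            break                                # rho is (numerically) an eigenvalue
        x = y / np.linalg.norm(y)
        if np.linalg.norm(y) > 1.0 / eps0:       # residual ~ 1/||y|| is already tiny
            break
    return x @ A @ x, x

rng = np.random.default_rng(5)
M = rng.standard_normal((8, 8))
S = (M + M.T) / 2.0
lam, v = rayleigh_quotient_iteration(S, rng.standard_normal(8))
print(np.linalg.norm(S @ v - lam * v))           # near machine precision in a few steps
```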

Local convergence. If we observe that the Rayleigh sequence converges to a vector $z$, then immediately $\rho_i \to \lambda = \rho(z)$ and $Az = \lambda z$; hence $(\rho_i, x_i)$ converges to an eigenpair $(\lambda, z)$ of $A$. In this case, the convergence occurs at a cubic rate. To see this, let the angle between $x_i$ and $z$ be denoted by $\varphi_i$:
$$ \varphi_i = \arccos(x_i^tz), $$
so that we can write $x_i$ as:
$$ x_i = z\cos\varphi_i + u_i\sin\varphi_i, $$
where $u_i$ is a unit vector in the plane ${\rm span}(x_i, z)$ that is orthogonal to $z$; see Figure 1.

[Figure 1: Representing $x_i$ using $z$ and $u_i$.]

If $\varphi_i = 0$, we just choose $u_i$ as any unit vector that is orthogonal to $z$. According to the algorithm, $\rho_i$ is not an eigenvalue of $A$, hence we have:
$$ (A - \rho_iI)z = (\lambda - \rho_i)z \quad\Longrightarrow\quad (A - \rho_iI)^{-1}z = \frac{1}{\lambda - \rho_i}z, $$
$$ y_{i+1} = (A - \rho_iI)^{-1}x_i = \frac{\cos\varphi_i}{\lambda - \rho_i}z + \sin\varphi_i\,(A - \rho_iI)^{-1}u_i. $$
Noting that:
$$ z^t(A - \rho_iI)^{-1}u_i = u_i^t(A - \rho_iI)^{-1}z = \frac{1}{\lambda - \rho_i}u_i^tz = 0, $$
we see that the second part is parallel to $u_{i+1}$. In particular, from $y_{i+1} = \|y_{i+1}\|x_{i+1} = \|y_{i+1}\|(z\cos\varphi_{i+1} + u_{i+1}\sin\varphi_{i+1})$ we have:
$$ \cos\varphi_{i+1} = \frac{1}{\|y_{i+1}\|}\,\frac{\cos\varphi_i}{\lambda - \rho_i}, \qquad \sin\varphi_{i+1} = \frac{1}{\|y_{i+1}\|}\,\sin\varphi_i\,\|(A - \rho_iI)^{-1}u_i\|, $$
so that:
$$ \tan\varphi_{i+1} = \tan\varphi_i\,(\lambda - \rho_i)\,\|(A - \rho_iI)^{-1}u_i\|. \qquad (2.1) $$
The term $\lambda - \rho_i$ is expected to be small; actually we can compute:
$$ \lambda - \rho_i = \lambda - x_i^tAx_i = \lambda - (z\cos\varphi_i + u_i\sin\varphi_i)^tA(z\cos\varphi_i + u_i\sin\varphi_i) = \lambda - \lambda\cos^2\varphi_i - \sin^2\varphi_i\,\rho(u_i) = \sin^2\varphi_i\,(\lambda - \rho(u_i)). $$
Hence we continue (2.1) to obtain:
$$ \tan\varphi_{i+1} = \tan\varphi_i\,\sin^2\varphi_i\,(\lambda - \rho(u_i))\,\|(A - \rho_iI)^{-1}u_i\| \leq \tan\varphi_i\,\sin^2\varphi_i\,\frac{|\lambda - \rho(u_i)|}{\min_{\lambda_j\neq\lambda}|\lambda_j - \rho_i|}. \qquad (2.2) $$
Here the inequality comes from the fact that $u_i \perp E_\lambda$, the eigenspace of $\lambda$. Following (2.2) and the hypothesis that $x_i \to z$, or equivalently $\varphi_i \to 0$, we see that for large enough $i$, $\varphi_{i+1} \leq O(\varphi_i^3)$, because $|\lambda - \rho(u_i)|$ is bounded from above by $2\lambda_n$ and $\min_{\lambda_j\neq\lambda}|\lambda_j - \rho_i|$ is asymptotically bounded from below by the distance between $\lambda$ and the nearest other eigenvalue of $A$. Note that the preceding estimate does not assume anything about the multiplicity of $\lambda$.
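
The cubic decay of the angle $\varphi_i$ can be observed directly. The following assumed illustration (NumPy, with our own variable names) runs a few RQI steps and prints the angle between each iterate and the eigenvector that the iteration ends up selecting; the angle roughly cubes from one step to the next until it hits machine precision.

```python
import numpy as np

rng = np.random.default_rng(7)
M = rng.standard_normal((10, 10))
A = (M + M.T) / 2.0
n = A.shape[0]

x = rng.standard_normal(n)
x /= np.linalg.norm(x)
iterates = [x]
for _ in range(6):                               # a few RQI steps
    rho = x @ A @ x
    y = np.linalg.solve(A - rho * np.eye(n), x)
    x = y / np.linalg.norm(y)
    iterates.append(x)

# Compare every iterate against the eigenvector closest to the final shift.
evals, evecs = np.linalg.eigh(A)
z = evecs[:, np.argmin(np.abs(evals - iterates[-1] @ A @ iterates[-1]))]
for xi in iterates:
    print(np.arccos(min(1.0, abs(xi @ z))))      # the angle phi_i (sign-insensitive)
```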

Finally, it is not difficult to use the Taylor series expansion of the cosine to see that for small $\varphi_i$:
$$ \|x_i - z\|^2 = (x_i - z)^t(x_i - z) = 2 - 2\cos\varphi_i \approx \varphi_i^2, $$
hence $\varphi_{i+1} \leq O(\varphi_i^3)$ indicates cubic convergence of $x_i$ to the eigenvector $z$. This is what makes the RQI method attractive: if the Rayleigh sequence converges, it converges very fast (usually in only a few iterations for practical purposes). Hence the typical cost of RQI is on the same scale as a linear solve with $A$, which is at most cubic in $n$.

Measure of accuracy. To show the part of Kahan's theorem stating that the Rayleigh sequence will almost always converge, we need a measure of the iterates that typically decays as the loop continues. One useful measure is given by the residual vector $r_i = (A - \rho_iI)x_i$, and we can show that:
$$ \|r_{i+1}\| \leq \|r_i\|, $$
where equality holds if and only if $\rho_i = \rho_{i+1}$ and $x_i$ is an eigenvector of $(A - \rho_iI)^2$. To prove this claim, we first note that $\rho(x)$ solves the minimization problem:
$$ \min_\mu \|(A - \mu I)x\|^2. $$
This is not difficult to see because the target function is quadratic in $\mu$. Now we compute:
$$ \|r_{i+1}\| = \|(A - \rho_{i+1}I)x_{i+1}\| \leq \|(A - \rho_iI)x_{i+1}\|. $$
Because $(A - \rho_iI)x_{i+1} = \|y_{i+1}\|^{-1}(A - \rho_iI)y_{i+1}$ is a multiple of $x_i$, we have $\|(A - \rho_iI)x_{i+1}\| = |x_i^t(A - \rho_iI)x_{i+1}|$. Thus:
$$ \|r_{i+1}\| \leq |x_i^t(A - \rho_iI)x_{i+1}| \leq \|x_{i+1}\|\,\|(A - \rho_iI)x_i\| = \|r_i\|, $$
where we used the Cauchy-Schwarz inequality. To see when equality holds, we first need $\rho_{i+1} = \rho_i$, so that $\rho_i$ solves the minimization problem above, and then we need $x_{i+1}$ to be parallel to $(A - \rho_iI)x_i$. The latter condition is equivalent to saying that for some $\alpha \neq 0$:
$$ (A - \rho_iI)x_i = \alpha x_{i+1} = \frac{\alpha}{\|y_{i+1}\|}\,y_{i+1} = \frac{\alpha}{\|y_{i+1}\|}\,(A - \rho_iI)^{-1}x_i, $$
hence $x_i$ is an eigenvector of $(A - \rho_iI)^2$.

Global convergence. The last missing part of Kahan's theorem is to show that $\rho_i$ always converges regardless of the initial guess $x_0$, and to discuss the behavior of the Rayleigh sequence. By the choice of our measure $\|r_i\|$ and the previous results, there is a $\tau \geq 0$ such that:
$$ \|r_i\| \to \tau \quad\text{as } i\to\infty. $$
Since all pairs $(\rho_i, x_i)$ are confined to the compact set $[-\rho(A), \rho(A)]\times S^{n-1}$, where $\rho(A)$ is the spectral radius of $A$ and $S^{n-1}$ is the surface of the unit sphere of $\mathbb{R}^n$, they must have an accumulation point $(\hat\rho, \hat x)$. Now we consider two cases.

If $\tau = 0$, we take the limit along the subsequence of $(\rho_i, x_i)$ that converges to $(\hat\rho, \hat x)$ to obtain:
$$ (A - \hat\rho I)\hat x = 0, $$
hence $\hat x$ is an eigenvector of the eigenvalue $\hat\rho$ of $A$.

Now we revisit the proof in the local convergence part: if it is not known a priori that $(\lambda, z)$ is the limit of $(\rho_i, x_i)$, but $(\lambda, z)$ is merely some eigenpair of $A$, we can follow the same analysis to show that if for some $j$ the pair $(\rho_j, x_j)$ gets close enough to $(\lambda, z)$, then the rest of the sequence can do nothing but converge to $(\lambda, z)$. This argument fits perfectly here: once we have a subsequence of $(\rho_i, x_i)$ converging to the eigenpair $(\hat\rho, \hat x)$, the whole sequence must also converge to it, and the rate is cubic.

The second case $\tau > 0$ is more complicated, and it involves several steps that we will only describe briefly. First, $\|r_{i+1}\|/\|r_i\| \to \tau/\tau = 1$, and following the same argument as before we can see:
$$ \rho_{i+1} - \rho_i \to 0. $$
This typically does not imply the convergence of $\rho_i$, but it leads to the following convergence results:
$$ \Big[(A - \rho_iI)^2 - \|r_i\|\,(r_i^tx_{i+1})\,I\Big]x_i \to 0 \qquad\text{and}\qquad \frac{r_i^tx_{i+1}}{\|r_i\|} \to 1. $$
Combining the two, any accumulation point $\hat\rho$ of the sequence $\{\rho_i\}$ must satisfy
$$ \det\big[(A - \hat\rho I)^2 - \tau^2I\big] = 0, $$
regardless of the convergence of the Rayleigh sequence. Thus $\hat\rho = \lambda_i \pm \tau$ for some eigenvalue $\lambda_i$ of $A$; this implies that there are at most $2n$ possible values for $\hat\rho$, and $\rho_{i+1} - \rho_i \to 0$ then implies that $\{\rho_i\}$ converges to the unique accumulation point $\hat\rho$. Turning our attention to the Rayleigh sequence, any accumulation point $\hat x$ of $\{x_i\}$ must be an eigenvector of $(A - \hat\rho I)^2$ without being an eigenvector of $A$, and the only way this can happen is exactly as described in the third bullet of Theorem 2.1. The instability of $x_+$ and $x_-$ comes from the fact that both of them are saddle points of the residual norm $\|r\|$.

It is in general difficult to tell which eigenvalue the RQI method will converge to just by looking at $x_0$. However, no matter what solution the method yields, the remaining eigenvalues and eigenvectors can be found by the technique called deflation. Deflation essentially means that we trade one eigenvalue of $A$ for another one. For example, if $(\lambda, v)$ is an eigenpair of $A$ and $\|v\| = 1$, we can compute:
$$ B = A - \lambda vv^t, \qquad (2.3) $$
which trades the eigenvalue $\lambda$ of $A$ for $0$ (Exercise 3). The deflation technique can be combined with the shifted power method or the RQI method to find all the eigenvalues of $A$.

As a last comment, the major burden of the inverse iteration method in general, and of the RQI method in particular, is the linear solve with $A$. In practice, it is always beneficial to first reduce $A$ to a similar matrix that is easier to invert. For example, as we will learn in the next lecture, although it is not possible to use Givens rotations to put $A$ in diagonal form, we can always step back and transform $A$ into a tridiagonal matrix. Then we can apply the iterative methods of this section to the resulting tridiagonal matrix and achieve much faster computations.
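
A quick numerical illustration of the deflation step (2.3), assuming NumPy: after subtracting $\lambda vv^t$, the chosen eigenvalue is replaced by $0$ while the rest of the spectrum is untouched, as claimed in Exercise 3.

```python
import numpy as np

rng = np.random.default_rng(6)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2.0

evals, V = np.linalg.eigh(A)
lam, v = evals[-1], V[:, -1]            # deflate the largest eigenvalue
B = A - lam * np.outer(v, v)            # deflation (2.3)

print(evals)                            # spectrum of A
print(np.linalg.eigvalsh(B))            # same spectrum, with lam replaced by 0
```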

3 Subspace Approximations

The subtraction (2.3) is not the only deflation technique. For example, if we construct an orthogonal matrix $P$ whose first column is $v$, then:
$$ P^tAP = \begin{pmatrix} \lambda & 0 \\ 0 & A^{(1)} \end{pmatrix}, $$
and we reduce the size of the problem from $n$ to $n-1$. This is a special case of a much larger category of methods, called subspace methods. In particular, we call a subspace $V \subseteq \mathbb{R}^n$ an invariant subspace of $A$ if $AV \subseteq V$. Clearly, any space that is spanned by a collection of eigenvectors of $A$ is an invariant subspace of $A$. Motivated by this observation, people construct lower-dimensional subspaces that are almost invariant under the action of $A$, and develop algorithms to find eigenvectors of $A$ based on these subspaces. For example, the Krylov subspaces can be utilized to construct eigenvalue algorithms for very large systems [3, 4], where factorization or the inverse iteration method (which requires solving a linear system with $A$) is not realistic.

Exercises

Exercise 1. Let $A = vv^t$ be a rank-one matrix, where $v \neq 0$. Show that $A$ has only two eigenvalues and find the eigenspaces for these two eigenvalues.

Exercise 2. Let $A$ be any real symmetric matrix with eigenvalues $\lambda_1 < \lambda_2 \leq \lambda_3 \leq \cdots \leq \lambda_{n-1} < \lambda_n$. How do you choose the shift value $\mu$ so that, when the initial guess is not too bad, the shifted power method converges to an eigenvector of: (1) $\lambda_1$ of $A$, and (2) $\lambda_n$ of $A$? What is the worst-case convergence rate in both cases (see the discussion below Algorithm 2.1)?

Exercise 3. Let $A$ be real symmetric and $\lambda$ be an eigenvalue. Suppose $v$ satisfies $\|v\| = 1$ and $Av = \lambda v$. Show that: (1) the matrix $B = A - \lambda vv^t$ has a spectrum that differs from the spectrum of $A$ in only one element: the former has $0$ where the latter has $\lambda$; and (2) both $A$ and $B$ can be diagonalized by the same orthonormal basis, that is, there exists an orthogonal matrix $Q$ such that both $Q^tAQ$ and $Q^tBQ$ are diagonal.

References

[1] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 2nd edition, 2012.
[2] Tosio Kato. Perturbation Theory for Linear Operators. Classics in Mathematics. Springer, 2nd edition.
[3] Beresford N. Parlett. The Symmetric Eigenvalue Problem, volume 20 of Classics in Applied Mathematics. Society for Industrial and Applied Mathematics, Philadelphia, PA, 1998. Reprinted from the Prentice-Hall Series in Computational Mathematics, 1980.
[4] Yousef Saad. Numerical Methods for Large Eigenvalue Problems. Society for Industrial and Applied Mathematics, 2nd edition.
