Eigenvalue Problems
1 Eigenvalue Problems
Eigenvalue problems occur in many areas of science and engineering, such as structural analysis. Eigenvalues are also important in analyzing numerical methods. Theory and algorithms apply to complex matrices as well as real matrices. With complex matrices, use conjugate transpose, A^H, instead of usual transpose, A^T.
2 Eigenvalues and Eigenvectors
Standard eigenvalue problem: given n × n matrix A, find scalar λ and nonzero vector x such that Ax = λx. λ is eigenvalue, and x is corresponding eigenvector. Spectrum = λ(A) = set of eigenvalues of A. Spectral radius = ρ(A) = max{|λ| : λ ∈ λ(A)}.
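As a quick numerical check of the definition, a NumPy sketch (the 2 × 2 symmetric matrix is the example used later in these notes): each computed pair satisfies Ax = λx up to rounding.

```python
import numpy as np

# Symmetric example matrix with eigenvalues 2 and 4
A = np.array([[3.0, -1.0],
              [-1.0, 3.0]])
eigvals, eigvecs = np.linalg.eig(A)

for lam, x in zip(eigvals, eigvecs.T):
    # Residual ||Ax - lambda*x|| should be near machine precision
    assert np.linalg.norm(A @ x - lam * x) < 1e-12

print(np.sort(eigvals.real))  # [2. 4.]
```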
3 Geometric Interpretation
Matrix expands or shrinks any vector lying in direction of eigenvector by scalar factor. Expansion or contraction factor given by corresponding eigenvalue λ. Eigenvalues and eigenvectors decompose complicated behavior of general linear transformation into simpler actions.
4 Examples: Eigenvalues and Eigenvectors
A = [1 0; 0 2]: λ_1 = 1, x_1 = [1, 0]^T, λ_2 = 2, x_2 = [0, 1]^T
A = [1 1; 0 2]: λ_1 = 1, x_1 = [1, 0]^T, λ_2 = 2, x_2 = [1, 1]^T
A = [3 −1; −1 3]: λ_1 = 2, x_1 = [1, 1]^T, λ_2 = 4, x_2 = [1, −1]^T
5 Examples Continued
A = [1.5 0.5; 0.5 1.5]: λ_1 = 2, x_1 = [1, 1]^T, λ_2 = 1, x_2 = [−1, 1]^T
A = [0 1; −1 0]: λ_1 = i, x_1 = [1, i]^T, λ_2 = −i, x_2 = [i, 1]^T, where i = √−1
6 Characteristic Polynomial
Equation Ax = λx equivalent to (A − λI)x = o, which has nonzero solution x if, and only if, its matrix is singular. Eigenvalues of A are roots λ_i of characteristic polynomial det(A − λI) = 0, of degree n in λ. Fundamental Theorem of Algebra implies that n × n matrix A always has n eigenvalues, but they need be neither distinct nor real. Complex eigenvalues of real matrix occur in complex conjugate pairs: if α + iβ is eigenvalue of real matrix, then so is α − iβ, where i = √−1.
7 Example: Characteristic Polynomial
Consider characteristic polynomial of previous example matrix:
det([3 −1; −1 3] − λ[1 0; 0 1]) = det([3−λ −1; −1 3−λ]) = (3 − λ)(3 − λ) − (−1)(−1) = λ^2 − 6λ + 8 = 0,
so eigenvalues given by λ = (6 ± √(36 − 32)) / 2, or λ_1 = 2, λ_2 = 4.
8 Companion Matrix
Monic polynomial p(λ) = c_0 + c_1 λ + ⋯ + c_{n−1} λ^{n−1} + λ^n is characteristic polynomial of companion matrix
C_n = [0 0 ⋯ 0 −c_0; 1 0 ⋯ 0 −c_1; 0 1 ⋯ 0 −c_2; ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 ⋯ 1 −c_{n−1}]
Roots of polynomial of degree > 4 cannot always be computed in finite number of steps. So in general, computation of eigenvalues of matrices of order > 4 requires (theoretically infinite) iterative process.
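A small NumPy sketch confirming the correspondence (the polynomial λ^2 − 6λ + 8 is from the example above; the helper name `companion` is ours): the eigenvalues of the companion matrix are the polynomial's roots.

```python
import numpy as np

def companion(c):
    """Companion matrix of monic p(x) = c[0] + c[1] x + ... + x^n."""
    n = len(c)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)    # ones on the subdiagonal
    C[:, -1] = -np.asarray(c)     # last column holds -c_0, ..., -c_{n-1}
    return C

# p(x) = x^2 - 6x + 8 has roots 2 and 4
C = companion([8.0, -6.0])
print(np.sort(np.linalg.eigvals(C).real))  # [2. 4.]
```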
9 Characteristic Polynomial, continued
Computing eigenvalues using characteristic polynomial not recommended because of work in computing coefficients of characteristic polynomial, sensitivity of coefficients of characteristic polynomial, and work in solving for roots of characteristic polynomial. Characteristic polynomial is powerful theoretical tool but usually not useful computationally.
10 Example: Characteristic Polynomial
Consider A = [1 ε; ε 1], where ε is positive number slightly smaller than √ε_mach. Exact eigenvalues of A are 1 + ε and 1 − ε. Computing characteristic polynomial in floating-point arithmetic, det(A − λI) = λ^2 − 2λ + (1 − ε^2) = λ^2 − 2λ + 1, which has 1 as double root. Thus, eigenvalues cannot be resolved by this method even though they are distinct in working precision.
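The failure can be reproduced directly (a NumPy sketch; eps = 1e-9 is an arbitrary choice below √ε_mach ≈ 1.5e-8, small enough that 1 − eps² rounds to exactly 1 in double precision).

```python
import numpy as np

eps = 1.0e-9                         # below sqrt(eps_mach), well above eps_mach
coeffs = [1.0, -2.0, 1.0 - eps**2]   # fl(1 - eps^2) == 1.0 exactly

print(np.roots(coeffs))              # double root at 1: eigenvalues lost

A = np.array([[1.0, eps],
              [eps, 1.0]])
print(np.sort(np.linalg.eigvals(A).real))  # eig still resolves 1 - eps, 1 + eps
```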
11 Multiplicity and Diagonalizability
Multiplicity is number of times root appears when polynomial written as product of linear factors. Eigenvalue of multiplicity 1 is simple. Defective matrix has eigenvalue of multiplicity k > 1 with fewer than k linearly independent corresponding eigenvectors. Nondefective matrix A has n linearly independent eigenvectors, so it is diagonalizable: X^{−1}AX = D, where X is nonsingular matrix of eigenvectors.
12 Eigenspaces and Invariant Subspaces
Eigenvectors can be scaled arbitrarily: if Ax = λx, then A(γx) = λ(γx) for any scalar γ, so γx is also eigenvector corresponding to λ. Eigenvectors usually normalized by requiring some norm of eigenvector to be 1. Eigenspace = S_λ = {x : Ax = λx}. Subspace S of R^n (or C^n) is invariant if AS ⊆ S. For eigenvectors x_1, …, x_p, span([x_1 ⋯ x_p]) is invariant subspace.
13 Properties of Matrices
Matrix properties relevant to eigenvalue problems:
Property      Definition
Diagonal      a_ij = 0 for i ≠ j
Tridiagonal   a_ij = 0 for |i − j| > 1
Triangular    a_ij = 0 for i > j (upper); a_ij = 0 for i < j (lower)
Hessenberg    a_ij = 0 for i > j + 1 (upper); a_ij = 0 for i < j − 1 (lower)
Orthogonal    A^T A = A A^T = I
Unitary       A^H A = A A^H = I
Symmetric     A = A^T
Hermitian     A = A^H
Normal        A^H A = A A^H
14 Examples: Matrix Properties
Transpose: [1 2; 3 4]^T = [1 3; 2 4]
Conjugate transpose: [1+i 1+2i; 2−i 2−2i]^H = [1−i 2+i; 1−2i 2+2i]
Symmetric: [1 2; 2 3]
Nonsymmetric: [1 3; 2 4]
15 Examples Continued
Hermitian: [1 1+i; 1−i 2]
NonHermitian: [1 1+i; 1+i 2]
Orthogonal: [0 1; 1 0], [−1 0; 0 −1], [√2/2 √2/2; −√2/2 √2/2]
Unitary: [i√2/2 √2/2; −√2/2 −i√2/2]
Nonorthogonal: [1 1; 1 2]
16 Examples Continued
Normal: [1 2 0; 0 1 2; 2 0 1]
Nonnormal: [1 1; 0 1]
17 Properties of Eigenvalue Problems
Properties of eigenvalue problem affecting choice of algorithm and software: Are all of eigenvalues needed, or only a few? Are only eigenvalues needed, or are corresponding eigenvectors also needed? Is matrix real or complex? Is matrix relatively small and dense, or large and sparse? Does matrix have any special properties, such as symmetry, or is it general matrix?
18 Conditioning of Eigenvalue Problems
Condition of eigenvalue problem is sensitivity of eigenvalues and eigenvectors to changes in matrix. Conditioning of eigenvalue problem not same as conditioning of solution to linear system for same matrix. Different eigenvalues and eigenvectors not necessarily equally sensitive to perturbations in matrix.
19 Conditioning of Eigenvalues
If µ is eigenvalue of perturbation A + E of nondefective matrix A, then |µ − λ_k| ≤ cond_2(X) ‖E‖_2, where λ_k is closest eigenvalue of A to µ and X is nonsingular matrix of eigenvectors of A. Absolute condition number of eigenvalues is condition number of matrix of eigenvectors with respect to solving linear equations. Eigenvalues may be sensitive if eigenvectors nearly linearly dependent (i.e., matrix nearly defective). For normal matrix (A^H A = A A^H), eigenvectors orthogonal, so eigenvalues well-conditioned.
20 Conditioning of Eigenvalues, continued
If (A + E)(x + Δx) = (λ + Δλ)(x + Δx), where λ is simple eigenvalue of A, then
|Δλ| ⪅ (‖y‖_2 ‖x‖_2 / |y^H x|) ‖E‖_2 = (1 / cos(θ)) ‖E‖_2,
where x and y are corresponding right and left eigenvectors and θ is angle between them. For symmetric or Hermitian matrix, right and left eigenvectors are same, so cos(θ) = 1 and eigenvalues inherently well-conditioned. Eigenvalues of nonnormal matrices may be sensitive. For multiple or closely clustered eigenvalues, corresponding eigenvectors may be sensitive.
21 Problem Transformations
Shift: If Ax = λx and σ is any scalar, then (A − σI)x = (λ − σ)x, so eigenvalues of shifted matrix are shifted eigenvalues of original matrix.
Inversion: If A is nonsingular and Ax = λx with x ≠ o, then λ ≠ 0 and A^{−1}x = (1/λ)x, so eigenvalues of inverse are reciprocals of eigenvalues of original matrix.
Powers: If Ax = λx, then A^k x = λ^k x, so eigenvalues of power of matrix are same power of eigenvalues of original matrix.
Polynomial: If Ax = λx and p(t) is polynomial, then p(A)x = p(λ)x, so eigenvalues of polynomial in matrix are values of polynomial evaluated at eigenvalues of original matrix.
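The four transformations are easy to verify numerically (a NumPy sketch on the symmetric example matrix with eigenvalues 2 and 4):

```python
import numpy as np

A = np.array([[3.0, -1.0],
              [-1.0, 3.0]])          # eigenvalues 2 and 4
I = np.eye(2)
ev = lambda M: np.sort(np.linalg.eigvals(M).real)

print(ev(A - 1.0 * I))               # shift by 1:   [1. 3.]
print(ev(np.linalg.inv(A)))          # inversion:    [0.25 0.5]
print(ev(A @ A))                     # square:       [4. 16.]
print(ev(A @ A - 3.0 * A))           # p(t) = t^2 - 3t at 2, 4: [-2. 4.]
```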
22 Similarity Transformation
B is similar to A if there is nonsingular T such that B = T^{−1}AT. Then
By = λy ⟹ T^{−1}ATy = λy ⟹ A(Ty) = λ(Ty),
so A and B have same eigenvalues, and if y is eigenvector of B, then x = Ty is eigenvector of A. Similarity transformations preserve eigenvalues, and eigenvectors are easily recovered.
23 Example: Similarity Transformation
From eigenvalues and eigenvectors for previous example,
[3 −1; −1 3][1 1; 1 −1] = [1 1; 1 −1][2 0; 0 4],
and hence
(1/2)[1 1; 1 −1][3 −1; −1 3][1 1; 1 −1] = [2 0; 0 4],
so original matrix is similar to diagonal matrix, and eigenvectors form columns of similarity transformation matrix.
24 Diagonal Form
Eigenvalues of diagonal matrix are diagonal entries, and eigenvectors are columns of identity matrix. Diagonal form desirable in simplifying eigenvalue problems for general matrices by similarity transformations. But not all matrices diagonalizable by similarity transformation. Closest can get, in general, is Jordan form, which is nearly diagonal but may have some nonzero entries on first superdiagonal, corresponding to one or more multiple eigenvalues.
25 Triangular Form
Any matrix can be transformed into triangular (Schur) form by similarity, and eigenvalues of triangular matrix are diagonal entries. Eigenvectors of triangular matrix less obvious, but still straightforward to compute. If
A − λI = [U_11 u U_13; o^T 0 v^T; O o U_33]
is triangular, then U_11 y = u can be solved for y, so that
x = [y; −1; o]
is corresponding eigenvector.
26 Block Triangular Form
If
A = [A_11 A_12 ⋯ A_1p; O A_22 ⋯ A_2p; ⋮ ⋱ ⋮; O ⋯ O A_pp]
with square diagonal blocks, then λ(A) = ⋃_{j=1}^p λ(A_jj), so eigenvalue problem breaks into p smaller eigenvalue problems. Real Schur form has 1 × 1 diagonal blocks corresponding to real eigenvalues and 2 × 2 diagonal blocks corresponding to pairs of complex conjugate eigenvalues.
27 Forms Attainable by Similarity
A                     T            B
distinct eigenvalues  nonsingular  diagonal
real symmetric        orthogonal   real diagonal
complex Hermitian     unitary      real diagonal
normal                unitary      diagonal
arbitrary real        orthogonal   real block triangular (real Schur)
arbitrary             unitary      upper triangular (Schur)
arbitrary             nonsingular  almost diagonal (Jordan)
Given matrix A with indicated property, matrices B and T exist with indicated properties such that B = T^{−1}AT. When B is diagonal or triangular, its diagonal entries are eigenvalues. When B is diagonal, columns of T are eigenvectors.
28 Power Iteration
Simplest method for computing one eigenvalue-eigenvector pair is power iteration, which repeatedly multiplies matrix times initial starting vector. Assume A has unique eigenvalue of maximum modulus, say λ_1, with corresponding eigenvector v_1. Then, starting from nonzero vector x_0, iteration scheme x_k = Ax_{k−1} converges to multiple of eigenvector v_1 corresponding to dominant eigenvalue λ_1.
29 Convergence of Power Iteration
To see why power iteration converges to dominant eigenvector, express starting vector x_0 as linear combination x_0 = Σ_{i=1}^n α_i v_i, where v_i are eigenvectors of A. Then
x_k = Ax_{k−1} = A^2 x_{k−2} = ⋯ = A^k x_0 = Σ_{i=1}^n λ_i^k α_i v_i = λ_1^k (α_1 v_1 + Σ_{i=2}^n (λ_i/λ_1)^k α_i v_i).
Since |λ_i/λ_1| < 1 for i > 1, successively higher powers go to zero, leaving only component corresponding to v_1.
30 Example: Power Iteration
Ratio of values of given component of x_k from one iteration to next converges to dominant eigenvalue λ_1. For example, if A = [1.5 0.5; 0.5 1.5] and x_0 = [0, 1]^T, then ratio of successive values of each component of resulting sequence of iterates converges to dominant eigenvalue, which is 2.
31 Limitations of Power Iteration
Power iteration can fail for various reasons: Starting vector may have no component in dominant eigenvector v_1 (i.e., α_1 = 0); not problem in practice because rounding error usually introduces such component in any case. There may be more than one eigenvalue having same (maximum) modulus, in which case iteration may converge to linear combination of corresponding eigenvectors. For real matrix and starting vector, iteration can never converge to complex vector.
32 Normalized Power Iteration
Geometric growth of components at each iteration risks eventual overflow (or underflow if |λ_1| < 1). Approximate eigenvector should be normalized at each iteration, say, by requiring its largest component to be 1 in modulus, giving iteration scheme
y_k = Ax_{k−1},  x_k = y_k / ‖y_k‖_∞
With normalization, ‖y_k‖_∞ → |λ_1| and x_k → v_1 / ‖v_1‖_∞.
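A minimal NumPy sketch of normalized power iteration with the ∞-norm scaling described above (matrix and starting vector follow the earlier example; the tolerance and iteration limit are arbitrary choices):

```python
import numpy as np

def power_iteration(A, x0, maxiter=100, tol=1e-12):
    x = x0 / np.linalg.norm(x0, np.inf)
    lam = 0.0
    for _ in range(maxiter):
        y = A @ x
        lam_new = np.linalg.norm(y, np.inf)  # converges to |lambda_1|
        x = y / lam_new
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, x

A = np.array([[1.5, 0.5],
              [0.5, 1.5]])                   # eigenvalues 1 and 2
lam, x = power_iteration(A, np.array([0.0, 1.0]))
print(lam)                                   # dominant eigenvalue, 2
```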
33 Example: Normalized Power Iteration
Repeating previous example with normalized scheme, sequence of values ‖y_k‖_∞ converges to dominant eigenvalue 2, and vectors x_k converge to corresponding normalized eigenvector.
34 Geometric Interpretation
Behavior of power iteration depicted geometrically: initial vector x_0 = [0, 1]^T contains equal components in two eigenvectors (shown by dashed arrows in original figure). Repeated multiplication by A causes component in first eigenvector (corresponding to larger eigenvalue, 2) to dominate, and hence sequence of vectors converges to that eigenvector.
35 Power Iteration with Shift
Convergence rate of power iteration depends on ratio |λ_2 / λ_1|, where λ_2 is eigenvalue having second largest modulus. May be possible to choose shift, A − σI, such that
|λ_2 − σ| / |λ_1 − σ| < |λ_2| / |λ_1|,
so convergence is accelerated. Shift must then be added to result to obtain eigenvalue of original matrix.
36 Example: Power Iteration with Shift
In earlier example, for instance, if we pick shift of σ = 1 (which is equal to other eigenvalue), then ratio becomes zero and method converges in one iteration. In general, we would not be able to make such fortuitous choice, but shifts can still be extremely useful in some contexts, as we will see later.
37 Inverse Iteration
If smallest eigenvalue of matrix required rather than largest, can make use of fact that eigenvalues of A^{−1} are reciprocals of those of A, so smallest eigenvalue of A is reciprocal of largest eigenvalue of A^{−1}. This leads to inverse iteration scheme
Ay_k = x_{k−1},  x_k = y_k / ‖y_k‖_∞
which is equivalent to power iteration applied to A^{−1}. Inverse of A not computed explicitly, but factorization of A used to solve system of linear equations at each iteration.
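A sketch of inverse iteration in NumPy (for simplicity a fresh linear solve is done each step rather than reusing one factorization, and the final eigenvalue estimate uses the Rayleigh quotient discussed later in these notes):

```python
import numpy as np

def inverse_iteration(A, x0, maxiter=100):
    x = x0 / np.linalg.norm(x0, np.inf)
    for _ in range(maxiter):
        y = np.linalg.solve(A, x)        # y = A^{-1} x without forming A^{-1}
        x = y / np.linalg.norm(y, np.inf)
    lam = (x @ A @ x) / (x @ x)          # eigenvalue estimate for converged x
    return lam, x

A = np.array([[1.5, 0.5],
              [0.5, 1.5]])               # eigenvalues 1 and 2
lam, x = inverse_iteration(A, np.array([0.0, 1.0]))
print(lam)                               # smallest eigenvalue, 1
```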
38 Inverse Iteration, continued
Inverse iteration converges to eigenvector corresponding to smallest eigenvalue of A. Eigenvalue obtained is dominant eigenvalue of A^{−1}, and hence its reciprocal is smallest eigenvalue of A in modulus.
39 Example: Inverse Iteration
Applying inverse iteration to previous example to compute smallest eigenvalue yields sequence converging to eigenvalue 1 (which is its own reciprocal in this case).
40 Inverse Iteration with Shift
As before, shifting strategy, working with A − σI for some scalar σ, can greatly improve convergence. Inverse iteration particularly useful for computing eigenvector corresponding to approximate eigenvalue, since it converges rapidly when applied to shifted matrix A − λI, where λ is approximate eigenvalue. Inverse iteration also useful for computing eigenvalue closest to given value β, since if β is used as shift, then desired eigenvalue corresponds to smallest eigenvalue of shifted matrix.
41 Rayleigh Quotient
Given approximate eigenvector x for real matrix A, determining best estimate for corresponding eigenvalue λ can be considered as n × 1 linear least squares approximation problem xλ ≅ Ax. From normal equation x^T x λ = x^T A x, least squares solution is given by
λ = x^T A x / x^T x
This quantity, known as Rayleigh quotient, has many useful properties.
42 Example: Rayleigh Quotient
Rayleigh quotient can accelerate convergence of iterative methods such as power iteration, since Rayleigh quotient x_k^T A x_k / x_k^T x_k gives better approximation to eigenvalue at iteration k than does basic method alone. For previous example using power iteration, value of Rayleigh quotient at each iteration converges much more rapidly to dominant eigenvalue 2 than does ratio of successive iterates.
43 Rayleigh Quotient Iteration
Given approximate eigenvector, Rayleigh quotient yields good estimate for corresponding eigenvalue. Conversely, inverse iteration converges rapidly to eigenvector if approximate eigenvalue is used as shift, with one iteration often sufficing. These two ideas combined in Rayleigh quotient iteration
σ_k = x_k^T A x_k / x_k^T x_k,
(A − σ_k I) y_{k+1} = x_k,
x_{k+1} = y_{k+1} / ‖y_{k+1}‖_∞
starting from given nonzero vector x_0. Scheme especially effective for symmetric matrices and usually converges very rapidly.
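Rayleigh quotient iteration fits in a few lines of NumPy (a sketch; the starting vector, iteration limit, and tolerance are arbitrary choices, and 2-norm normalization is used so the Rayleigh quotient simplifies):

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, maxiter=20, tol=1e-14):
    x = x0 / np.linalg.norm(x0)
    n = len(x0)
    sigma = x @ A @ x
    for _ in range(maxiter):
        sigma = x @ A @ x                    # Rayleigh quotient (x has unit norm)
        try:
            y = np.linalg.solve(A - sigma * np.eye(n), x)
        except np.linalg.LinAlgError:
            break                            # shift hit an eigenvalue exactly
        x = y / np.linalg.norm(y)
        if np.linalg.norm(A @ x - sigma * x) < tol:
            break
    return sigma, x

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])                   # eigenvalues 2 and 4
sigma, x = rayleigh_quotient_iteration(A, np.array([1.0, 0.1]))
print(sigma)                                 # converges to 2 or 4, depending on x0
```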
44 Rayleigh Quotient Iteration, continued
Using different shift at each iteration means matrix must be refactored each time to solve linear system, so cost per iteration is high unless matrix has special form that makes factorization easy. Same idea also works for complex matrices, for which transpose is replaced by conjugate transpose, so Rayleigh quotient becomes x^H A x / x^H x.
45 Example: Rayleigh Quotient Iteration
Using same matrix as previous examples and randomly chosen starting vector x_0, Rayleigh quotient iteration converges to an eigenvalue in two iterations.
46 Deflation
After eigenvalue λ_1 and corresponding eigenvector x_1 have been computed, additional eigenvalues λ_2, …, λ_n of A can be computed by deflation, which effectively removes known eigenvalue. Let H be any nonsingular matrix such that Hx_1 = αe_1, scalar multiple of first column of identity matrix (Householder transformation is good choice for H). Then similarity transformation determined by H transforms A into form
H A H^{−1} = [λ_1 b^T; o B],
where B is matrix of order n − 1 having eigenvalues λ_2, …, λ_n.
47 Deflation, continued
Thus, we can work with B to compute next eigenvalue λ_2. Moreover, if y_2 is eigenvector of B corresponding to λ_2, then
x_2 = H^{−1} [α; y_2], where α = b^T y_2 / (λ_2 − λ_1),
is eigenvector corresponding to λ_2 for original matrix A, provided λ_1 ≠ λ_2. Process can be repeated to find additional eigenvalues and eigenvectors.
48 Deflation, continued
Alternative approach lets u_1 be any vector such that u_1^T x_1 = λ_1. Then A − x_1 u_1^T has eigenvalues 0, λ_2, …, λ_n. Possible choices for u_1 include:
u_1 = λ_1 x_1, if A is symmetric and x_1 is normalized so that ‖x_1‖_2 = 1;
u_1 = λ_1 y_1, where y_1 is corresponding left eigenvector (i.e., A^T y_1 = λ_1 y_1) normalized so that y_1^T x_1 = 1;
u_1 = A^T e_k, if x_1 is normalized so that ‖x_1‖_∞ = 1 and kth component of x_1 is 1.
49 Simultaneous Iteration
Simplest method for computing many eigenvalue-eigenvector pairs is simultaneous iteration, which repeatedly multiplies matrix times matrix of initial starting vectors. Starting from n × p matrix X_0 of rank p, iteration scheme is X_k = AX_{k−1}. span(X_k) converges to invariant subspace determined by p largest eigenvalues of A, provided |λ_p| > |λ_{p+1}|. Also called subspace iteration.
50 Orthogonal Iteration
As with power iteration, normalization needed with simultaneous iteration. Each column of X_k converges to dominant eigenvector, so columns of X_k become increasingly ill-conditioned basis for span(X_k). Both issues addressed by computing QR factorization at each iteration:
Q̂_k R_k = X_{k−1},  X_k = A Q̂_k,
where Q̂_k R_k is reduced QR factorization of X_{k−1}. Iteration converges to block triangular form, and leading block is triangular if moduli of consecutive eigenvalues are distinct.
51 QR Iteration
For p = n and X_0 = I, matrices A_k = Q̂_k^H A Q̂_k generated by orthogonal iteration converge to triangular or block triangular form, yielding all eigenvalues of A. QR iteration computes successive matrices A_k without forming above product explicitly. Starting with A_0 = A, at iteration k compute QR factorization Q_k R_k = A_{k−1} and form reverse product A_k = R_k Q_k. Since A_k = R_k Q_k = Q_k^H A_{k−1} Q_k, successive matrices A_k unitarily similar to each other.
52 QR Iteration, continued
Diagonal entries (or eigenvalues of diagonal blocks) of A_k converge to eigenvalues of A. Product of orthogonal matrices Q_k converges to matrix of corresponding eigenvectors. If A symmetric, then symmetry preserved by QR iteration, so A_k converge to matrix that is both triangular and symmetric, hence diagonal.
53 Example: QR Iteration
Let A_0 = [7 2; 2 4]. Computing QR factorization A_0 = Q_1 R_1 and forming reverse product A_1 = R_1 Q_1 yields matrix whose off-diagonal entries are smaller and whose diagonal entries are closer to eigenvalues, 8 and 3. Process continues until matrix is within tolerance of being diagonal, and diagonal entries then closely approximate eigenvalues.
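The unshifted QR iteration can be sketched in a few lines of NumPy (the 2 × 2 matrix is assumed to be [[7, 2], [2, 4]], consistent with the stated eigenvalues 8 and 3):

```python
import numpy as np

def qr_iteration(A, iters=50):
    Ak = A.copy()
    for _ in range(iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q               # R Q = Q^T A_{k-1} Q, similar to A_{k-1}
    return Ak

A = np.array([[7.0, 2.0],
              [2.0, 4.0]])
Ak = qr_iteration(A)
print(np.diag(Ak))               # approaches the eigenvalues 8 and 3
```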
54 QR Iteration with Shifts
Convergence rate of QR iteration accelerated by incorporating shifts:
Q_k R_k = A_{k−1} − σ_k I,  A_k = R_k Q_k + σ_k I,
where σ_k is rough approximation to eigenvalue. Good shift can be determined by computing eigenvalues of 2 × 2 submatrix in lower right corner of matrix.
55 Example: QR Iteration with Shifts
Repeat previous example, but with shift σ_1 = 4, which is lower right corner entry of matrix. Computing QR factorization A_0 − σ_1 I = Q_1 R_1 and forming reverse product, adding back shift, A_1 = R_1 Q_1 + σ_1 I, gives off-diagonal entries smaller after one iteration than with unshifted algorithm, and diagonal entries are closer approximations to eigenvalues.
56 Preliminary Reduction
Efficiency of QR iteration enhanced by first transforming matrix as close to triangular form as possible before beginning iterations. Hessenberg matrix is triangular except for one additional nonzero diagonal immediately adjacent to main diagonal. Any matrix can be reduced to Hessenberg form in finite number of steps by orthogonal similarity transformation, for example using Householder transformations. Symmetric Hessenberg matrix is tridiagonal. Hessenberg or tridiagonal form preserved during successive QR iterations.
57 Preliminary Reduction, continued
Advantages of initial reduction to upper Hessenberg or tridiagonal form: Work per QR iteration reduced from O(n^3) to O(n^2) for general matrix or O(n) for symmetric matrix. Fewer QR iterations required because matrix nearly triangular (or diagonal) already. If any zero entries on first subdiagonal, then matrix is block triangular and problem can be broken into two or more smaller subproblems.
58 Preliminary Reduction, continued
QR iteration implemented in two stages:
symmetric → tridiagonal → diagonal, or
general → Hessenberg → triangular
Preliminary reduction requires definite number of steps, whereas subsequent iterative stage continues until convergence. In practice only modest number of iterations usually required, so much of work is in preliminary reduction. Cost of accumulating eigenvectors, if needed, dominates total cost.
59 Cost of QR Iteration
Approximate overall cost of preliminary reduction and QR iteration, counting both additions and multiplications:
Symmetric matrices: (4/3)n^3 for eigenvalues only; 9n^3 for eigenvalues and eigenvectors.
General matrices: 10n^3 for eigenvalues only; 25n^3 for eigenvalues and eigenvectors.
60 Krylov Subspace Methods
Krylov subspace methods reduce matrix to Hessenberg (or tridiagonal) form using only matrix-vector multiplication. For arbitrary starting vector x_0, if
K_k = [x_0 Ax_0 ⋯ A^{k−1}x_0],
then K_n^{−1} A K_n = C_n, where C_n is upper Hessenberg (in fact, companion matrix). To obtain better conditioned basis for span(K_n), compute QR factorization Q_n R_n = K_n, so that
Q_n^H A Q_n = R_n C_n R_n^{−1} ≡ H,
with H upper Hessenberg.
61 Krylov Subspace Methods, continued
Equating kth columns on each side of equation AQ_n = Q_n H yields recurrence
Aq_k = h_{1k} q_1 + ⋯ + h_{kk} q_k + h_{k+1,k} q_{k+1}
relating q_{k+1} to preceding vectors q_1, …, q_k. Premultiplying by q_j^H and using orthonormality,
h_{jk} = q_j^H A q_k,  j = 1, …, k.
These relationships yield Arnoldi iteration, which produces unitary matrix Q_n and upper Hessenberg matrix H_n column by column using only matrix-vector multiplication by A and inner products of vectors.
62 Arnoldi Iteration
x_0 = arbitrary nonzero starting vector
q_1 = x_0 / ‖x_0‖_2
for k = 1, 2, …
    u_k = A q_k
    for j = 1 to k
        h_{jk} = q_j^H u_k
        u_k = u_k − h_{jk} q_j
    end
    h_{k+1,k} = ‖u_k‖_2
    if h_{k+1,k} = 0 then stop
    q_{k+1} = u_k / h_{k+1,k}
end
63 Arnoldi Iteration, continued
If Q_k = [q_1 ⋯ q_k], then H_k = Q_k^H A Q_k is upper Hessenberg matrix. Eigenvalues of H_k, called Ritz values, are approximate eigenvalues of A, and Ritz vectors given by Q_k y, where y is eigenvector of H_k, are corresponding approximate eigenvectors of A. Eigenvalues of H_k must be computed by another method, such as QR iteration, but this is easier problem if k ≪ n.
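A compact NumPy transcription of the Arnoldi pseudocode, used here to compute Ritz values (real arithmetic; conjugates would be needed for complex A; the diagonal test matrix and random starting vector are our own choices):

```python
import numpy as np

def arnoldi(A, x0, k):
    """Return Q_k and upper Hessenberg H_k = Q_k^T A Q_k after k steps."""
    n = len(x0)
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = x0 / np.linalg.norm(x0)
    for j in range(k):
        u = A @ Q[:, j]
        for i in range(j + 1):            # orthogonalize against q_1, ..., q_{j+1}
            H[i, j] = Q[:, i] @ u
            u = u - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(u)
        if H[j + 1, j] == 0:              # invariant subspace found
            return Q[:, :j + 1], H[:j + 1, :j + 1]
        Q[:, j + 1] = u / H[j + 1, j]
    return Q[:, :k], H[:k, :k]

# Ritz values after a few steps approximate extreme eigenvalues of A
rng = np.random.default_rng(0)
A = np.diag(np.arange(1.0, 30.0))         # eigenvalues 1, ..., 29
Q, H = arnoldi(A, rng.standard_normal(29), 10)
theta = np.sort(np.linalg.eigvals(H).real)
print(theta[-1])                          # close to the largest eigenvalue, 29
```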
64 Arnoldi Iteration, continued
Arnoldi iteration fairly expensive in work and storage because each new vector q_k must be orthogonalized against all previous columns of Q_k, and all must be stored for that purpose. So Arnoldi process usually restarted periodically with carefully chosen starting vector. Ritz values and vectors produced are often good approximations to eigenvalues and eigenvectors of A after relatively few iterations.
65 Lanczos Iteration
Work and storage costs drop dramatically if matrix symmetric or Hermitian, since recurrence then has only three terms and H_k is tridiagonal (so usually denoted T_k), yielding Lanczos iteration
q_0 = o
β_0 = 0
x_0 = arbitrary nonzero starting vector
q_1 = x_0 / ‖x_0‖_2
for k = 1, 2, …
    u_k = A q_k
    α_k = q_k^H u_k
    u_k = u_k − β_{k−1} q_{k−1} − α_k q_k
    β_k = ‖u_k‖_2
    if β_k = 0 then stop
    q_{k+1} = u_k / β_k
end
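The Lanczos pseudocode translates directly to NumPy (a sketch; the diagonal test matrix with eigenvalues 1, …, 29 mirrors the example a few slides below, and the starting vector is random):

```python
import numpy as np

def lanczos(A, x0, k):
    """Return tridiagonal T_k and Lanczos vectors Q after k steps."""
    n = len(x0)
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k)
    q_prev = np.zeros(n)
    q = x0 / np.linalg.norm(x0)
    for j in range(k):
        Q[:, j] = q
        u = A @ q
        alpha[j] = q @ u                  # diagonal entry
        if j > 0:
            u = u - beta[j - 1] * q_prev  # three-term recurrence
        u = u - alpha[j] * q
        beta[j] = np.linalg.norm(u)       # subdiagonal entry
        if beta[j] == 0:
            break                         # invariant subspace found
        q_prev, q = q, u / beta[j]
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return T, Q

A = np.diag(np.arange(1.0, 30.0))         # eigenvalues 1, ..., 29
rng = np.random.default_rng(1)
T, Q = lanczos(A, rng.standard_normal(29), 10)
theta = np.sort(np.linalg.eigvalsh(T))    # Ritz values
print(theta[0], theta[-1])                # extremes approach 1 and 29
```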
66 Lanczos Iteration, continued
α_k and β_k are diagonal and subdiagonal entries of symmetric tridiagonal matrix T_k. As with Arnoldi, Lanczos iteration does not produce eigenvalues and eigenvectors directly, but only tridiagonal matrix T_k, whose eigenvalues and eigenvectors must be computed by another method to obtain Ritz values and vectors. If β_k = 0, then algorithm appears to break down, but in that case invariant subspace has already been identified (i.e., Ritz values and vectors are already exact at that point).
67 Lanczos Iteration, continued
In principle, if Lanczos algorithm were run until k = n, resulting tridiagonal matrix would be orthogonally similar to A. In practice, rounding error causes loss of orthogonality, invalidating this expectation. Problem can be overcome by reorthogonalizing vectors as needed, but expense can be substantial. Alternatively, can ignore problem, in which case algorithm still produces good eigenvalue approximations, but multiple copies of some eigenvalues may be generated.
68 Krylov Subspace Methods, continued
Great virtue of Arnoldi and Lanczos iterations is ability to produce good approximations to extreme eigenvalues for k ≪ n. Moreover, they require only one matrix-vector multiplication by A per step and little auxiliary storage, so are ideally suited to large sparse matrices. If eigenvalues are needed in middle of spectrum, say near σ, then algorithm can be applied to matrix (A − σI)^{−1}, assuming it is practical to solve systems of form (A − σI)x = y.
69 Example: Lanczos Iteration
For symmetric matrix with eigenvalues 1, …, 29, Ritz values produced by successive Lanczos iterations converge first to extreme eigenvalues of spectrum (original notes show plot of Ritz values against iteration number).
70 Jacobi Method
One of oldest methods for computing eigenvalues is Jacobi method, which uses similarity transformation based on plane rotations. Sequence of plane rotations chosen to annihilate symmetric pairs of matrix entries, eventually converging to diagonal form. Choice of plane rotation slightly more complicated than in Givens method for QR factorization. To annihilate given off-diagonal pair, choose c and s so that
J^T A J = [c −s; s c] [a b; b d] [c s; −s c] = [c^2 a − 2csb + s^2 d, c^2 b + cs(a − d) − s^2 b; c^2 b + cs(a − d) − s^2 b, c^2 d + 2csb + s^2 a]
is diagonal.
71 Jacobi Method, continued
Transformed matrix diagonal if c^2 b + cs(a − d) − s^2 b = 0. Dividing both sides by c^2 b, we obtain
1 + (s/c)·(a − d)/b − s^2/c^2 = 0
Making substitution t = s/c, we get quadratic equation
1 + t(a − d)/b − t^2 = 0
for tangent t of angle of rotation, from which we can recover c = 1/√(1 + t^2) and s = c·t. Advantageous numerically to use root of smaller magnitude.
72 Example: Plane Rotation
Consider 2 × 2 matrix A = [1 2; 2 1]. Quadratic equation for tangent reduces to t^2 = 1, so t = ±1. Two roots of same magnitude, so we arbitrarily choose t = 1, which yields c = 1/√2 and s = 1/√2. Resulting plane rotation J gives
J^T A J = [1/√2 −1/√2; 1/√2 1/√2] [1 2; 2 1] [1/√2 1/√2; −1/√2 1/√2] = [−1 0; 0 3].
73 Jacobi Method, continued
Starting with symmetric matrix A_0 = A, each iteration has form A_{k+1} = J_k^T A_k J_k, where J_k is plane rotation chosen to annihilate a symmetric pair of entries in A_k. Plane rotations repeatedly applied from both sides in systematic sweeps through matrix until off-diagonal mass of matrix reduced to within some tolerance of zero. Resulting diagonal matrix orthogonally similar to original matrix, so diagonal entries are eigenvalues, and eigenvectors given by product of plane rotations.
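A sketch of the classical Jacobi method in NumPy, annihilating the largest off-diagonal pair with the rotation derived above (the 3 × 3 test matrix is made up; a production code would sweep cyclically rather than search for the maximum each time):

```python
import numpy as np

def jacobi_eigenvalues(A, tol=1e-12, max_rotations=1000):
    A = A.astype(float).copy()
    n = A.shape[0]
    for _ in range(max_rotations):
        off = np.abs(A - np.diag(np.diag(A)))
        p, q = np.unravel_index(np.argmax(off), off.shape)
        if off[p, q] < tol:
            break
        a, b, d = A[p, p], A[p, q], A[q, q]
        theta = (a - d) / (2.0 * b)
        # smaller-magnitude root of t^2 - 2*theta*t - 1 = 0
        t = 1.0 if theta == 0 else -np.sign(theta) / (abs(theta) + np.hypot(1.0, theta))
        c = 1.0 / np.hypot(1.0, t)
        s = c * t
        J = np.eye(n)
        J[p, p] = J[q, q] = c
        J[p, q], J[q, p] = s, -s
        A = J.T @ A @ J              # annihilates entries (p, q) and (q, p)
    return np.sort(np.diag(A))

A = np.array([[4.0, 2.0, 1.0],
              [2.0, 5.0, 3.0],
              [1.0, 3.0, 6.0]])
print(jacobi_eigenvalues(A))
print(np.linalg.eigvalsh(A))         # should agree
```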
74 Jacobi Method, continued
Jacobi method reliable, simple to program, and capable of high accuracy, but converges rather slowly and is difficult to generalize beyond symmetric matrices. Except for small problems, more modern methods usually require 5 to 10 times less work than Jacobi. One source of inefficiency is that previously annihilated entries can subsequently become nonzero again, thereby requiring repeated annihilation. Newer methods such as QR iteration preserve zero entries introduced into matrix.
75 Example: Jacobi Method
Let A_0 be given symmetric 3 × 3 matrix. First annihilate (1,3) and (3,1) entries using plane rotation J_0 to obtain A_1 = J_0^T A_0 J_0.
76 Example Continued
Next annihilate (1,2) and (2,1) entries using plane rotation J_1 to obtain A_2 = J_1^T A_1 J_1. Next annihilate (2,3) and (3,2) entries using plane rotation J_2 to obtain A_3 = J_2^T A_2 J_2.
77 Example Continued
Beginning new sweep, again annihilate (1,3) and (3,1) entries using plane rotation J_3 to obtain A_4 = J_3^T A_3 J_3. Process continues until off-diagonal entries reduced to as small as desired. Result is diagonal matrix orthogonally similar to original matrix, with orthogonal similarity transformation given by product of plane rotations.
78 Bisection or Spectrum-Slicing
For real symmetric matrix, can determine how many eigenvalues are less than given real number σ. By systematically choosing various values for σ (slicing spectrum at σ) and monitoring resulting count, any eigenvalue can be isolated as accurately as desired. For example, symmetric indefinite factorization A = LDL^T makes inertia (numbers of positive, negative, and zero eigenvalues) of symmetric matrix A easy to determine. By applying factorization to matrix A − σI for various values of σ, individual eigenvalues can be isolated as accurately as desired using interval bisection technique.
79 Sturm Sequence
Another spectrum-slicing method for computing individual eigenvalues is based on Sturm sequence property of symmetric matrices. Let A be symmetric matrix and let p_r(σ) denote determinant of leading principal minor of order r of A − σI. Then zeros of p_r(σ) strictly separate those of p_{r−1}(σ), and number of agreements in sign of successive members of sequence p_r(σ), for r = 1, …, n, equals number of eigenvalues of A strictly greater than σ. Determinants p_r(σ) easy to compute if A transformed to tridiagonal form before applying Sturm sequence technique.
80 Divide-and-Conquer Method
Another method for computing eigenvalues and eigenvectors of real symmetric tridiagonal matrix is based on divide-and-conquer strategy. Express symmetric tridiagonal matrix T as
T = [T_1 O; O T_2] + β u u^T
Can now compute eigenvalues and eigenvectors of smaller matrices T_1 and T_2. To relate these back to eigenvalues and eigenvectors of original matrix requires solution of secular equation, which can be done reliably and efficiently. Applying this approach recursively yields divide-and-conquer algorithm for symmetric tridiagonal eigenproblems.
81 Relatively Robust Representation
With conventional methods, cost of computing eigenvalues of symmetric tridiagonal matrix is O(n^2), but if orthogonal eigenvectors are also computed, then cost rises to O(n^3). Another possibility is to compute eigenvalues first at O(n^2) cost, and then compute corresponding eigenvectors separately using inverse iteration with computed eigenvalues as shifts. Key to making this idea work is computing eigenvalues and corresponding eigenvectors to very high relative accuracy so that expensive explicit orthogonalization of eigenvectors is not needed. RRR (relatively robust representation) algorithm exploits this approach to produce eigenvalues and orthogonal eigenvectors at O(n^2) cost.
82 Generalized Eigenvalue Problems
Generalized eigenvalue problem has form Ax = λBx, where A and B are given n × n matrices. If either A or B is nonsingular, then generalized eigenvalue problem can be converted to standard eigenvalue problem, either (B^{−1}A)x = λx or (A^{−1}B)x = (1/λ)x. This is not recommended, since it may cause loss of accuracy due to rounding error, and loss of symmetry if A and B are symmetric. Better alternative for generalized eigenvalue problems is QZ algorithm.
83 QZ Algorithm
If A and B are triangular, then eigenvalues are given by λ_i = a_ii / b_ii, for b_ii ≠ 0. QZ algorithm reduces A and B simultaneously to upper triangular form by orthogonal transformations. First, B is reduced to upper triangular form by orthogonal transformation from left, which is also applied to A. Next, transformed A is reduced to upper Hessenberg form by orthogonal transformation from left, while maintaining triangular form of B, which requires additional transformations from right. Finally, analogous to QR iteration, A is reduced to triangular form while still maintaining triangular form of B, which again requires transformations from both sides.
84 QZ Algorithm, continued
Eigenvalues can now be determined from mutually triangular form, and eigenvectors can be recovered from products of left and right transformations, denoted by Q and Z
85
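The starting observation of the QZ algorithm, that a triangular pair yields its generalized eigenvalues as diagonal ratios, can be checked directly (example matrices are made up; since plain NumPy has no QZ routine, the comparison below uses eigenvalues of B^{-1}A instead):

```python
import numpy as np

# Upper triangular pair (A, B) with nonzero b_ii (illustrative data)
A = np.array([[2.0, 1.0, 0.5],
              [0.0, 6.0, 1.0],
              [0.0, 0.0, 3.0]])
B = np.array([[1.0, 0.3, 0.1],
              [0.0, 2.0, 0.4],
              [0.0, 0.0, 1.5]])

# Diagonal ratios a_ii / b_ii ...
ratios = np.sort(np.diag(A) / np.diag(B))
# ... match the generalized eigenvalues of the pair
gen_eigs = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)
assert np.allclose(ratios, gen_eigs)
```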
85 Computing SVD
Singular values of A are nonnegative square roots of eigenvalues of A^T A, and columns of U and V are orthonormal eigenvectors of AA^T and A^T A, respectively
Algorithms for computing SVD work directly with A, however, without forming AA^T or A^T A, thereby avoiding loss of information associated with forming these matrix products explicitly
SVD usually computed by variant of QR iteration, with A first reduced to bidiagonal form by orthogonal transformations, then remaining off-diagonal entries annihilated iteratively
SVD can also be computed by variant of Jacobi method, which can be useful on parallel computers or if matrix has special structure
86
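The relationship between singular values and eigenvalues of A^T A can be verified numerically (illustrative data; the comment notes why production SVD codes nonetheless avoid forming A^T A):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Singular values from np.linalg.svd, which works directly with A
sigma = np.linalg.svd(A, compute_uv=False)

# Compare against sqrt of eigenvalues of A^T A (fine for this small example,
# but forming A^T A explicitly squares the condition number and can lose
# information for ill-conditioned matrices)
eigs = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]   # descending, like sigma
assert np.allclose(sigma, np.sqrt(eigs))
```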
Scientific Computing: An Introductory Survey, Chapter 4: Eigenvalue Problems. Prof. Michael T. Heath, Department of Computer Science, University of Illinois at Urbana-Champaign. Copyright © 2002.