Krylov subspace projection methods
I.1.(a) Krylov subspace projection methods
Orthogonal projection technique: framework

Let $A$ be an $n \times n$ complex matrix and $\mathcal{K}$ be a $k$-dimensional subspace of $\mathbb{C}^n$. An orthogonal projection technique seeks an approximate eigenpair $(\tilde\lambda, \tilde u)$ with $\tilde\lambda \in \mathbb{C}$ and $\tilde u \in \mathcal{K}$. This approximate eigenpair is obtained by imposing the following Galerkin condition:

  $A\tilde u - \tilde\lambda \tilde u \perp \mathcal{K}$,   (1)

or, equivalently,

  $v^H (A\tilde u - \tilde\lambda \tilde u) = 0$ for all $v \in \mathcal{K}$.   (2)

In matrix form, assume that an orthonormal basis $\{v_1, v_2, \ldots, v_k\}$ of $\mathcal{K}$ is available. Denote $V = (v_1, v_2, \ldots, v_k)$, and let $\tilde u = V y$. Then condition (2) becomes

  $v_j^H (A V y - \tilde\lambda V y) = 0, \quad j = 1, \ldots, k$.
Therefore, $y$ and $\tilde\lambda$ must satisfy

  $B_k y = \tilde\lambda y$,   (3)

where $B_k = V^H A V$. Each eigenvalue $\tilde\lambda_i$ of $B_k$ is called a Ritz value, and $V y_i$ is called a Ritz vector, where $y_i$ is the eigenvector of $B_k$ associated with $\tilde\lambda_i$.
Rayleigh-Ritz procedure - orthogonal projection

1. Compute an orthonormal basis $\{v_i\}_{i=1,\ldots,k}$ of the subspace $\mathcal{K}$. Let $V = (v_1, v_2, \ldots, v_k)$.
2. Compute $B_k = V^H A V$.
3. Compute the eigenvalues of $B_k$ and select $k_0$ desired ones: $\tilde\lambda_i$, $i = 1, 2, \ldots, k_0$, where $k_0 \le k$.
4. Compute the eigenvectors $y_i$, $i = 1, \ldots, k_0$, of $B_k$ associated with $\tilde\lambda_i$, and form the corresponding approximate eigenvectors of $A$: $\tilde u_i = V y_i$, $i = 1, \ldots, k_0$.
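The four steps above can be sketched in a few lines of numpy. This is a minimal illustration, not part of the notes: the helper name `rayleigh_ritz` is hypothetical, and the selection rule for the $k_0$ "desired" Ritz values is taken here to be largest modulus.

```python
import numpy as np

def rayleigh_ritz(A, V, k0):
    """Orthogonal-projection Rayleigh-Ritz: V has orthonormal columns spanning
    the subspace K; returns k0 Ritz values and the Ritz vectors u_i = V y_i."""
    Bk = V.conj().T @ A @ V                  # step 2: B_k = V^H A V
    mu, Y = np.linalg.eig(Bk)                # step 3: eigenpairs of the small B_k
    idx = np.argsort(-np.abs(mu))[:k0]       # "desired" here = largest modulus
    return mu[idx], V @ Y[:, idx]            # step 4: Ritz vectors V y_i

# If K happens to be an invariant subspace, the Ritz pairs are exact eigenpairs:
A = np.diag([5.0, 3.0, 1.0, 0.5])
V = np.eye(4)[:, :2]                         # spans the eigenspace of 5 and 3
vals, vecs = rayleigh_ritz(A, V, 2)
```

The demo confirms the basic point of the framework: the quality of the Ritz pairs is entirely determined by how well the subspace $\mathcal{K}$ captures the desired eigenvectors.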
Oblique projection technique: framework

Select two subspaces $\mathcal{L}$ and $\mathcal{K}$, and then seek an approximate eigenpair $(\tilde\lambda, \tilde u)$ with $\tilde\lambda \in \mathbb{C}$ and $\tilde u \in \mathcal{K}$ that satisfies the Petrov-Galerkin condition:

  $v^H (A\tilde u - \tilde\lambda \tilde u) = 0$ for all $v \in \mathcal{L}$.   (4)

In matrix form, let $V$ denote a basis for the subspace $\mathcal{K}$ and $W$ a basis for $\mathcal{L}$. Then, writing $\tilde u = V y$, the Petrov-Galerkin condition (4) yields the reduced eigenvalue problem

  $B_k y = \tilde\lambda C_k y$,

where $B_k = W^H A V$ and $C_k = W^H V$.
If $C_k = W^H V = I$, the two bases are called biorthonormal. In order for a biorthonormal pair $V$ and $W$ to exist, the following additional assumption on $\mathcal{L}$ and $\mathcal{K}$ must hold: for any two bases $V$ and $W$ of $\mathcal{K}$ and $\mathcal{L}$, respectively,

  $\det(W^H V) \ne 0$.   (5)
Rayleigh-Ritz procedure - oblique projection

1. Compute orthonormal bases $\{v_i\}_{i=1,\ldots,k}$ of the subspace $\mathcal{K}$ and $\{w_i\}_{i=1,\ldots,k}$ of the subspace $\mathcal{L}$. Let $V = (v_1, v_2, \ldots, v_k)$ and $W = (w_1, w_2, \ldots, w_k)$.
2. Compute $B_k = W^H A V$ and $C_k = W^H V$.
3. Compute the eigenvalues of the pencil $B_k - \lambda C_k$ and select $k_0$ desired ones: $\tilde\lambda_i$, $i = 1, 2, \ldots, k_0$, where $k_0 \le k$.
4. Compute the right and left eigenvectors $y_i$ and $z_i$, $i = 1, \ldots, k_0$, of the pencil $B_k - \lambda C_k$ associated with $\tilde\lambda_i$, and form the corresponding approximate right and left eigenvectors of $A$: $\tilde u_i = V y_i$ and $\tilde v_i = W z_i$, $i = 1, \ldots, k_0$.
Optimality

Let $Q = (Q_k, Q_u)$ be an $n \times n$ orthogonal matrix, where $Q_k$ is $n \times k$, $Q_u$ is $n \times (n-k)$, and $\mathrm{span}(Q_k) = \mathcal{K}$. Then

  $T = Q^T A Q = \begin{pmatrix} Q_k^T A Q_k & Q_k^T A Q_u \\ Q_u^T A Q_k & Q_u^T A Q_u \end{pmatrix} = \begin{pmatrix} T_k & T_{ku} \\ T_{uk} & T_u \end{pmatrix}$.

The Ritz values and Ritz vectors are considered optimal approximations to the eigenvalues and eigenvectors of $A$ from the selected subspace $\mathcal{K} = \mathrm{span}(Q_k)$, as justified by the following.

Theorem. $\displaystyle \min_{S \in \mathbb{C}^{k \times k}} \|A Q_k - Q_k S\|_2 = \|A Q_k - Q_k T_k\|_2$.
Krylov subspace

  $\mathcal{K}_{k+1}(A, u_0) = \mathrm{span}\{u_0, A u_0, A^2 u_0, \ldots, A^k u_0\} = \{\, q(A) u_0 \mid q \in \mathcal{P}_k \,\}$,

where $\mathcal{P}_k$ is the set of all polynomials of degree at most $k$.

Properties of $\mathcal{K}_{k+1}(A, u_0)$:
1. $\mathcal{K}_k(A, u_0) \subseteq \mathcal{K}_{k+1}(A, u_0)$ and $A \mathcal{K}_k(A, u_0) \subseteq \mathcal{K}_{k+1}(A, u_0)$.
2. If $\sigma \ne 0$, then $\mathcal{K}_k(A, u_0) = \mathcal{K}_k(\sigma A, u_0) = \mathcal{K}_k(A, \sigma u_0)$.
3. For any scalar $\kappa$, $\mathcal{K}_k(A, u_0) = \mathcal{K}_k(A - \kappa I, u_0)$.
4. If $W$ is nonsingular, $\mathcal{K}_k(W^{-1} A W, W^{-1} u_0) = W^{-1} \mathcal{K}_k(A, u_0)$.
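Property 3 (shift invariance) is easy to verify numerically. The sketch below, not from the notes, builds the explicit Krylov matrices for $A$ and $A - \kappa I$ and checks that they have the same column space; the helper name `krylov_basis` is hypothetical.

```python
import numpy as np

def krylov_basis(A, u0, k):
    """Columns u0, A u0, ..., A^{k-1} u0, an explicit basis of K_k(A, u0)."""
    cols = [u0]
    for _ in range(k - 1):
        cols.append(A @ cols[-1])
    return np.column_stack(cols)

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
u0 = rng.standard_normal(6)
K1 = krylov_basis(A, u0, 4)
K2 = krylov_basis(A - 2.0 * np.eye(6), u0, 4)   # shift kappa = 2
# equal column spaces <=> stacking the two bases does not increase the rank
same_span = np.linalg.matrix_rank(np.hstack([K1, K2])) == np.linalg.matrix_rank(K1)
```

Shift invariance is what makes shift-and-invert and polynomial filtering compatible with Krylov methods: shifting $A$ changes the Ritz values but not the subspace itself.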
Arnoldi decomposition

An explicit Krylov basis $\{u_0, A u_0, A^2 u_0, \ldots, A^k u_0\}$ is not suitable for numerical computing: it is extremely ill-conditioned. Therefore, our first task is to replace the Krylov basis with a better conditioned basis, say an orthonormal basis.

Theorem. Let the columns of $K_{j+1} = (u_0 \;\; A u_0 \;\; \cdots \;\; A^j u_0)$ be linearly independent, and let

  $K_{j+1} = U_{j+1} R_{j+1}$   (6)

be the QR factorization of $K_{j+1}$. Then there is a $(j+1) \times j$ unreduced upper Hessenberg matrix $\hat H_j$ such that

  $A U_j = U_{j+1} \hat H_j$.   (7)

Conversely, if $U_{j+1}$ is orthonormal and satisfies (7), then

  $\mathrm{span}(U_{j+1}) = \mathrm{span}\{u_0, A u_0, \ldots, A^j u_0\}$.   (8)
Proof: Partitioning the QR decomposition (6), we have

  $K_{j+1} = (K_j \;\; A^j u_0) = (U_j \;\; u_{j+1}) \begin{pmatrix} R_j & r_j \\ 0 & r_{j+1,j+1} \end{pmatrix}$,

where $K_j = U_j R_j$ is the QR decomposition of $K_j$. Since $A K_j$ consists of the last $j$ columns of $K_{j+1}$, we have

  $A K_j = K_{j+1} \begin{pmatrix} 0 \\ I_j \end{pmatrix}$,

so that

  $A U_j = A K_j R_j^{-1} = K_{j+1} \begin{pmatrix} 0 \\ I_j \end{pmatrix} R_j^{-1} = U_{j+1} R_{j+1} \begin{pmatrix} 0 \\ I_j \end{pmatrix} R_j^{-1}$.

It is easy to verify that

  $\hat H_j = R_{j+1} \begin{pmatrix} 0 \\ I_j \end{pmatrix} R_j^{-1}$

is a $(j+1) \times j$ unreduced upper Hessenberg matrix. This completes the proof of (7). Conversely, suppose that $U_{j+1}$ satisfies (7); then the identity (8) follows by induction.
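The construction in the proof can be checked numerically on a small problem (small enough that the explicit Krylov matrix is still well conditioned). This is an illustrative sketch, not part of the notes:

```python
import numpy as np

rng = np.random.default_rng(6)
n, j = 8, 4
A = rng.standard_normal((n, n))
u0 = rng.standard_normal(n)
# explicit Krylov matrix K_{j+1} = (u0, A u0, ..., A^j u0)
K = np.column_stack([np.linalg.matrix_power(A, p) @ u0 for p in range(j + 1)])
U, R = np.linalg.qr(K)                        # K_{j+1} = U_{j+1} R_{j+1}
Rj = R[:j, :j]                                # leading block: K_j = U_j R_j
E = np.vstack([np.zeros((1, j)), np.eye(j)])  # shift matrix: A K_j = K_{j+1} E
Hhat = R @ E @ np.linalg.inv(Rj)              # (j+1) x j, as in the proof
```

The test confirms both claims of the theorem: `Hhat` is upper Hessenberg, and it satisfies the compact decomposition (7).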
Arnoldi decomposition: by partitioning

  $\hat H_j = \begin{pmatrix} H_j \\ h_{j+1,j} e_j^T \end{pmatrix}$,

the decomposition (7) can be written as follows:

  $A U_j = U_j H_j + h_{j+1,j} u_{j+1} e_j^T$.   (9)

We call (9) an Arnoldi decomposition of order $j$; the decomposition (7) is its compact form.
Arnoldi procedure

By the Arnoldi decomposition (9), we deduce the following process to generate an orthonormal basis $\{v_1, v_2, \ldots, v_{k+1}\}$ of the Krylov subspace $\mathcal{K}_{k+1}(A, v)$:

Arnoldi Process:
1. $v_1 = v / \|v\|_2$
2. for $j = 1, 2, \ldots, k$
3.   compute $w = A v_j$
4.   for $i = 1, 2, \ldots, j$
5.     $h_{ij} = v_i^H w$
6.     $w = w - h_{ij} v_i$
7.   end for
8.   $h_{j+1,j} = \|w\|_2$
9.   if $h_{j+1,j} = 0$, stop
10.  $v_{j+1} = w / h_{j+1,j}$
11. end for
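The process above translates almost line by line into numpy. This sketch (real case, hypothetical helper name) also adds the reorthogonalization pass mentioned in the remarks that follow, so the computed basis stays orthogonal to working accuracy:

```python
import numpy as np

def arnoldi(A, v, k):
    """k steps of the Arnoldi process: returns V (n x (k+1)) with orthonormal
    columns and the (k+1) x k upper Hessenberg H with A V[:, :k] = V @ H."""
    n = A.shape[0]
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ V[:, j]
        h = V[:, :j + 1].T @ w           # Gram-Schmidt coefficients h_{ij}
        w = w - V[:, :j + 1] @ h
        g = V[:, :j + 1].T @ w           # one reorthogonalization pass
        w = w - V[:, :j + 1] @ g
        H[:j + 1, j] = h + g
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] == 0:             # breakdown: an invariant subspace found
            break
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
V, H = arnoldi(A, np.ones(50), 10)
```

Note that `A` is touched only through the products `A @ V[:, j]`, which is exactly what makes the method attractive for large sparse matrices.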
Remarks:
1. The matrix $A$ is referenced only via the matrix-vector products $A v_j$. Therefore the process is ideal for large-scale matrices: any sparsity or structure of the matrix can be exploited.
2. The main storage requirement is $(k+1)n$ for storing the Arnoldi vectors $\{v_i\}$.
3. The cost of arithmetic is $k$ matrix-vector products plus about $2k^2 n$ flops for the rest. It is common for the matrix-vector products to be the dominant cost.
4. The Arnoldi procedure breaks down when $h_{j+1,j} = 0$ for some $j$. It is easy to see that if the Arnoldi procedure breaks down at step $j$ (i.e., $h_{j+1,j} = 0$), then $\mathcal{K}_j = \mathrm{span}(V_j)$ is an invariant subspace of $A$.
5. Some care must be taken to ensure that the vectors $v_j$ remain orthogonal to working accuracy in the presence of rounding error. The usual technique is reorthogonalization.
Arnoldi decomposition

Denote

  $V_k = (v_1 \;\; v_2 \;\; \cdots \;\; v_k)$

and

  $H_k = \begin{pmatrix} h_{11} & h_{12} & \cdots & h_{1,k-1} & h_{1k} \\ h_{21} & h_{22} & \cdots & h_{2,k-1} & h_{2k} \\ & h_{32} & \cdots & h_{3,k-1} & h_{3k} \\ & & \ddots & \vdots & \vdots \\ & & & h_{k,k-1} & h_{kk} \end{pmatrix}$.

The Arnoldi process can be expressed in the following governing relations:

  $A V_k = V_k H_k + h_{k+1,k} v_{k+1} e_k^T$   (10)

and

  $V_k^H V_k = I, \qquad V_k^H v_{k+1} = 0$.
The decomposition is uniquely determined by the starting vector $v$ (the implicit Q theorem). Since $V_k^H v_{k+1} = 0$, we have $H_k = V_k^H A V_k$.

Let $\mu$ be an eigenvalue of $H_k$ and $y$ a corresponding eigenvector, i.e.,

  $H_k y = \mu y, \quad \|y\|_2 = 1$.

Then the corresponding Ritz pair is $(\mu, V_k y)$. Multiplying (10) on the right by $y$, the residual vector for $(\mu, V_k y)$ is given by

  $A (V_k y) - \mu (V_k y) = h_{k+1,k} v_{k+1} (e_k^T y)$.
Using the backward error interpretation, we know that $(\mu, V_k y)$ is an exact eigenpair of $A + E$:

  $(A + E)(V_k y) = \mu (V_k y)$, where $\|E\|_2 = |h_{k+1,k}| \, |e_k^T y|$.

This gives us a criterion for deciding whether to accept the Ritz pair $(\mu, V_k y)$ as an accurate approximate eigenpair of $A$.
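The practical value of this criterion is that $|h_{k+1,k}|\,|e_k^T y|$ estimates the residual norm without any further products with $A$. The sketch below (illustrative only; the compact `arnoldi` helper is hypothetical) compares the cheap estimate with the true residual norms:

```python
import numpy as np

def arnoldi(A, v, k):
    """Compact Arnoldi with reorthogonalization; A V_k = V_{k+1} Hbar."""
    n = A.shape[0]
    V = np.zeros((n, k + 1)); Hb = np.zeros((k + 1, k))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ V[:, j]
        h = V[:, :j + 1].T @ w; w = w - V[:, :j + 1] @ h
        g = V[:, :j + 1].T @ w; w = w - V[:, :j + 1] @ g
        Hb[:j + 1, j] = h + g
        Hb[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / Hb[j + 1, j]
    return V, Hb

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 40))
k = 12
V, Hb = arnoldi(A, np.ones(40), k)
mu, Y = np.linalg.eig(Hb[:k, :])               # Ritz values, unit-norm y_i
est = abs(Hb[k, k - 1]) * np.abs(Y[k - 1, :])  # |h_{k+1,k}| |e_k^T y_i|, no A needed
Ritz = V[:, :k] @ Y                            # Ritz vectors V_k y_i
true = np.linalg.norm(A @ Ritz - Ritz * mu, axis=0)
```

The two quantities agree to rounding error, so convergence of individual Ritz pairs can be monitored from the small problem alone.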
18 Arnoldi method = RR + Arnoldi 1. Choose a starting vector v; 2. Generate the Arnoldi decomposition of length k by the Arnoldi process; 3. Compute the Ritz pairs and decide which ones are acceptable; 4. If necessary, increase k and repeat.
An example

$A$ = sprandn(100,100,0.1) and $v = (1, 1, \ldots, 1)^T$. "+" marks the eigenvalues of the matrix $A$; "o" marks the eigenvalues of the upper Hessenberg matrix $H_{30}$.
[Figure: eigenvalues of $A$ in "+" and of $H_{30}$ in "o", plotted as real part vs. imaginary part.]

Observation: exterior eigenvalues converge first, a typical convergence phenomenon.
The need for restarting

The algorithm has two nice aspects:
1. $H_k$ is already in Hessenberg form, so we can immediately apply the QR algorithm to find its eigenvalues.
2. After we increase $k$ to, say, $k+p$, we only have to orthogonalize $p$ vectors to compute the $(k+p)$th Arnoldi decomposition. The work already completed previously is not thrown away.

Unfortunately, the algorithm has its drawbacks, too:
1. If $A$ is large, we cannot increase $k$ indefinitely, since $V_k$ requires $nk$ memory locations to store.
2. We have little control over which eigenpairs the algorithm finds.
Implicit restarting

Goal: purge an unwanted eigenvalue $\mu$ from $H_k$.

1. Exact arithmetic case: By one step of the QR algorithm with shift $\mu$, we have

  $R = U^H (H_k - \mu I)$ = upper triangular.

Note that $H_k - \mu I$ is singular, hence $R$ must have a zero on its diagonal. Because $H_k$ is unreduced, $r_{kk} = 0$. Furthermore, note that $U = P_{12} P_{23} \cdots P_{k-1,k}$, where $P_{i,i+1}$ is a rotation in the $(i, i+1)$-plane. Consequently, $U$ is upper Hessenberg; in particular, its last row is $u_{k,k-1} e_{k-1}^T + u_{k,k} e_k^T$. Hence

  $R U + \mu I = \begin{pmatrix} \hat H & \hat h \\ 0 & \mu \end{pmatrix} = U^H H_k U$.
In other words, one step of the shifted QR algorithm has found the eigenvalue $\mu$ exactly and has deflated the problem.

2. Finite precision arithmetic case: In the presence of rounding error, after one step of the shifted QR algorithm we have

  $\hat U^H H_k \hat U = \begin{pmatrix} \hat H & \hat h \\ \hat h_{k,k-1} e_{k-1}^T & \tilde\mu \end{pmatrix}$.

From the Arnoldi decomposition (10), we have

  $A V_k \hat U = V_k \hat U (\hat U^H H_k \hat U) + h_{k+1,k} v_{k+1} e_k^T \hat U$.

Partition

  $\tilde V_k = V_k \hat U = (\tilde V_{k-1} \;\; \tilde v_k)$.

Then, since the last row of $\hat U$ is $\hat u_{k,k-1} e_{k-1}^T + \hat u_{k,k} e_k^T$,

  $A (\tilde V_{k-1} \;\; \tilde v_k) = (\tilde V_{k-1} \;\; \tilde v_k) \begin{pmatrix} \hat H & \hat h \\ \hat h_{k,k-1} e_{k-1}^T & \tilde\mu \end{pmatrix} + h_{k+1,k} v_{k+1} (\hat u_{k,k-1} e_{k-1}^T + \hat u_{k,k} e_k^T)$.
From the first $k-1$ columns of this partition, we get

  $A \tilde V_{k-1} = \tilde V_{k-1} \hat H + f e_{k-1}^T$,   (11)

where $f = \hat h_{k,k-1} \tilde v_k + h_{k+1,k} \hat u_{k,k-1} v_{k+1}$. Note that $\hat H$ is Hessenberg and $f$ is orthogonal to $\tilde V_{k-1}$. Hence (11) is an Arnoldi decomposition of length $k-1$. The process may be repeated to remove other unwanted values from $H_k$.
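One exact-shift restart step can be verified numerically. The sketch below (illustrative, not from the notes; the `arnoldi` helper is hypothetical, and $A$ is taken symmetric so that the chosen shift is real) applies one shifted QR step to $H_k$, truncates, and checks that the relation (11) holds: the residual of the truncated decomposition lives only in its last column, and that column is orthogonal to the new basis.

```python
import numpy as np

def arnoldi(A, v, k):
    """Compact Arnoldi with reorthogonalization; A V_k = V_{k+1} Hbar."""
    n = A.shape[0]
    V = np.zeros((n, k + 1)); Hb = np.zeros((k + 1, k))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ V[:, j]
        h = V[:, :j + 1].T @ w; w = w - V[:, :j + 1] @ h
        g = V[:, :j + 1].T @ w; w = w - V[:, :j + 1] @ g
        Hb[:j + 1, j] = h + g
        Hb[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / Hb[j + 1, j]
    return V, Hb

rng = np.random.default_rng(2)
S = rng.standard_normal((30, 30)); A = (S + S.T) / 2   # symmetric => real shifts
k = 8
V, Hb = arnoldi(A, np.ones(30), k)
Hk = Hb[:k, :]
mu = np.min(np.linalg.eigvals(Hk).real)     # treat the smallest Ritz value as unwanted
Q, R = np.linalg.qr(Hk - mu * np.eye(k))    # one shifted QR step: H - mu I = Q R
Hnew = R @ Q + mu * np.eye(k)               # = Q^T H_k Q, still Hessenberg
Vt = V[:, :k] @ Q                           # rotated basis
# truncate to length k-1: A Vt_{k-1} should equal Vt_{k-1} Hnew_{k-1} + f e_{k-1}^T
Res = A @ Vt[:, :k - 1] - Vt[:, :k - 1] @ Hnew[:k - 1, :k - 1]
```

Because the Krylov structure is preserved, the restart costs only a small dense QR step and one basis update; no products with $A$ are needed.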
The symmetric Lanczos procedure

Observation: in the Arnoldi decomposition, if $A$ is symmetric, then the upper Hessenberg matrix $H_j$ is symmetric tridiagonal. The following is a simplified process to compute an orthonormal basis of a Krylov subspace.

Lanczos process:
1. $q_1 = v / \|v\|_2$; $\beta_0 = 0$; $q_0 = 0$;
2. for $j = 1$ to $k$ do
3.   $w = A q_j$;
4.   $\alpha_j = q_j^T w$;
5.   $w = w - \alpha_j q_j - \beta_{j-1} q_{j-1}$;
6.   $\beta_j = \|w\|_2$;
7.   if $\beta_j = 0$, quit;
8.   $q_{j+1} = w / \beta_j$;
9. end for
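The three-term recurrence translates directly into numpy. This sketch (hypothetical helper name, not from the notes) adds full reorthogonalization, anticipating the rounding-error issue that affects Lanczos in practice:

```python
import numpy as np

def lanczos(A, v, k):
    """k steps of the symmetric Lanczos process with full reorthogonalization.
    Returns Q (n x (k+1)), alpha (k,), beta (k,) so that
    A Q_k = Q_k T_k + beta_k q_{k+1} e_k^T."""
    n = A.shape[0]
    Q = np.zeros((n, k + 1)); alpha = np.zeros(k); beta = np.zeros(k)
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w = w - alpha[j] * Q[:, j]
        if j > 0:
            w = w - beta[j - 1] * Q[:, j - 1]
        w = w - Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
        beta[j] = np.linalg.norm(w)
        Q[:, j + 1] = w / beta[j]
    return Q, alpha, beta

rng = np.random.default_rng(3)
S = rng.standard_normal((60, 60)); A = (S + S.T) / 2
k = 20
Q, alpha, beta = lanczos(A, np.ones(60), k)
T = np.diag(alpha) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
Res = A @ Q[:, :k] - Q[:, :k] @ T          # should equal beta_k q_{k+1} e_k^T
```

Compared to Arnoldi, only two previous vectors enter the recurrence, so without reorthogonalization the arithmetic cost per step is independent of $k$.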
The symmetric Lanczos algorithm: governing equations

Denote

  $Q_k = (q_1 \;\; q_2 \;\; \cdots \;\; q_k)$

and

  $T_k = \begin{pmatrix} \alpha_1 & \beta_1 & & \\ \beta_1 & \alpha_2 & \ddots & \\ & \ddots & \ddots & \beta_{k-1} \\ & & \beta_{k-1} & \alpha_k \end{pmatrix} = \mathrm{tridiag}(\beta_{j-1}, \alpha_j, \beta_j)$.

The $k$-step Lanczos process yields

  $A Q_k = Q_k T_k + f_k e_k^T$, where $f_k = \beta_k q_{k+1}$,   (12)

and

  $Q_k^T Q_k = I, \qquad Q_k^T q_{k+1} = 0$.
Let $T_k y = \mu y$ with $\|y\|_2 = 1$. Then

  $A (Q_k y) = Q_k T_k y + f_k (e_k^T y) = \mu (Q_k y) + f_k (e_k^T y)$.

Here $\mu$ is a Ritz value, and $Q_k y$ is the corresponding Ritz vector.
Error bound

Lemma. Let $H$ be symmetric, and $H z - \mu z = r$ with $z \ne 0$. Then

  $\displaystyle \min_{\lambda \in \lambda(H)} |\lambda - \mu| \le \frac{\|r\|_2}{\|z\|_2}$.

Proof: Let $H = U \Lambda U^T$ be the eigendecomposition of $H$. Then

  $(H - \mu I) z = r \;\Rightarrow\; U (\Lambda - \mu I) U^T z = r \;\Rightarrow\; (\Lambda - \mu I)(U^T z) = U^T r$.

Notice that $\Lambda - \mu I$ is diagonal. Thus

  $\|r\|_2 = \|U^T r\|_2 = \|(\Lambda - \mu I)(U^T z)\|_2 \ge \min_{\lambda \in \lambda(H)} |\lambda - \mu| \, \|U^T z\|_2 = \min_{\lambda \in \lambda(H)} |\lambda - \mu| \, \|z\|_2$,

as expected.
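The lemma holds for any trial pair $(\mu, z)$, not only Ritz pairs, and can be checked directly. A minimal numpy sketch (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(4)
S = rng.standard_normal((8, 8)); H = (S + S.T) / 2    # a symmetric test matrix
z = rng.standard_normal(8)                            # an arbitrary trial vector
mu = 1.3                                              # an arbitrary trial value
r = H @ z - mu * z
bound = np.linalg.norm(r) / np.linalg.norm(z)
gap = np.min(np.abs(np.linalg.eigvalsh(H) - mu))      # distance to spectrum
```

However crude the pair, the relative residual norm always localizes an eigenvalue of $H$ within `bound` of `mu`.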
Error bound

If $f_k (e_k^T y) = 0$ for some $k$, then the associated Ritz value $\mu$ is an eigenvalue of $A$ with the corresponding eigenvector $Q_k y$. Letting $\|r\|_2 = \|f_k\|_2 \, |e_k^T y|$, by the lemma we know that for the Ritz pair $(\mu, Q_k y)$ there is an eigenvalue $\lambda$ of $A$ such that

  $\displaystyle |\lambda - \mu| \le \frac{\|f_k\|_2 \, |e_k^T y|}{\|Q_k y\|_2}$.
Lanczos method = RR + Lanczos

Simple Lanczos Algorithm:
1. $q_1 = v / \|v\|_2$; $\beta_0 = 0$; $q_0 = 0$;
2. for $j = 1$ to $k$ do
3.   $w = A q_j$;
4.   $\alpha_j = q_j^T w$;
5.   $w = w - \alpha_j q_j - \beta_{j-1} q_{j-1}$;
6.   $\beta_j = \|w\|_2$;
7.   if $\beta_j = 0$, quit;
8.   $q_{j+1} = w / \beta_j$;
9.   compute the eigenvalues and eigenvectors of $T_j$;
10.  test for convergence;
11. end for
Example

$A$ = a random diagonal matrix of order $n = 1000$, $v = (1, 1, \ldots, 1)^T$.

[Figure: convergence behavior, Ritz values vs. Lanczos step, for 30 steps of Lanczos (full reorthogonalization) applied to $A$.]
We observe that:
1. Extreme eigenvalues, i.e., the largest and smallest ones, converge first, and the interior eigenvalues converge last.
2. Convergence is monotonic, with the $i$th largest (smallest) eigenvalue of $T_k$ increasing (decreasing) to the $i$th largest (smallest) eigenvalue of $A$.

=> Convergence analysis
Thick Restarting

Select two indices $l$ and $u$ to indicate the Ritz values to be kept at both ends of the spectrum:

  keep: $\theta_1, \ldots, \theta_l$;  discard: $\theta_{l+1}, \ldots, \theta_{u-1}$;  keep: $\theta_u, \ldots, \theta_m$.

The corresponding kept Ritz vectors are denoted by

  $\hat Q_k = [\hat q_1, \hat q_2, \ldots, \hat q_k] = Q_m Y_k$,   (13)

where

  $k = l + (m - u + 1)$,   (14)

  $Y_k = [y_1, y_2, \ldots, y_l, y_u, y_{u+1}, \ldots, y_m]$,   (15)

and $y_i$ is the eigenvector of $T_m$ corresponding to $\theta_i$.
Set these Ritz vectors $\hat Q_k$ as the first $k$ basis vectors at the restart, and set $\hat q_{k+1} = q_{m+1}$. To compute the $(k+2)$th basis vector $\hat q_{k+2}$, TRLan computes $A \hat q_{k+1}$ and orthonormalizes it against the previous $k+1$ basis vectors. That is,

  $\hat\beta_{k+1} \hat q_{k+2} = A \hat q_{k+1} - \hat Q_k (\hat Q_k^H A \hat q_{k+1}) - \hat q_{k+1} (\hat q_{k+1}^H A \hat q_{k+1})$.

Note that $A \hat Q_k$ satisfies the relation

  $A \hat Q_k = \hat Q_k D_k + \beta_m \hat q_{k+1} s^H$,

where $D_k$ is the $k \times k$ diagonal matrix whose diagonal elements are the kept Ritz values, and $s = Y_k^H e_m$. Thus, since $A$ is Hermitian, the coefficients $\hat Q_k^H A \hat q_{k+1}$ can be computed efficiently:

  $\hat Q_k^H A \hat q_{k+1} = (A \hat Q_k)^H \hat q_{k+1} = D_k Y_k^H (Q_m^H q_{m+1}) + \beta_m s (\hat q_{k+1}^H \hat q_{k+1}) = \beta_m s$,

using $Q_m^H q_{m+1} = 0$ and $\|\hat q_{k+1}\|_2 = 1$.
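The relation $A \hat Q_k = \hat Q_k D_k + \beta_m \hat q_{k+1} s^H$ can be verified directly from a Lanczos run. The sketch below is illustrative only: the `lanczos` helper and the particular choices of $m$, $l$, $u$ are assumptions, not part of TRLan.

```python
import numpy as np

def lanczos(A, v, m):
    """Compact symmetric Lanczos with full reorthogonalization."""
    n = A.shape[0]
    Q = np.zeros((n, m + 1)); alpha = np.zeros(m); beta = np.zeros(m)
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w = w - alpha[j] * Q[:, j]
        if j > 0:
            w = w - beta[j - 1] * Q[:, j - 1]
        w = w - Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        beta[j] = np.linalg.norm(w)
        Q[:, j + 1] = w / beta[j]
    return Q, alpha, beta

rng = np.random.default_rng(5)
S = rng.standard_normal((80, 80)); A = (S + S.T) / 2
m, l, u = 20, 3, 18                        # keep theta_1..theta_3 and theta_18..theta_20
Q, alpha, beta = lanczos(A, np.ones(80), m)
T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
theta, Y = np.linalg.eigh(T)               # theta_1 <= ... <= theta_m
keep = list(range(l)) + list(range(u - 1, m))
Yk = Y[:, keep]
Qhat = Q[:, :m] @ Yk                       # kept Ritz vectors, k = l + (m - u + 1)
s = Yk[m - 1, :]                           # s = Y_k^T e_m
Res = A @ Qhat - Qhat @ np.diag(theta[keep]) - beta[m - 1] * np.outer(Q[:, m], s)
```

This is why the restart is cheap: $A$ applied to all kept Ritz vectors is known from quantities already computed, so only the new vectors require matrix-vector products.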
Then, after the first iteration after the restart, we have

  $A \hat Q_{k+1} = \hat Q_{k+1} \hat T_{k+1} + \hat\beta_{k+1} \hat q_{k+2} e_{k+1}^H$,

where

  $\hat T_{k+1} = \begin{pmatrix} D_k & \beta_m s \\ \beta_m s^H & \hat\alpha_{k+1} \end{pmatrix}$.

In general, at the $i$th iteration after the restart, the new basis vector $\hat q_{k+i+1}$ satisfies the relation

  $A \hat Q_{k+i} = \hat Q_{k+i} \hat T_{k+i} + \hat\beta_{k+i} \hat q_{k+i+1} e_{k+i}^H$,

where $\hat T_{k+i} = \hat Q_{k+i}^H A \hat Q_{k+i}$ is of the form

  $\hat T_{k+i} = \begin{pmatrix} D_k & \beta_m s & & & \\ \beta_m s^H & \hat\alpha_{k+1} & \hat\beta_{k+1} & & \\ & \hat\beta_{k+1} & \hat\alpha_{k+2} & \ddots & \\ & & \ddots & \ddots & \hat\beta_{k+i-1} \\ & & & \hat\beta_{k+i-1} & \hat\alpha_{k+i} \end{pmatrix}$.
Note that the three-term recurrence fails only when computing the $(k+2)$th basis vector; it is resumed afterward.
More informationApplied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic
Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.4: Krylov Subspace Methods 2 / 51 Krylov Subspace Methods In this chapter we give
More information5 Selected Topics in Numerical Linear Algebra
5 Selected Topics in Numerical Linear Algebra In this chapter we will be concerned mostly with orthogonal factorizations of rectangular m n matrices A The section numbers in the notes do not align with
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra)
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 9 Minimizing Residual CG
More informationA communication-avoiding thick-restart Lanczos method on a distributed-memory system
A communication-avoiding thick-restart Lanczos method on a distributed-memory system Ichitaro Yamazaki and Kesheng Wu Lawrence Berkeley National Laboratory, Berkeley, CA, USA Abstract. The Thick-Restart
More informationEIGENVALUE PROBLEMS. Background on eigenvalues/ eigenvectors / decompositions. Perturbation analysis, condition numbers..
EIGENVALUE PROBLEMS Background on eigenvalues/ eigenvectors / decompositions Perturbation analysis, condition numbers.. Power method The QR algorithm Practical QR algorithms: use of Hessenberg form and
More informationKrylov Subspace Methods to Calculate PageRank
Krylov Subspace Methods to Calculate PageRank B. Vadala-Roth REU Final Presentation August 1st, 2013 How does Google Rank Web Pages? The Web The Graph (A) Ranks of Web pages v = v 1... Dominant Eigenvector
More informationQR-decomposition. The QR-decomposition of an n k matrix A, k n, is an n n unitary matrix Q and an n k upper triangular matrix R for which A = QR
QR-decomposition The QR-decomposition of an n k matrix A, k n, is an n n unitary matrix Q and an n k upper triangular matrix R for which In Matlab A = QR [Q,R]=qr(A); Note. The QR-decomposition is unique
More informationLecture 3: QR-Factorization
Lecture 3: QR-Factorization This lecture introduces the Gram Schmidt orthonormalization process and the associated QR-factorization of matrices It also outlines some applications of this factorization
More information6.4 Krylov Subspaces and Conjugate Gradients
6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P
More informationLecture 9: Krylov Subspace Methods. 2 Derivation of the Conjugate Gradient Algorithm
CS 622 Data-Sparse Matrix Computations September 19, 217 Lecture 9: Krylov Subspace Methods Lecturer: Anil Damle Scribes: David Eriksson, Marc Aurele Gilles, Ariah Klages-Mundt, Sophia Novitzky 1 Introduction
More informationInstitute for Advanced Computer Studies. Department of Computer Science. Iterative methods for solving Ax = b. GMRES/FOM versus QMR/BiCG
University of Maryland Institute for Advanced Computer Studies Department of Computer Science College Park TR{96{2 TR{3587 Iterative methods for solving Ax = b GMRES/FOM versus QMR/BiCG Jane K. Cullum
More informationA Chebyshev-based two-stage iterative method as an alternative to the direct solution of linear systems
A Chebyshev-based two-stage iterative method as an alternative to the direct solution of linear systems Mario Arioli m.arioli@rl.ac.uk CCLRC-Rutherford Appleton Laboratory with Daniel Ruiz (E.N.S.E.E.I.H.T)
More informationACM 104. Homework Set 5 Solutions. February 21, 2001
ACM 04 Homework Set 5 Solutions February, 00 Franklin Chapter 4, Problem 4, page 0 Let A be an n n non-hermitian matrix Suppose that A has distinct eigenvalues λ,, λ n Show that A has the eigenvalues λ,,
More informationThe German word eigen is cognate with the Old English word āgen, which became owen in Middle English and own in modern English.
Chapter 4 EIGENVALUE PROBLEM The German word eigen is cognate with the Old English word āgen, which became owen in Middle English and own in modern English. 4.1 Mathematics 4.2 Reduction to Upper Hessenberg
More informationHessenberg eigenvalue eigenvector relations and their application to the error analysis of finite precision Krylov subspace methods
Hessenberg eigenvalue eigenvector relations and their application to the error analysis of finite precision Krylov subspace methods Jens Peter M. Zemke Minisymposium on Numerical Linear Algebra Technical
More informationNumerical Solution of Linear Eigenvalue Problems
Numerical Solution of Linear Eigenvalue Problems Jessica Bosch and Chen Greif Abstract We review numerical methods for computing eigenvalues of matrices We start by considering the computation of the dominant
More informationEfficient Methods For Nonlinear Eigenvalue Problems. Diploma Thesis
Efficient Methods For Nonlinear Eigenvalue Problems Diploma Thesis Timo Betcke Technical University of Hamburg-Harburg Department of Mathematics (Prof. Dr. H. Voß) August 2002 Abstract During the last
More informationA Jacobi Davidson Method for Nonlinear Eigenproblems
A Jacobi Davidson Method for Nonlinear Eigenproblems Heinrich Voss Section of Mathematics, Hamburg University of Technology, D 21071 Hamburg voss @ tu-harburg.de http://www.tu-harburg.de/mat/hp/voss Abstract.
More informationData Analysis and Manifold Learning Lecture 2: Properties of Symmetric Matrices and Examples
Data Analysis and Manifold Learning Lecture 2: Properties of Symmetric Matrices and Examples Radu Horaud INRIA Grenoble Rhone-Alpes, France Radu.Horaud@inrialpes.fr http://perception.inrialpes.fr/ Outline
More informationFast iterative solvers
Utrecht, 15 november 2017 Fast iterative solvers Gerard Sleijpen Department of Mathematics http://www.staff.science.uu.nl/ sleij101/ Review. To simplify, take x 0 = 0, assume b 2 = 1. Solving Ax = b for
More informationA hybrid reordered Arnoldi method to accelerate PageRank computations
A hybrid reordered Arnoldi method to accelerate PageRank computations Danielle Parker Final Presentation Background Modeling the Web The Web The Graph (A) Ranks of Web pages v = v 1... Dominant Eigenvector
More informationMath 577 Assignment 7
Math 577 Assignment 7 Thanks for Yu Cao 1. Solution. The linear system being solved is Ax = 0, where A is a (n 1 (n 1 matrix such that 2 1 1 2 1 A =......... 1 2 1 1 2 and x = (U 1, U 2,, U n 1. By the
More informationMatrix functions and their approximation. Krylov subspaces
[ 1 / 31 ] University of Cyprus Matrix functions and their approximation using Krylov subspaces Matrixfunktionen und ihre Approximation in Krylov-Unterräumen Stefan Güttel stefan@guettel.com Nicosia, 24th
More informationLecture 9 Least Square Problems
March 26, 2018 Lecture 9 Least Square Problems Consider the least square problem Ax b (β 1,,β n ) T, where A is an n m matrix The situation where b R(A) is of particular interest: often there is a vectors
More informationSOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS. Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA
1 SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA 2 OUTLINE Sparse matrix storage format Basic factorization
More informationChapter 7. Iterative methods for large sparse linear systems. 7.1 Sparse matrix algebra. Large sparse matrices
Chapter 7 Iterative methods for large sparse linear systems In this chapter we revisit the problem of solving linear systems of equations, but now in the context of large sparse systems. The price to pay
More informationThe Eigenvalue Problem: Perturbation Theory
Jim Lambers MAT 610 Summer Session 2009-10 Lecture 13 Notes These notes correspond to Sections 7.2 and 8.1 in the text. The Eigenvalue Problem: Perturbation Theory The Unsymmetric Eigenvalue Problem Just
More informationRecycling Bi-Lanczos Algorithms: BiCG, CGS, and BiCGSTAB
Recycling Bi-Lanczos Algorithms: BiCG, CGS, and BiCGSTAB Kapil Ahuja Thesis submitted to the Faculty of the Virginia Polytechnic Institute and State University in partial fulfillment of the requirements
More information