Solution of eigenvalue problems. Subspace iteration, the symmetric Lanczos algorithm. Harmonic Ritz values, Jacobi-Davidson's method


1 Solution of eigenvalue problems
Introduction / motivation
Projection methods for eigenvalue problems
Subspace iteration; the symmetric Lanczos algorithm
Nonsymmetric Lanczos procedure; implicit restarts
Harmonic Ritz values; Jacobi-Davidson's method

2 Origins of Eigenvalue Problems
Structural engineering [Ku = λMu]
Electronic structure calculations [Schrödinger equation, ...]
Stability analysis [e.g., electrical networks, mechanical systems, ...]
Bifurcation analysis [e.g., in fluid flow]
Large sparse eigenvalue problems are among the most demanding calculations (in terms of CPU time) in scientific computing.

3 New application in information technology
Search engines (Google) rank web sites in order to improve searches.
The Google toolbar on some browsers gives a measure of the relevance of a page.
The problem can be formulated as a Markov chain: seek the dominant eigenvector.
Algorithm used: the power method.

4 The Problem
We consider the eigenvalue problem Ax = λx or Ax = λBx.
Typically: B is symmetric (semi) positive definite; A is symmetric or nonsymmetric.
Requirements vary:
Compute a few λ_i's with smallest or largest real parts;
Compute all λ_i's in a certain region of C;
Compute a few of the dominant eigenvalues;
Compute all λ_i's.

5 Types of problems
* Standard Hermitian (or symmetric real): Ax = λx, A^H = A
* Standard non-Hermitian: Ax = λx, A^H ≠ A
* Generalized: Ax = λBx. Several distinct sub-cases (B SPD, B SSPD, B singular with large null space, both A and B singular, etc.)
* Quadratic: (A + λB + λ²C)x = 0
* Nonlinear: A(λ)x = 0

6 General Tools for Solving Large Eigen-Problems
Projection techniques: Arnoldi, Lanczos, subspace iteration;
Preconditionings: shift-and-invert, polynomials, ...
Deflation and restarting techniques.
Good computational codes combine these 3 ingredients.

7 A few popular solution methods
Subspace iteration [now less popular; sometimes used for validation]
Arnoldi's method (or Lanczos) with polynomial acceleration [Stiefel '58, Rutishauser '62, YS '84, '85, Sorensen '89, ...]
Shift-and-invert and other preconditioners [use Arnoldi or Lanczos for (A - σI)^(-1)]
Davidson's method and variants, generalized Davidson's method [Morgan and Scott '89], Jacobi-Davidson

8 Projection Methods for Eigenvalue Problems
General formulation: projection method onto K orthogonal to L.
Given: two subspaces K and L of the same dimension.
Find: λ̃, ũ such that λ̃ ∈ C, ũ ∈ K; (λ̃I - A)ũ ⊥ L.
Two types of methods:
Orthogonal projection methods: the situation when L = K.
Oblique projection methods: when L ≠ K.

9 Rayleigh-Ritz projection
Given: a subspace X known to contain good approximations to eigenvectors of A.
Question: how to extract good approximations to eigenvalues/eigenvectors from this subspace?
Answer: the Rayleigh-Ritz process. Let Q = [q_1, ..., q_m] be an orthonormal basis of X. Then write an approximation in the form ũ = Qy and obtain y by writing
Q^H (A - λ̃I) ũ = 0  ⇒  Q^H A Q y = λ̃ y

10 Procedure:
1. Obtain an orthonormal basis Q of X
2. Compute C = Q^H A Q (an m × m matrix)
3. Obtain the Schur factorization of C: C = Y R Y^H
4. Compute Ũ = QY
Property: if X is (exactly) invariant, then the procedure yields exact eigenvalues and eigenvectors.
Proof: Since X is invariant, (A - λ̃I)ũ = Qz for a certain z. The condition Q^H (A - λ̃I)ũ = 0 then gives Q^H Qz = z = 0, and therefore (A - λ̃I)ũ = 0.
Can use this procedure in conjunction with the subspace obtained from the subspace iteration algorithm.
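The four-step procedure above translates almost line for line into NumPy. A minimal sketch (function and variable names are ours, not from the notes):

```python
# Rayleigh-Ritz extraction from a given subspace (illustrative sketch).
import numpy as np
from scipy.linalg import schur

def rayleigh_ritz(A, X):
    """Extract Ritz approximations of A from the span of X's columns."""
    Q, _ = np.linalg.qr(X)             # step 1: orthonormal basis of span(X)
    C = Q.conj().T @ A @ Q             # step 2: projected m x m matrix C = Q^H A Q
    R, Y = schur(C, output='complex')  # step 3: Schur factorization C = Y R Y^H
    U = Q @ Y                          # step 4: approximate Schur vectors of A
    return np.diag(R), U               # Ritz values and corresponding vectors

# Quick check of the invariance property: an exactly invariant subspace
# yields exact eigenvalues.
A = np.diag([1.0, 2.0, 3.0, 4.0])
X = np.eye(4)[:, :2]                   # invariant subspace for eigenvalues 1, 2
vals, U = rayleigh_ritz(A, X)
print(np.sort(vals.real))              # -> [1. 2.]
```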

11 Subspace Iteration
Original idea: projection technique onto a subspace of the form Y = A^k X.
In practice: replace A^k by a suitable polynomial [Chebyshev].
Advantages: easy to implement (in the symmetric case); easy to analyze.
Disadvantage: slow.
Often used with polynomial acceleration: A^k X replaced by C_k(A)X. Typically C_k = Chebyshev polynomial.

12 Algorithm: Subspace Iteration with Projection
1. Start: Choose an initial system of vectors X = [x_0, ..., x_m] and an initial polynomial C_k.
2. Iterate: Until convergence do:
(a) Compute Ẑ = C_k(A) X_old.
(b) Orthonormalize Ẑ into Z.
(c) Compute B = Z^H A Z and use the QR algorithm to compute the Schur vectors Y = [y_1, ..., y_m] of B.
(d) Compute X_new = ZY.
(e) Test for convergence. If satisfied stop. Else select a new polynomial C_k and continue.
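A minimal NumPy sketch of this algorithm, with the polynomial C_k(A) replaced by plain powers A^k for simplicity (a Chebyshev polynomial would be used in practice); the stopping test and names are our choices:

```python
# Subspace iteration with projection (sketch; C_k(A) stands in as A^k).
import numpy as np
from scipy.linalg import schur

def subspace_iteration(A, m, k=5, maxit=100, tol=1e-10):
    n = A.shape[0]
    X = np.linalg.qr(np.random.randn(n, m))[0]
    for _ in range(maxit):
        Z = X
        for _ in range(k):                 # (a) Z = A^k X_old
            Z = A @ Z
        Z, _ = np.linalg.qr(Z)             # (b) orthonormalize Z
        B = Z.conj().T @ A @ Z             # (c) projected matrix ...
        R, Y = schur(B, output='complex')  #     ... and its Schur vectors
        X = Z @ Y                          # (d) X_new = Z Y
        res = np.linalg.norm(A @ X[:, 0] - R[0, 0] * X[:, 0])
        if res < tol:                      # (e) convergence test, first pair
            break
    return np.diag(R), X
```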

13 THEOREM: Let S_0 = span{x_1, x_2, ..., x_m} and assume that S_0 is such that the vectors {P x_i}_{i=1,...,m} are linearly independent, where P is the spectral projector associated with λ_1, ..., λ_m. Let P_k be the orthogonal projector onto the subspace S_k = span{X_k}. Then for each eigenvector u_i of A, i = 1, ..., m, there exists a unique vector s_i in the subspace S_0 such that P s_i = u_i. Moreover, the following inequality is satisfied:
‖(I - P_k) u_i‖_2 ≤ ‖u_i - s_i‖_2 [ |λ_{m+1} / λ_i|^k + ε_k ],   (1)
where ε_k tends to zero as k tends to infinity.

14 Krylov subspace methods
Principle: projection methods on Krylov subspaces, i.e., on
K_m(A, v_1) = span{v_1, Av_1, ..., A^{m-1} v_1}
Probably the most important class of projection methods [for linear systems and for eigenvalue problems].
Many variants exist depending on the subspace L.
Properties of K_m. Let µ = degree of the minimal polynomial of v_1. Then:
K_m = {p(A)v_1 | p = polynomial of degree ≤ m - 1}
K_m = K_µ for all m ≥ µ. Moreover, K_µ is invariant under A.
dim(K_m) = m iff µ ≥ m.

15 Arnoldi's Algorithm
Goal: to compute an orthogonal basis of K_m.
Input: initial vector v_1 with ‖v_1‖_2 = 1, and m.
ALGORITHM 1: Arnoldi's procedure
For j = 1, ..., m do
  Compute w := A v_j
  For i = 1, ..., j do
    h_{i,j} := (w, v_i)
    w := w - h_{i,j} v_i
  End
  h_{j+1,j} := ‖w‖_2 ;  v_{j+1} := w / h_{j+1,j}
End
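A direct NumPy transcription of this procedure, returning V_{m+1} and the (m+1) × m Hessenberg matrix discussed on the next slide (a sketch; dense A for clarity):

```python
# Arnoldi procedure (Algorithm 1), sketched in numpy.
import numpy as np

def arnoldi(A, v1, m):
    n = len(v1)
    V = np.zeros((n, m + 1), dtype=complex)
    H = np.zeros((m + 1, m), dtype=complex)   # H-bar: (m+1) x m upper Hessenberg
    V[:, 0] = v1 / np.linalg.norm(v1)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                # orthogonalize w against v_1..v_j
            H[i, j] = np.vdot(V[:, i], w)     # h_{i,j} = v_i^H w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] == 0:                  # happy breakdown: K_j is invariant
            return V[:, :j + 1], H[:j + 2, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```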

16 Result of Arnoldi's algorithm
Let H̄_m be the (m+1) × m upper Hessenberg matrix built from the h_{i,j} (nonzero only for i ≤ j + 1),
H̄_m =
[ x x x x
  x x x x
    x x x
      x x
        x ]
and H_m = H̄_m(1 : m, 1 : m). Then:
1. V_m = [v_1, v_2, ..., v_m] is an orthonormal basis of K_m.
2. A V_m = V_{m+1} H̄_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^T
3. V_m^T A V_m = H_m = H̄_m minus its last row.

17 Application to eigenvalue problems
Write the approximate eigenvector as ũ = V_m y.
Galerkin condition: (A - λ̃I) V_m y ⊥ K_m  ⇒  V_m^H (A - λ̃I) V_m y = 0
Approximate eigenvalues are the eigenvalues of H_m: H_m y_j = λ̃_j y_j
Associated approximate eigenvectors are ũ_j = V_m y_j
Typically a few of the outermost eigenvalues converge first.

18 Restarted Arnoldi
In practice: the memory requirement of the algorithm implies that restarting is necessary.
Restarted Arnoldi for computing the rightmost eigenpair:
ALGORITHM 2: Restarted Arnoldi
1. Start: Choose an initial vector v_1 and a dimension m.
2. Iterate: Perform m steps of Arnoldi's algorithm.
3. Restart: Compute the approximate eigenvector u_1^(m) associated with the rightmost eigenvalue λ_1^(m).
4. If satisfied stop, else set v_1 := u_1^(m) and go to 2.
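A sketch of the restart loop, reusing the arnoldi() function from the sketch above; the tolerance and maximum restart count are our choices:

```python
# Restarted Arnoldi for the rightmost eigenpair (Algorithm 2); sketch.
import numpy as np

def restarted_arnoldi(A, v1, m, maxrestarts=50, tol=1e-8):
    v = v1 / np.linalg.norm(v1)
    for _ in range(maxrestarts):
        V, Hbar = arnoldi(A, v, m)            # step 2: m Arnoldi steps
        Hm = Hbar[:m, :m]
        vals, Y = np.linalg.eig(Hm)
        j = np.argmax(vals.real)              # rightmost Ritz value
        u = V[:, :m] @ Y[:, j]                # approximate eigenvector u_1^(m)
        if np.linalg.norm(A @ u - vals[j] * u) < tol:
            return vals[j], u                 # step 4: satisfied, stop
        v = u / np.linalg.norm(u)             # else restart with v_1 := u_1^(m)
    return vals[j], u
```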

19 Example: Small Markov chain matrix [Mark(10), dimension = 55]. Restarted Arnoldi procedure for computing the eigenvector associated with the eigenvalue with algebraically largest real part. We use m = 10.
[Table: m, R(λ), I(λ), Res. Norm per restart; the numerical entries are not recoverable.]

20 Restarted Arnoldi (cont.)
Can be generalized to more than *one* eigenvector:
v_1^(new) = Σ_{i=1}^{p} ρ_i u_i^(m)
However: often does not work well (hard to find good coefficients ρ_i).
Alternative: compute the eigenvectors (actually Schur vectors) one at a time. Implicit deflation.

21 Deflation
Very useful in practice. Different forms: locking (subspace iteration), selective orthogonalization (Lanczos), Schur deflation, ...
A little background. Consider the Schur canonical form A = U R U^H, where U is unitary and R is (complex) upper triangular. The columns u_1, ..., u_n of U are called Schur vectors.
Note: Schur vectors depend on each other, and on the order of the eigenvalues.

22 Wielandt Deflation: Assume we have computed a right eigenpair λ_1, u_1. Wielandt deflation considers the eigenvalues of
A_1 = A - σ u_1 v^H
Note: Λ(A_1) = {λ_1 - σ, λ_2, ..., λ_n}
Wielandt deflation preserves u_1 as an eigenvector, as well as all the left eigenvectors not associated with λ_1.
An interesting choice for v is to take simply v = u_1. In this case Wielandt deflation preserves Schur vectors as well.
Can apply the above procedure successively.

23 ALGORITHM 3: Explicit Deflation
1. A_0 = A
2. For j = 0, ..., µ - 1 Do:
3.   Compute a dominant eigenvector u_j of A_j
4.   Define A_{j+1} = A_j - σ_j u_j u_j^H
5. End
The computed u_1, u_2, ... form a set of Schur vectors for A.
Alternative: implicit deflation (within a procedure such as Arnoldi).
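A sketch of this deflation loop, with the power method standing in for the "dominant eigenvector" step (any eigensolver could be substituted); all names are ours:

```python
# Explicit (Wielandt) deflation with v = u_j, per Algorithm 3; sketch.
import numpy as np

def power_method(A, maxit=1000, tol=1e-10):
    x = np.random.randn(A.shape[0]); x = x / np.linalg.norm(x)
    for _ in range(maxit):
        y = A @ x
        lam = np.vdot(x, y)                       # Rayleigh quotient estimate
        if np.linalg.norm(y - lam * x) < tol:
            break
        x = y / np.linalg.norm(y)
    return lam, x

def explicit_deflation(A, mu, sigma):
    """Compute mu Schur vectors by successive rank-one deflation."""
    Aj = A.astype(complex)
    U = []
    for _ in range(mu):
        lam, u = power_method(Aj)                 # dominant eigenvector of A_j
        Aj = Aj - sigma * np.outer(u, u.conj())   # A_{j+1} = A_j - sigma u u^H
        U.append(u)
    return np.column_stack(U)
```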

24 Deflated Arnoldi
When the first eigenvector converges, put it in the 1st column of V_m = [v_1, v_2, ..., v_m]. Arnoldi will now start at column 2, still orthogonalizing against v_1, ..., v_j at step j.
Accumulate each new converged eigenvector in columns 2, 3, ... [the "locked" set of eigenvectors].
Thus, for k = 2: V_m = [v_1, v_2 | v_3, ..., v_m], with v_1, v_2 locked and v_3, ..., v_m active; the corresponding H_m has its first k columns in triangular form, the remainder upper Hessenberg.

25 Similar techniques in subspace iteration [G. Stewart's SRRIT]
Example: Matrix Mark(10), the small Markov chain matrix (N = 55). First eigenpair by iterative Arnoldi with m = 10.
[Table: m, Re(λ), Im(λ), Res. Norm; the numerical entries are not recoverable.]

26 Computing the next 2 eigenvalues of Mark(10).
[Table: Eig., Mat-Vec's, Re(λ), Im(λ), Res. Norm; the numerical entries are not recoverable.]

27 Hermitian case: The Lanczos Algorithm
The Hessenberg matrix becomes tridiagonal: A = A^H and V_m^H A V_m = H_m, with H_m = H_m^H. We can write
H_m =
[ α_1  β_2
  β_2  α_2  β_3
       β_3  α_3  β_4
            ...  ...  ...
                 β_m  α_m ]   (2)
Consequence: three-term recurrence
β_{j+1} v_{j+1} = A v_j - α_j v_j - β_j v_{j-1}

28 ALGORITHM 4: Lanczos
1. Choose v_1 of unit norm. Set β_1 ≡ 0, v_0 ≡ 0
2. For j = 1, 2, ..., m Do:
3.   w_j := A v_j - β_j v_{j-1}
4.   α_j := (w_j, v_j)
5.   w_j := w_j - α_j v_j
6.   β_{j+1} := ‖w_j‖_2. If β_{j+1} = 0 then Stop
7.   v_{j+1} := w_j / β_{j+1}
8. EndDo
Hermitian matrix + Arnoldi ⇒ Hermitian Lanczos.
In theory the v_i's defined by the 3-term recurrence are orthogonal. However, in practice there is severe loss of orthogonality.
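A NumPy sketch of Algorithm 4 for real symmetric A, with an optional full-reorthogonalization switch anticipating the discussion on the next slide:

```python
# Hermitian Lanczos (Algorithm 4) for real symmetric A; sketch.
import numpy as np

def lanczos(A, v1, m, full_reorth=False):
    n = len(v1)
    V = np.zeros((n, m + 1)); alpha = np.zeros(m); beta = np.zeros(m + 1)
    V[:, 0] = v1 / np.linalg.norm(v1)
    for j in range(m):
        w = A @ V[:, j] - (beta[j] * V[:, j - 1] if j > 0 else 0)
        alpha[j] = np.dot(w, V[:, j])
        w = w - alpha[j] * V[:, j]
        if full_reorth:                       # re-project against all previous v_i
            w = w - V[:, :j + 1] @ (V[:, :j + 1].T @ w)
        beta[j + 1] = np.linalg.norm(w)
        if beta[j + 1] == 0:                  # invariant subspace found
            break
        V[:, j + 1] = w / beta[j + 1]
    # Assemble the tridiagonal matrix (2) from the computed coefficients.
    T = np.diag(alpha) + np.diag(beta[1:m], 1) + np.diag(beta[1:m], -1)
    return V[:, :m], T
```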

29 Lanczos with reorthogonalization
Observation [Paige, 1981]: loss of orthogonality starts suddenly, when the first eigenpair converges. It indicates loss of linear independence of the v_i's. When orthogonality is lost, several copies of the same eigenvalue start appearing.
Full reorthogonalization: reorthogonalize v_{j+1} against all previous v_i's every time.
Partial reorthogonalization: reorthogonalize v_{j+1} against all previous v_i's only when needed [Parlett & Simon].
Selective reorthogonalization: reorthogonalize v_{j+1} against computed eigenvectors [Parlett & Scott].
No reorthogonalization: do not reorthogonalize, but take measures to deal with spurious eigenvalues [Cullum & Willoughby].

30 Partial reorthogonalization
Partial reorthogonalization: reorthogonalize only when deemed necessary. The main question is: when?
Uses an inexpensive recurrence relation.
Work done in the 80s [Parlett, Simon, and co-workers] + more recent work [Larsen '98].
Package: PROPACK [Larsen]. V 1: 2001; most recent: V 2.1 (Apr. '05).
Often the need for reorthogonalization is not too strong.

31 The Lanczos Algorithm in the Hermitian Case
Assume the eigenvalues are sorted increasingly: λ_1 ≤ λ_2 ≤ ... ≤ λ_n.
Orthogonal projection method onto K_m.
To derive error bounds, use the Courant characterization:
λ̃_1 = min_{u ∈ K, u ≠ 0} (Au, u)/(u, u) = (Aũ_1, ũ_1)/(ũ_1, ũ_1)
λ̃_j = min_{u ∈ K, u ≠ 0, u ⊥ ũ_1, ..., ũ_{j-1}} (Au, u)/(u, u) = (Aũ_j, ũ_j)/(ũ_j, ũ_j)

32 Bounds for λ_1 are easy to find: similar to linear systems.
The Ritz values approximate the eigenvalues of A inside out:
λ_1 ≤ λ̃_1 ≤ λ̃_2 ≤ ... ≤ λ̃_{m-1} ≤ λ̃_m ≤ λ_n

33 A-priori error bounds
Theorem [Kaniel, 1966]:
0 ≤ λ_1^(m) - λ_1 ≤ (λ_N - λ_1) [ tan ∠(v_1, u_1) / T_{m-1}(1 + 2γ_1) ]²
where γ_1 = (λ_2 - λ_1)/(λ_N - λ_2), and ∠(v_1, u_1) = angle between v_1 and u_1.
+ results for other eigenvalues [Kaniel, Paige, YS].
Theorem:
0 ≤ λ_i^(m) - λ_i ≤ (λ_N - λ_1) [ κ_i^(m) tan ∠(v_1, u_i) / T_{m-i}(1 + 2γ_i) ]²
where γ_i = (λ_{i+1} - λ_i)/(λ_N - λ_{i+1}) and κ_i^(m) = Π_{j<i} (λ_j^(m) - λ_N)/(λ_j^(m) - λ_i).

34 The Lanczos biorthogonalization (A^H ≠ A)
ALGORITHM 5: Lanczos bi-orthogonalization
1. Choose two vectors v_1, w_1 such that (v_1, w_1) = 1
2. Set β_1 = δ_1 ≡ 0, w_0 = v_0 ≡ 0
3. For j = 1, 2, ..., m Do:
4.   α_j = (A v_j, w_j)
5.   v̂_{j+1} = A v_j - α_j v_j - β_j v_{j-1}
6.   ŵ_{j+1} = A^T w_j - α_j w_j - δ_j w_{j-1}
7.   δ_{j+1} = |(v̂_{j+1}, ŵ_{j+1})|^{1/2}. If δ_{j+1} = 0 Stop
8.   β_{j+1} = (v̂_{j+1}, ŵ_{j+1}) / δ_{j+1}
9.   w_{j+1} = ŵ_{j+1} / β_{j+1}
10.  v_{j+1} = v̂_{j+1} / δ_{j+1}
11. EndDo
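A sketch of this bi-orthogonalization for real A (so A^H = A^T), using the δ/β splitting of lines 7-8, which is one of the admissible choices noted on the next slide; names are ours:

```python
# Two-sided (nonsymmetric) Lanczos, Algorithm 5; sketch for real A.
import numpy as np

def bi_lanczos(A, v1, w1, m):
    n = len(v1)
    s = np.dot(v1, w1)
    v, w = v1 / s, w1.copy()                 # scale so that (v_1, w_1) = 1
    v_old = w_old = np.zeros(n)
    alpha = np.zeros(m); beta = np.zeros(m + 1); delta = np.zeros(m + 1)
    V = np.zeros((n, m)); W = np.zeros((n, m))
    for j in range(m):
        V[:, j], W[:, j] = v, w
        alpha[j] = np.dot(A @ v, w)
        vhat = A @ v - alpha[j] * v - beta[j] * v_old
        what = A.T @ w - alpha[j] * w - delta[j] * w_old
        s = np.dot(vhat, what)
        if s == 0:                           # breakdown (lucky or serious)
            break
        delta[j + 1] = np.sqrt(abs(s))       # line 7
        beta[j + 1] = s / delta[j + 1]       # line 8
        v_old, w_old = v, w
        v, w = vhat / delta[j + 1], what / beta[j + 1]
    # Tridiagonal T_m: beta on the super-, delta on the sub-diagonal.
    T = np.diag(alpha) + np.diag(beta[1:m], 1) + np.diag(delta[1:m], -1)
    return V, W, T
```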

35 Builds a pair of biorthogonal bases for the two subspaces K_m(A, v_1) and K_m(A^H, w_1).
Many choices for δ_{j+1}, β_{j+1} in lines 7 and 8. The only constraint: δ_{j+1} β_{j+1} = (v̂_{j+1}, ŵ_{j+1}).
Let
T_m =
[ α_1  β_2
  δ_2  α_2  β_3
       ...  ...  ...
            δ_{m-1}  α_{m-1}  β_m
                     δ_m      α_m ]
v_i ∈ K_m(A, v_1) and w_j ∈ K_m(A^T, w_1).

36 If the algorithm does not break down before step m, then the vectors v_i, i = 1, ..., m, and w_j, j = 1, ..., m, are biorthogonal, i.e.,
(v_j, w_i) = δ_ij,  1 ≤ i, j ≤ m.
Moreover, {v_i}_{i=1,...,m} is a basis of K_m(A, v_1) and {w_i}_{i=1,...,m} is a basis of K_m(A^H, w_1), and
A V_m = V_m T_m + δ_{m+1} v_{m+1} e_m^H
A^H W_m = W_m T_m^H + β_{m+1} w_{m+1} e_m^H
W_m^H A V_m = T_m

37 If θ_j is an eigenvalue of T_m, with associated right and left eigenvectors y_j and z_j respectively, then the corresponding approximations for A are:
Ritz value: θ_j
Right Ritz vector: V_m y_j
Left Ritz vector: W_m z_j
[Note: the terminology is abused slightly; Ritz values and vectors normally refer to the Hermitian case.]

38 Advantages and disadvantages
Advantages:
Nice three-term recurrence: requires little storage, in theory.
Computes left and right eigenvectors at the same time.
Disadvantages:
Algorithm can break down or nearly break down.
Convergence not too well understood; erratic behavior.
Not easy to take advantage of the tridiagonal form of T_m.

39 Look-ahead Lanczos
The algorithm breaks down when (v̂_{j+1}, ŵ_{j+1}) = 0. Three distinct situations:
"Lucky breakdown": either v̂_{j+1} or ŵ_{j+1} is zero. In this case the eigenvalues of T_m are eigenvalues of A.
"Serious breakdown": (v̂_{j+1}, ŵ_{j+1}) = 0 but v̂_{j+1} ≠ 0, ŵ_{j+1} ≠ 0. Often possible to bypass the step (+ a few more) and continue the algorithm.
If this is not possible then we get an "incurable breakdown". [Very rare.]

40 Look-ahead Lanczos algorithms deal with the second case. See Parlett '80, Freund and Nachtigal.
Main idea: when breakdown occurs, skip the computation of v_{j+1}, w_{j+1} and define v_{j+2}, w_{j+2} from v_j, w_j, for example by orthogonalizing A² v_j, ... One can define v_{j+1} somewhat arbitrarily as v_{j+1} = A v_j; similarly for w_{j+1}.
Drawbacks: (1) the projected problem is no longer tridiagonal; (2) it is difficult to know what constitutes a near-breakdown.

41 Preconditioning eigenvalue problems
Goal: to extract good approximations to add to a subspace in a projection process. Result: faster convergence.
Best known technique: shift-and-invert. Work with B = (A - σI)^(-1).
Some success with polynomial preconditioning [Chebyshev iteration / least-squares polynomials]. Work with B = p(A).
The above preconditioners preserve eigenvectors. Other methods (Davidson) use a more general preconditioner M.

42 Shift-and-invert preconditioning
Main idea: use Arnoldi, or Lanczos, or subspace iteration for the matrix B = (A - σI)^(-1). The matrix B need not be computed explicitly: each time we need to apply B to a vector, we solve a system with A - σI.
Factor A - σI = LU. Then each solution x = By requires solving Lz = y and Ux = z.
How to deal with complex shifts? If A is complex, work in complex arithmetic. If A is real, then instead of (A - σI)^(-1) use
Re (A - σI)^(-1) = ½ [ (A - σI)^(-1) + (A - σ̄I)^(-1) ]
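A sketch of this idea with SciPy: factor A - σI once, wrap the triangular solves as a linear operator, and hand B to an Arnoldi-based eigensolver (here ARPACK via scipy.sparse.linalg.eigs). The test matrix and shift are arbitrary illustrations:

```python
# Shift-and-invert via one sparse LU factorization; sketch.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def shift_invert_operator(A, sigma):
    """Return a LinearOperator applying (A - sigma I)^{-1} through LU solves."""
    n = A.shape[0]
    lu = spla.splu(sp.csc_matrix(A - sigma * sp.identity(n)))  # factor once
    return spla.LinearOperator((n, n), matvec=lu.solve, dtype=A.dtype)

# Dominant eigenvalues of B correspond to eigenvalues of A nearest sigma.
A = sp.random(500, 500, density=0.01, format='csc') + 10 * sp.identity(500)
sigma = 9.5
B = shift_invert_operator(A, sigma)
mu, X = spla.eigs(B, k=3)          # Arnoldi on B, never forming B explicitly
print(sigma + 1 / mu)              # recover eigenvalues of A nearest the shift
```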

43 Preconditioning by polynomials
Main idea: iterate with p(A) instead of A in Arnoldi or Lanczos, ...
Used very early on in subspace iteration [Rutishauser, 1959].
Usually not as reliable as shift-and-invert techniques, but less demanding in terms of storage.

44 Question: How to find a good polynomial (dynamically)?
Approaches:
1. Chebyshev polynomials over ellipses
2. Polynomials based on Leja points
3. Least-squares polynomials over polygons
4. Polynomials from previous Arnoldi decompositions

45 Polynomial filters and implicit restart
Goal: exploit the Arnoldi procedure to apply a polynomial filter of the form
p(t) = (t - θ_1)(t - θ_2) ... (t - θ_q)
Assume A V_m = V_m H_m + v̂_{m+1} e_m^T and consider the first factor, (t - θ_1):
(A - θ_1 I) V_m = V_m (H_m - θ_1 I) + v̂_{m+1} e_m^T
Let H_m - θ_1 I = Q_1 R_1. Then:
(A - θ_1 I) V_m = V_m Q_1 R_1 + v̂_{m+1} e_m^T
(A - θ_1 I)(V_m Q_1) = (V_m Q_1) R_1 Q_1 + v̂_{m+1} e_m^T Q_1
A (V_m Q_1) = (V_m Q_1)(R_1 Q_1 + θ_1 I) + v̂_{m+1} e_m^T Q_1

46 Notation: R_1 Q_1 + θ_1 I ≡ H_m^(1);  e_m^T Q_1 ≡ (b_{m+1}^(1))^T;  V_m Q_1 ≡ V_m^(1). Then:
A V_m^(1) = V_m^(1) H_m^(1) + v̂_{m+1} (b_{m+1}^(1))^T
Note that H_m^(1) is upper Hessenberg. Similar to an Arnoldi decomposition.
Observe:
R_1 Q_1 + θ_1 I is the matrix resulting from one step of the QR algorithm with shift θ_1 applied to H_m.
The first column of V_m^(1) is a multiple of (A - θ_1 I) v_1.
The columns of V_m^(1) are orthonormal.

47 Can now apply the second shift in the same way:
(A - θ_2 I) V_m^(1) = V_m^(1) (H_m^(1) - θ_2 I) + v̂_{m+1} (b_{m+1}^(1))^T
Similar process: factor H_m^(1) - θ_2 I = Q_2 R_2, then multiply by Q_2 on the right:
(A - θ_2 I) V_m^(1) Q_2 = (V_m^(1) Q_2)(R_2 Q_2) + v̂_{m+1} (b_{m+1}^(1))^T Q_2
A V_m^(2) = V_m^(2) H_m^(2) + v̂_{m+1} (b_{m+1}^(2))^T
Now: the 1st column of V_m^(2) = scalar × (A - θ_2 I) v_1^(1) = scalar × (A - θ_2 I)(A - θ_1 I) v_1

48 Note that
(b_{m+1}^(2))^T = e_m^T Q_1 Q_2 = [0, 0, ..., 0, η_1, η_2, η_3]
Let V̂_{m-2} = [v̂_1, ..., v̂_{m-2}] consist of the first m - 2 columns of V_m^(2), and let Ĥ_{m-2} = H_m^(2)(1 : m-2, 1 : m-2). Then
A V̂_{m-2} = V̂_{m-2} Ĥ_{m-2} + β̂_{m-1} v̂_{m-1} e_{m-2}^T
with β̂_{m-1} v̂_{m-1} ≡ η_1 v̂_{m+1} + h^(2)_{m-1,m-2} v^(2)_{m-1},  ‖v̂_{m-1}‖_2 = 1.
Result: an Arnoldi process of m - 2 steps with the initial vector p(A) v_1.
In other words: we know how to apply polynomial filtering via a form of the Arnoldi process, combined with the QR algorithm.
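The derivation of slides 45-48 fits in a few lines of NumPy. A sketch that applies q shifts to an Arnoldi decomposition and returns the truncated, filtered decomposition; the function and variable names are ours:

```python
# Implicit polynomial filtering of an Arnoldi decomposition via shifted QR
# steps; sketch. Input: V (n x (m+1)) and Hbar ((m+1) x m) as returned by
# the arnoldi() sketch earlier, so that A V_m = V_m H_m + vhat e_m^T.
import numpy as np

def filter_arnoldi(V, Hbar, shifts):
    m = Hbar.shape[1]
    H = Hbar[:m, :m].astype(complex)
    vhat = V[:, m] * Hbar[m, m - 1]            # residual vector vhat
    Vm = V[:, :m].astype(complex)
    b = np.zeros(m, dtype=complex); b[m - 1] = 1.0   # tracks e_m^T Q_1 Q_2 ...
    for theta in shifts:
        Q, R = np.linalg.qr(H - theta * np.eye(m))
        H = R @ Q + theta * np.eye(m)          # one QR step with shift theta
        Vm = Vm @ Q
        b = b @ Q
    k = m - len(shifts)                        # keep the leading m - q columns
    beta = H[k, k - 1] * Vm[:, k] + b[k - 1] * vhat
    return Vm[:, :k], H[:k, :k], beta          # A V_k = V_k H_k + beta e_k^T
```

The returned triple is a length-(m - q) Arnoldi decomposition whose starting vector is proportional to p(A)v_1, so Arnoldi can simply be resumed from column k, exactly as the slide concludes.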
