A Model-Trust-Region Framework for Symmetric Generalized Eigenvalue Problems


C. G. Baker    P.-A. Absil    K. A. Gallivan

Technical Report FSU-SCS
Submitted June 7, 2005

Abstract

A general inner-outer iteration for computing extreme eigenpairs of symmetric/positive-definite matrix pencils is proposed. The principle of the method is to produce a sequence of p-dimensional bases {X_k} that converge to a minimizer of a generalized Rayleigh quotient. The role of the inner iteration is to produce an update vector by (approximately) minimizing a quadratic model of the Rayleigh quotient within a neighbourhood of X_k where the model is trusted. The role of the outer iteration is to make the best out of the proposed update vector combined with previously-obtained information; it consists of a Rayleigh-Ritz process that minimizes the exact Rayleigh quotient in a low-dimensional subspace. This general scheme leaves a lot of leeway for choosing the algorithmic details of the inner and outer iterations. The global and local convergence of the scheme are analytically studied under weak assumptions on these algorithmic choices. Moreover, numerous experiments are carried out to explore the influence of the algorithmic choices on the performance of the scheme. In particular, the question of balancing the computational effort between the inner and outer iterations is investigated.

School of Computational Science, Florida State University, Tallahassee, FL, USA ({cbaker, absil, gallivan}). This work was supported by the USA National Science Foundation under Grants ACI and CCR, and by the School of Computational Science of Florida State University through a postdoctoral fellowship.

1 Introduction

The generalized symmetric eigenvalue problem occurs in many applications in the engineering and scientific disciplines. Given a symmetric matrix A and a symmetric positive definite matrix B, the generalized symmetric eigenvalue problem seeks to find nonzero vectors x and scalars λ satisfying Ax = λBx. Here λ is an eigenvalue and x is an eigenvector associated with λ. Let λ_1 ≤ λ_2 ≤ ... ≤ λ_n be the eigenvalues of (A,B), where n is the order of the two matrices. It is known that the eigenvalues and their associated eigenvectors satisfy optimality conditions with respect to the generalized Rayleigh quotient. That is, the function f : R_*^{n×p} → R, where R_*^{n×p} is the set of all rank-p n×p matrices,

    f(X) = trace((X^T B X)^{-1} (X^T A X))    (1)

is minimized by any matrix whose column space corresponds to the eigenspace associated with the leftmost eigenvalues of (A,B). The minimum value is the sum of the p leftmost eigenvalues. It is easily verified that the function varies only with the column space of X. It follows that f can be restricted to the set of all p-dimensional subspaces of R^n. This set is termed the Grassmann manifold.

Absil et al. [ABG04c, ABG05] recently described the Riemannian Trust-Region (RTR) algorithm. The RTR algorithm is a method for finding the extreme points of a function defined on a Riemannian manifold. Under mild regularity conditions, the method is proven to always converge to critical points. Furthermore, saddle points are unstable, so that the algorithm converges to a local minimum in practice. If the local minimum is nondegenerate, then the convergence is superlinear. Assume that λ_p < λ_{p+1}, so that the leftmost eigenspace is well-defined. Then there is a unique local minimizer of the generalized Rayleigh quotient (1) defined on the Grassmann manifold. Furthermore, the convergence conditions for the RTR algorithm are satisfied by the generalized Rayleigh quotient. Therefore, applying the RTR algorithm yields a superlinear method which converges to the leftmost eigenspace for virtually all initial conditions. The authors show [ABG04a] that the RTR algorithm, directly applied to this problem, is competitive with many of the specialized methods that have been developed for extreme eigenspace computation.
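As a concrete illustration of (1) (a minimal sketch, not taken from the paper; the random test pencil and the helper name rayleigh_quotient are assumptions), the following code evaluates the block Rayleigh quotient and checks numerically that a basis of the leftmost eigenspace attains the sum of the p leftmost eigenvalues.

import numpy as np
from scipy.linalg import eigh

def rayleigh_quotient(A, B, X):
    # f(X) = trace((X^T B X)^{-1} (X^T A X)); depends only on the column space of X
    return np.trace(np.linalg.solve(X.T @ B @ X, X.T @ A @ X))

rng = np.random.default_rng(0)
n, p = 100, 3
A = rng.standard_normal((n, n)); A = (A + A.T) / 2          # symmetric
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)  # symmetric positive definite

w, V = eigh(A, B)                      # generalized eigenpairs, ascending
X_opt = V[:, :p]                       # basis of the leftmost eigenspace
X_rand = rng.standard_normal((n, p))   # arbitrary full-rank basis

print(rayleigh_quotient(A, B, X_opt), w[:p].sum())  # these two values agree
print(rayleigh_quotient(A, B, X_rand))              # any other subspace gives a larger value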

The method has been shown [ABG04a] to be related to existing eigenvalue solvers, especially the Trace Minimization [SW82, ST00] and Jacobi-Davidson [SV96] methods. As such, techniques used by these methods, as well as other methods (such as LOBPCG [Kny01]), can be used to augment RTR. One technique is the idea of subspace acceleration, where a subspace is constructed and the next iterate is chosen using a Ritz-Rayleigh procedure. We consider two approaches for constructing these subspaces, coming from Jacobi-Davidson and LOBPCG. Another approach is that of adaptive methods. One method that will be explored in this paper is a Tracemin/RTR hybrid, which initially uses Tracemin before switching to RTR. Another method combines a LOBPCG-like subspace acceleration with RTR-like update vectors.

2 The general algorithm

Here points on the Grassmann manifold M are represented by orthonormal bases, so that X^T B X = I is assumed throughout. The tangent space to M at colsp(X) is represented by

    T_X := {η ∈ R^{n×p} : X^T B η = 0},

i.e., the set of n×p matrices whose columns are B-orthogonal to the column space of X. Let f̂_X denote the cost function (1) restricted to T_X:

    f̂_X(η) = f(X + η) = trace( ((X + η)^T B (X + η))^{-1} ((X + η)^T A (X + η)) )    (2)

The standard RTR [ABG04a] attempts to minimize at each iterate X a quadratic model m_X of f̂_X,

    m_X(η) = f̂(X) + trace(η^T grad f̂(X)) + (1/2) trace(η^T H_X[η]),    η ∈ T_X    (3)

within the current trust-region radius (trace(η^T η) ≤ Δ²). In (3), the first-order term is determined by grad f̂(X) = 2 P_{BX,BX} A X. The operator H_X is some approximation to the Hessian of f̂ at X, which need be available only as an operator on the tangent bundle. Possible choices for the model Hessian H_X are discussed in Section 5.
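To make the ingredients of (3) concrete, here is a minimal sketch (an illustration under the definitions above, not the authors' code) of the tangent-space projector P_{BX,BX}, the gradient grad f(X) = 2 P_{BX,BX} A X, and the model m_X(η) for a given model-Hessian operator H; the choices for H are discussed in Section 5.

import numpy as np

def proj(B, X, Z):
    # P_{BX,BX} Z = Z - BX (X^T B B X)^{-1} X^T B Z : projection onto the tangent space at X
    BX = B @ X
    return Z - BX @ np.linalg.solve(BX.T @ BX, BX.T @ Z)

def grad_f(A, B, X):
    # grad f(X) = 2 P_{BX,BX} A X
    return 2.0 * proj(B, X, A @ X)

def model(A, B, X, eta, H, f_X):
    # m_X(eta) = f(X) + trace(eta^T grad f(X)) + 0.5 trace(eta^T H[eta])
    return f_X + np.trace(eta.T @ grad_f(A, B, X)) + 0.5 * np.trace(eta.T @ H(eta))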

The performance of the quadratic model determines whether the trust-region radius is enlarged or reduced. When not performing subspace acceleration, the model performance determines whether the proposed iterate is accepted or not. The performance of the quadratic model is measured according to the following ratio:

    ρ(η) = (f(X) − f(X + η)) / (m_X(0) − m_X(η))

A general framework follows. We use gsb to denote a Gram-Schmidt orthonormalization using the B inner product.

Algorithm 1 (Subspace Accelerated RTR)
Data: Symmetric matrices A and B, B positive definite
Parameters: maximum rank of acceleration subspace m, initial trust-region radius Δ_0, model performance mark ρ′
Input: initial iterate X_0, X_0^T B X_0 = I_p.
Output: sequence of iterates {X_k}.
for k = 0, 1, 2, ...
  Part I: Model-based Minimization
    Compute η_k to approximately minimize m_{X_k} within the trust-region {η : trace(η^T η) ≤ Δ_k²}, using tCG (Section 4)
    Adjust trust-region radius:
    if ρ(η_k) > 3/4 and ||η_k|| = Δ_k
      Δ_{k+1} = 2 Δ_k
    else if ρ(η_k) < 1/4
      Δ_{k+1} = Δ_k / 4
    else
      Δ_{k+1} = Δ_k
    end
  Part II: Generate next iterate
    if performing subspace acceleration
      Update the acceleration subspaces using η_k (Section 3)
      Generate X_{k+1} using a Ritz-Rayleigh procedure
    else if ρ(η_k) > ρ′
      X_{k+1} = gsb(X_k + η_k)
    else
      X_{k+1} = X_k
    end
  Check grad f(X_{k+1}) for convergence
end for.
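A minimal sketch of the outer-iteration bookkeeping in Algorithm 1 follows: the trust-region radius update driven by ρ(η_k), and the accept/reject rule used when subspace acceleration is not performed. The default threshold rho_prime = 0.1 is an illustrative assumption, not a value taken from the paper.

def update_radius(rho, step_hit_boundary, delta):
    # standard trust-region radius update from Algorithm 1
    if rho > 0.75 and step_hit_boundary:
        return 2.0 * delta      # model did well and the step was restricted: enlarge
    if rho < 0.25:
        return delta / 4.0      # model did poorly: shrink
    return delta                # otherwise keep the current radius

def accept_step(rho, rho_prime=0.1):
    # accept X_{k+1} = gsb(X_k + eta_k) only if the model performed well enough
    return rho > rho_prime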

The issue remains of the method for solving the model trust-region minimization at each iteration, as well as the technique used to update the acceleration subspace. We will discuss the latter in Section 3 and the former in Section 4.

3 Subspace Acceleration Strategies

In this section, we propose practical techniques for carrying out the subspace acceleration procedure involved in Part II of Algorithm 1. We will discuss two methods from the literature for constructing the acceleration subspace, one from Jacobi-Davidson and another inspired by LOBPCG and the sequential subspace method of Hager [Hag01].

The first subspace acceleration strategy comes directly from Jacobi-Davidson. It is referred to here as Expanding Subspaces (ES), and can be described as follows. Take a basis U for a subspace U, which we desire to update using some η. Compute the new subspace U_+ = colsp([U, η]). If the subspace has already reached the maximum allowable size (m), then reset it to U_+ = colsp([X, η]), where X is the current iterate.

The second subspace acceleration strategy is a simple extension of the technique used in LOBPCG. Knyazev [Kny01] proposes a non-linear conjugate gradient approach which takes advantage of the nature of the generalized Rayleigh quotient to perform exact searches using a Ritz-Rayleigh procedure. At each step, the next iterate X_{k+1} is chosen using a Ritz-Rayleigh procedure with respect to the subspace given by colsp([X_k, H_k, P_k]), where H_k is the preconditioned residual (gradient) and P_k is an analogue of the CG search direction. The components of the primitive Ritz vectors define the coefficients that combine X_k, H_k and P_k to form X_{k+1}. The new preconditioned residual H_{k+1} is determined by X_{k+1}, and P_{k+1} is computed via a CG-like recurrence. This method is stated explicitly in Algorithm 2.

Algorithm 2 (LOBPCG)
Data: Symmetric matrices A and B, B positive definite, preconditioner T
Input: initial iterate X_0, X_0^T B X_0 = I_p.
Output: sequence of iterates {X_k}.
Initialize: P_0 = 0
for k = 0, 1, 2, ...
  Build search space
  for j = 1, ..., p
    Compute Rayleigh quotients µ_j = (x_j^T A x_j)/(x_j^T B x_j), where x_j = X_k e_j

    Compute residuals r_j = A x_j − µ_j B x_j
  end for
  Check r_j for convergence
  Apply preconditioner: H_k = [h_1, ..., h_p] = [T r_1, ..., T r_p]
  Set V_k = [X_k, H_k, P_k]
  Ritz-Rayleigh procedure
    Form Â = V_k^T A V_k and B̂ = V_k^T B V_k
    Compute the p smallest eigenpairs (u_j, λ̂_j) of (Â, B̂)
    X_{k+1} = V_k U_k, where U_k = [u_1, ..., u_p]
    P_{k+1} = [H_k, P_k] U_k(p+1 : 3p, :)
end for.

Note that although the first Ritz-Rayleigh process is carried out with P_0 = 0 on a 2p-dimensional subspace, subsequent steps occur on 3p-dimensional subspaces. Therefore, there is no restarting like in the Expanding Subspaces method. To our knowledge, this technique has only been applied using three blocks of vectors: X_k, H_k, and P_k. We propose a simple extension that makes use of previous search directions as well, so that the Ritz-Rayleigh process operates on a larger, m-dimensional subspace (m = bp, where b is the number of blocks). Without discussing here the nature of the vectors H_k (see Section 4), we describe one step of a subspace acceleration algorithm, which we refer to as Sequential Subspaces (SS).

Algorithm 3 (Sequential Subspace step)
Data: Symmetric matrices A and B, B positive definite
Input: Acceleration subspace basis V_k = [X_k, H_k, P_{k−b+3}, ..., P_k] of dimension m = bp.
Output: iterate X_{k+1}, X_{k+1}^T B X_{k+1} = I_p, and new search direction P_{k+1}
Ritz-Rayleigh procedure
  Form Â = V_k^T A V_k and B̂ = V_k^T B V_k
  Compute the p smallest eigenpairs (u_j, λ̂_j) of (Â, B̂)
Compute next iterate
  X_{k+1} = V_k U_k, where U_k = [u_1, ..., u_p]
Compute next search direction
  P_{k+1} = [H_k, P_{k−b+3}, ..., P_k] U_k(p+1 : bp, :)
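A minimal sketch (an illustration, not the authors' code) of the Ritz-Rayleigh extraction shared by Algorithms 2 and 3: given a basis V = [X, H, P-blocks] made of b blocks of width p, compute the p leftmost Ritz pairs of the projected pencil, form the next iterate, and build the next search direction from the trailing blocks, mirroring U_k(p+1 : bp, :) above.

import numpy as np
from scipy.linalg import eigh

def ritz_rayleigh_step(A, B, V, p):
    Ahat, Bhat = V.T @ A @ V, V.T @ B @ V   # projected pencil (A^, B^)
    w, U = eigh(Ahat, Bhat)                 # Ritz values/vectors, ascending, B^-orthonormal
    Up = U[:, :p]                           # p leftmost primitive Ritz vectors
    X_next = V @ Up                         # next iterate; columns are B-orthonormal
    P_next = V[:, p:] @ Up[p:, :]           # drop the X block from the recurrence
    return X_next, P_next, w[:p]

With b = 3 and V = [X_k, H_k, P_k] this is the extraction step of one LOBPCG iteration (Algorithm 2); with additional trailing P blocks it is the Sequential Subspace step (Algorithm 3).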

4 Model Minimization Strategies

In classical RTR, the model minimization can be solved using a Steihaug-Toint truncated CG (tCG) [ABG04c]. This version of CG checks the curvature of the model (i.e., the definiteness of the Hessian) for each search direction. Upon encountering a direction of negative curvature, it is followed to the edge of the trust region. This is one of the major differences between RTR and JD-like methods, as this curvature mechanism is explicitly attempting to minimize the quadratic model, instead of simply searching for its critical point (which is not a minimizer when the model is not positive definite).

The second truncation mechanism in tCG is the trust-region constraint. The underlying idea is that the quadratic model (3) is an accurate approximation of the cost function (2) only in the vicinity of η = 0; far away from the origin, the model becomes irrelevant. Because tCG builds a solution iteratively from a sequence of search directions, it is simple to test the length of the intermediate solutions and to note when (and along which direction) the trust region is breached. On this occasion, the solution is only allowed to move (in the current direction) to the edge of the trust region.

Finally, there is the matter of a stopping criterion. Because tCG uses conjugate search directions, the number of iterations should be limited to the dimension of the search space. This search space is the space orthogonal to the current iterate, with dimension d = p(n − p). In order to avoid a waste of computational effort without losing superlinear convergence, the following residual-based stopping criterion is used:

    ||r_i|| ≤ ||r_0|| min(κ, (||r_0||/σ)^θ),    (4)

where r_i is the residual of the model at inner step i, and κ and θ are terms targeting the rate of convergence (see [ABG04b]). This condition varies from the condition in [ABG04b] with the introduction of σ. For problems whose scaling is such that the method converges before the norm of the gradient becomes smaller than 1, κ < 1 < ||r_0||^θ, so that superlinear convergence is never attempted. The σ term seeks to adjust the scaling, with the suggestion that σ is chosen as σ = ||grad f(X_0)||.

A p = 1, unpreconditioned version of the truncated CG algorithm follows. We use ⟨·, ·⟩ to denote the standard inner product.

Algorithm 4 (Truncated CG)
Set η_0 = 0, r_0 = P_{Bx_k,Bx_k} A x_k, δ_0 = −r_0;
for j = 0, 1, 2, ... until a stopping criterion is satisfied
  Check curvature of current search direction
  if ⟨δ_j, H_{x_k}[δ_j]⟩ ≤ 0
    Compute τ such that η = η_j + τ δ_j satisfies ||η|| = Δ;
    return η;
  Set α_j = ⟨r_j, r_j⟩ / ⟨δ_j, H_{x_k}[δ_j]⟩;
  Generate next inner iterate
    Set η_{j+1} = η_j + α_j δ_j;
  Check trust-region
  if ||η_{j+1}|| ≥ Δ
    Compute τ ≥ 0 such that η = η_j + τ δ_j satisfies ||η|| = Δ;
    return η;
  Use CG recurrences to update residual and search direction
    Set r_{j+1} = r_j + α_j H_{x_k}[δ_j];
    Set β_{j+1} = ⟨r_{j+1}, r_{j+1}⟩ / ⟨r_j, r_j⟩;
    Set δ_{j+1} = −r_{j+1} + β_{j+1} δ_j;
end for.

Another parametrization regards the stopping criteria. A limit on the number of iterations may be imposed (LOBPCG returns the preconditioned residual, implicitly allowing one/zero inner iterations, depending on how iterations are numbered). The residual-based stopping criterion from classical RTR may be used (as described above), or the residual-based stopping criterion suggested by Notay (equations (27) and (28) from [Not02]) for use in JDCG may be employed. The question becomes: what is the trade-off between the amount of work spent on the model minimization and the performance of the algorithm?

5 Choice of the model

Having described the technique used to minimize the model, the remaining issue is the choice of the model. We have chosen to approximate the Rayleigh quotient via a quadratic function, agreeing through the first order. The issue becomes, what are the benefits of different choices for the model Hessian?
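A minimal sketch of Algorithm 4 (p = 1, unpreconditioned) follows, using the residual-based stopping test (4); grad is the projected gradient at x_k, hess applies H_{x_k}, and the values kappa = 0.1 and theta = 1 are illustrative assumptions, not values from the paper.

import numpy as np

def to_boundary(eta, d, delta):
    # positive tau with ||eta + tau*d|| = delta
    a, b, c = d @ d, 2 * (eta @ d), eta @ eta - delta**2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

def truncated_cg(grad, hess, delta, sigma, kappa=0.1, theta=1.0, max_iter=100):
    eta = np.zeros_like(grad)
    r = grad.copy()                     # model residual at eta = 0
    d = -r                              # first search direction
    r0 = np.linalg.norm(r)
    tol = r0 * min(kappa, (r0 / sigma) ** theta)   # stopping test (4)
    for _ in range(max_iter):
        Hd = hess(d)
        curv = d @ Hd
        if curv <= 0:                               # negative curvature: follow d to the boundary
            return eta + to_boundary(eta, d, delta) * d
        alpha = (r @ r) / curv
        eta_next = eta + alpha * d
        if np.linalg.norm(eta_next) >= delta:       # trust region breached
            return eta + to_boundary(eta, d, delta) * d
        r_next = r + alpha * Hd
        if np.linalg.norm(r_next) <= tol:           # inner convergence reached
            return eta_next
        d = -r_next + ((r_next @ r_next) / (r @ r)) * d
        eta, r = eta_next, r_next
    return eta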

As shown in [ABGS05], different choices for the quadratic model may be appropriate, depending on the circumstances. The two choices discussed here are an inexact Hessian from the Trace Minimization method [SW82, ST00] and the exact Hessian from the classical RTR described in the original presentation [ABG04b]. The Tracemin Hessian is

    H_X[η] = 2 (P_{BX,BX} A P_{BX,BX}) η,    (5)

where P_{BX,BX} = I − BX(X^T BBX)^{-1} X^T B is a projector onto the tangent plane at X. The Hessian from the classical RTR is the exact Hessian of the function:

    H_X[η] = Hess_X[η] = 2 P_{BX,BX} (Aη − Bη(X^T A X))    (6)

Using the exact Hessian yields a quadratic model that closely represents the Rayleigh quotient within a neighborhood of the current iterate. This is what gives the RTR superlinear convergence near a solution, as the superlinear term in the residual-based stopping criterion becomes the significant stopping criterion near a solution. In fact, without using a good approximation of the Hessian, it is difficult to achieve fast convergence. However, far from the solution, the quality of the second-order terms becomes less significant, so that an approximate Hessian may be used, such as the Hessian from Tracemin. Furthermore, this Hessian brings with it specific advantages. In choosing the Tracemin Hessian as in (5), it is not necessary to check for directions of negative curvature because the Hessian is (in exact arithmetic) positive definite. Also, any reduction in the quadratic model is known to produce a reduction in the generalized Rayleigh quotient, so that we may ignore the trust-region constraint [ABGS05]. The former requires that the matrix A is positive definite, which is always assumed when discussing the Tracemin method. These ideas are discussed in more detail in [ABGS05].
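A minimal sketch (an illustration of (5) and (6), not the authors' code) of the two model Hessians as operators on a tangent vector η; proj applies the projector P_{BX,BX} defined above.

import numpy as np

def proj(B, X, Z):
    # P_{BX,BX} Z = Z - BX (X^T B B X)^{-1} X^T B Z
    BX = B @ X
    return Z - BX @ np.linalg.solve(BX.T @ BX, BX.T @ Z)

def hess_tracemin(A, B, X, eta):
    # (5): H_X[eta] = 2 (P_{BX,BX} A P_{BX,BX}) eta; positive definite on the tangent space when A is SPD
    return 2.0 * proj(B, X, A @ proj(B, X, eta))

def hess_exact(A, B, X, eta):
    # (6): Hess_X[eta] = 2 P_{BX,BX} (A eta - B eta (X^T A X)); exact Hessian of the Rayleigh quotient
    return 2.0 * proj(B, X, A @ eta - B @ eta @ (X.T @ A @ X))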

6 Numerical Experiments

The choice of a subspace acceleration strategy, along with the options for performing the model minimization, create from SA-RTR a family of algorithms for finding the leftmost eigenvectors and eigenvalues of a symmetric matrix pencil. Some of these family members are tested here. Tests are run on the following systems:

- A dense matrix A, of order n = 100, and B = I, constructed so that the eigenvalues are linearly spaced, with no gap between the targeted and untargeted parts of the spectrum. The spectrum of this matrix is shown in Figure 1.

- A dense matrix A, of order n = 100, and B = I, constructed so that the eigenvalues are linearly spaced, except for a significant gap between the targeted and untargeted parts of the spectrum. The spectrum of this matrix is shown in Figure 1.

- A matrix A representing a Laplacian operator, generated via the MATLAB command A = delsq(numgrid('N',30)), of order n = 784. This test is run with no preconditioner, using an incomplete Cholesky generated via cholinc(A,'0'), and using an exact Cholesky generated via chol(A).

All three choices of A are positive definite, so that the eigenvalues of (A,B) are positive as well (this is necessary for a discussion of Trace Minimization).

6.1 Subspace size

The first set of experiments (Figure 2) illustrates the benefit of adding more search directions to the method. Here, RTR is run with SS subspace acceleration, with subspace rank 4p, where the model minimization performs 0 steps (returns the preconditioned gradient). This is denoted SS-RTR(4p,0). Here, we compute only a single eigenpair, so p = 1. In this case, adding a single vector to the acceleration subspace basis yields a benefit. Note that the number of vectors in the acceleration subspaces affects storage cost, as well as the cost of the Ritz-Rayleigh process (which solves a dense eigenvalue problem). However, it does not affect the number of multiplications against A.

In the second set (Figure 3), we again compare LOBPCG against SA-RTR(4p,0), this time on the gap-controlled matrices. Note that for the first test, the matrix with a gap, the performance difference between the two algorithms is negligible. For the matrix with no gap, however, the extra search direction yields a noticeable effect.

6.2 Number of inner iterations

Figure 4 shows LOBPCG vs. SS-RTR(3p), where the model minimization was allowed either 0, 2 or 4 steps to produce a tangent vector. Note that for the first case (0 steps), SS-RTR(3p,0) performs identically to LOBPCG.
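An illustrative sketch of constructing analogues of the three test systems listed at the start of this section (assumptions: a standard 5-point Laplacian built with Kronecker products stands in for MATLAB's delsq(numgrid('N',30)), a sparse incomplete LU stands in for the incomplete Cholesky preconditioner, and the gap locations and spectrum endpoints are made up for illustration).

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n = 100
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))            # random orthogonal basis, B = I

eigs_nogap = np.linspace(1.0, 100.0, n)                     # linearly spaced, no gap
A_nogap = Q @ np.diag(eigs_nogap) @ Q.T

eigs_gap = np.concatenate([np.linspace(1.0, 2.0, 5),        # targeted part
                           np.linspace(50.0, 100.0, n - 5)])  # gap, then the remaining spectrum
A_gap = Q @ np.diag(eigs_gap) @ Q.T

m = 28                                                       # 28^2 = 784 interior grid points
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
A_lap = sp.kron(sp.eye(m), T) + sp.kron(T, sp.eye(m))        # 2-D Laplacian, SPD

ilu = spla.spilu(A_lap.tocsc(), drop_tol=1e-3)               # stand-in preconditioner
precond = spla.LinearOperator(A_lap.shape, matvec=ilu.solve)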

Figure 1: Spectrum of controlled gap problems, without and with gap, respectively.

Figure 2: SA-RTR(4p,0) vs. LOBPCG, for the Laplacian problem, with no preconditioner, incomplete Cholesky, and exact Cholesky, respectively.

Figure 3: SA-RTR(4p,0) vs. LOBPCG, for the dense matrices, with a gap and without a gap in the eigenvalues, respectively.

As more steps are allowed, the performance of SS-RTR overtakes that of LOBPCG. However, this trend does not hold for the problem where a factorization of A was available for use in the model minimization preconditioner. The set in Figure 5 shows the effect of more iterations in the model minimization on the gap-controlled matrices. Similar to Figure 3, the effects of the algorithm parameters are noticeable only for the matrix with no gap (the harder problem).

6.3 Effect of subspace acceleration

In Figure 6, we compare the classical RTR with no subspace acceleration against RTR with 3-dimensional subspace acceleration via Expanding Subspaces (ES) and Sequential Subspaces (SS). The data used is the Laplacian problem, with three different preconditioning methods. Note that in all three cases, the subspace accelerated methods outperform the un-accelerated method. Note also that for such a low-dimensional acceleration subspace, there is no significant performance difference between the two acceleration schemes. Figure 7 shows the performance of this set of methods applied to the dense, gap-controlled problems. Similar to the Laplacian problem, there is not a large performance difference between classical RTR and the subspace accelerated RTR methods.

6.4 Residual-based stopping criteria

In Figure 8, we compare SS-RTR with two different stopping criteria: the residual-norm-based criterion proposed in the classical RTR against the residual-norm-based criterion proposed by Notay in [Not02]. LOBPCG is included as a reference. Again, the test data used here is the Laplacian, with three different preconditioning schemes. Note that in these experiments, the RTR stopping criterion performs at least as well as the stopping criterion suggested by Notay, performing significantly better for the exact Cholesky preconditioned test.

Figure 9 tests these methods on the gap-controlled problems. Again, the stopping criterion proposed by Notay serves the method well for the more difficult problem, where there is only a small gap between eigenvalues. However, for the easier problem, where a large gap separates the two parts of the spectrum, the Notay stopping criterion spends too much time on inner iterations.

Figure 4: SA-RTR(3p) vs. LOBPCG, for the Laplacian problem, with varying number of inner steps (0, 2, 4) allowed for the model minimization, for no preconditioner, incomplete Cholesky, and exact Cholesky, respectively.

Figure 5: SA-RTR(3p) vs. LOBPCG, for the dense problems, with varying number of inner steps (0, 2, 4) allowed for the model minimization, with a gap and without a gap, respectively.

Figure 6: RTR(nosa) vs. ES-RTR(3p) vs. SS-RTR(3p), for the Laplacian problem, for no preconditioner, incomplete Cholesky, and exact Cholesky, respectively.

Figure 7: RTR(nosa) vs. ES-RTR(3p) vs. SS-RTR(3p), for dense problems, with a gap and without a gap, respectively.

Figure 8: SS-RTR(3p,RTR-stop) vs. SS-RTR(3p,Notay-stop) vs. LOBPCG, for the Laplacian problem, for no preconditioner, incomplete Cholesky, and exact Cholesky, respectively.

Figure 9: SS-RTR(3p,RTR-stop) vs. SS-RTR(3p,Notay-stop) vs. LOBPCG, for dense problems, with a gap and without a gap, respectively.

6.5 Quality of model Hessian

We examine the performance of RTR against that of Tracemin, under different preconditioner scenarios. Also demonstrated here is the adaptive model method, an RTR-Tracemin hybrid (called the Adaptive Trust-Region method) which begins with Tracemin and switches to RTR [ABGS05]. Figure 10 tests these on the Laplacian matrix, targeting the leftmost p = 4 eigenvalues, with different preconditioners: no preconditioning, preconditioning using an incomplete Cholesky of A, and preconditioning using an exact Cholesky of A. Figure 11 tests these methods on the gap-controlled matrices, using either no preconditioning or preconditioning using an exact Cholesky factorization of A.

Figure 10: Classical RTR vs. Tracemin vs. ATR, for the Laplacian problem, p = 4, for no preconditioner, incomplete Cholesky, and exact Cholesky, respectively.

Figure 11: Classical RTR vs. Tracemin vs. ATR, for dense problems, with a gap (first row) and without a gap (second row), without a preconditioner (first column) and with a preconditioner from an exact factorization of A (second column).

References

[ABG04a] P.-A. Absil, C. G. Baker, and K. A. Gallivan, A truncated-CG style method for symmetric generalized eigenvalue problems, submitted, 2004.

[ABG04b] P.-A. Absil, C. G. Baker, and K. A. Gallivan, Trust-region methods on Riemannian manifolds, Tech. Report FSU-CSIT-04-13, School of Computational Science, Florida State University, July 2004.

[ABG04c] P.-A. Absil, C. G. Baker, and K. A. Gallivan, Trust-region methods on Riemannian manifolds with applications in numerical linear algebra, Proceedings of the 16th International Symposium on Mathematical Theory of Networks and Systems (MTNS 2004), Leuven, Belgium, 5-9 July 2004.

[ABG05] P.-A. Absil, C. G. Baker, and K. A. Gallivan, Trust-region methods on Riemannian manifolds, submitted, March 2005.

[ABGS05] P.-A. Absil, C. G. Baker, K. A. Gallivan, and A. Sameh, Adaptive model trust region methods for generalized eigenvalue problems, Lecture Notes in Computer Science, vol. 3514, Springer-Verlag, 2005.

[Hag01] W. W. Hager, Minimizing a quadratic over a sphere, SIAM J. Optim. 12 (2001), no. 1 (electronic).

[Kny01] A. V. Knyazev, Toward the optimal preconditioned eigensolver: locally optimal block preconditioned conjugate gradient method, SIAM J. Sci. Comput. 23 (2001), no. 2.

[Not02] Y. Notay, Combination of Jacobi-Davidson and conjugate gradients for the partial symmetric eigenproblem, Numer. Linear Algebra Appl. 9 (2002), no. 1.

[ST00] A. Sameh and Z. Tong, The trace minimization method for the symmetric generalized eigenvalue problem, J. Comput. Appl. Math. 123 (2000).

[SV96] G. L. G. Sleijpen and H. A. Van der Vorst, A Jacobi-Davidson iteration method for linear eigenvalue problems, SIAM J. Matrix Anal. Appl. 17 (1996), no. 2.

[SW82] A. H. Sameh and J. A. Wisniewski, A trace minimization algorithm for the generalized eigenvalue problem, SIAM J. Numer. Anal. 19 (1982), no. 6.
