CG VERSUS MINRES: AN EMPIRICAL COMPARISON
DAVID CHIN-LUNG FONG AND MICHAEL SAUNDERS

Abstract. For iterative solution of symmetric systems Ax = b, the conjugate gradient method (CG) is commonly used when A is positive definite, while the minimum residual method (MINRES) is typically reserved for indefinite systems. We investigate the sequence of approximate solutions x_k generated by each method and suggest that even if A is positive definite, MINRES may be preferable to CG if iterations are to be terminated early. In particular, we show for MINRES that the solution norms ||x_k|| are monotonically increasing when A is positive definite (as was already known for CG), and the solution errors ||x* − x_k|| are monotonically decreasing. We also show that the backward errors for the MINRES iterates x_k are monotonically decreasing.

Key words. conjugate gradient method, minimum residual method, iterative method, sparse matrix, linear equations, CG, CR, MINRES, Krylov subspace method, trust-region method

1. Introduction. The conjugate gradient method (CG) [9] and the minimum residual method (MINRES) [16] are both Krylov subspace methods for the iterative solution of symmetric linear equations Ax = b. CG is commonly used when the matrix A is positive definite, while MINRES is generally reserved for indefinite systems [24, p. 85]. We reexamine this wisdom from the point of view of early termination on positive-definite systems.

We assume that the system Ax = b is real with A symmetric positive definite (spd) and of dimension n x n. The Lanczos process [11] with starting vector b may be used to generate the n x k matrix V_k ≡ (v_1 v_2 ... v_k) and the (k+1) x k Hessenberg tridiagonal matrix T_k such that

    A V_k = V_{k+1} T_k   for k = 1, 2, ..., l−1   and   A V_l = V_l T_l

for some l ≤ n, where the columns of V_k form a theoretically orthonormal basis for the kth Krylov subspace K_k(A, b) ≡ span{b, Ab, A^2 b, ..., A^{k−1} b}, and T_l is l x l and tridiagonal. Approximate solutions within the kth Krylov subspace may be formed as x_k = V_k y_k for some k-vector y_k.
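The Lanczos relation above can be checked numerically. The following is a minimal dense sketch in Python (a toy illustration, not a production implementation, which would guard against loss of orthogonality):

```python
import numpy as np

def lanczos(A, b, k):
    """Lanczos process with starting vector b.
    Returns V (n x (k+1)) with orthonormal columns and the (k+1) x k
    tridiagonal T with A @ V[:, :k] == V @ T in exact arithmetic."""
    n = len(b)
    V = np.zeros((n, k + 1))
    T = np.zeros((k + 1, k))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(k):
        w = A @ V[:, j]
        if j > 0:
            w -= T[j, j - 1] * V[:, j - 1]   # subtract beta_j * v_{j-1}
        alpha = V[:, j] @ w
        w -= alpha * V[:, j]                 # subtract alpha_j * v_j
        beta = np.linalg.norm(w)
        T[j, j] = alpha                      # diagonal of T
        T[j + 1, j] = beta                   # subdiagonal of the (k+1) x k T
        if j + 1 < k:
            T[j, j + 1] = beta               # superdiagonal (symmetry)
        if beta == 0:                        # exact termination: index l reached
            break
        V[:, j + 1] = w / beta
    return V, T
```

On a small spd example one can verify both A V_k = V_{k+1} T_k and the orthonormality of the columns of V_{k+1}.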
As shown in [16], three iterative methods, CG, MINRES, and SYMMLQ, may be derived by choosing y_k appropriately at each iteration. CG is well defined if A is spd, while MINRES and SYMMLQ are stable for any symmetric nonsingular A. As noted by Choi [2], SYMMLQ can form an approximation x_{k+1} = V_{k+1} y_{k+1} in the (k+1)th Krylov subspace when CG and MINRES are forming their approximations x_k = V_k y_k in the kth subspace. It would be of future interest to compare all three methods on spd systems, but for the remainder of this paper we focus on CG and MINRES.

With different methods using the same information V_{k+1} and T_k to compute solution estimates x_k = V_k y_k within the same Krylov subspace (for each k), it is commonly thought that the number of iterations required will be similar for each method, and hence that CG should be preferable on spd systems because it requires somewhat fewer floating-point operations per iteration. This view is justified if an accurate solution is required (stopping tolerance τ close to machine precision ε). We show that with looser stopping tolerances, MINRES is sure to terminate sooner than CG when the

Report SOL. Submitted October 19, 2011 to SQU Journal for Science. ICME, Stanford University, CA, USA (clfong@stanford.edu). Partially supported by a Stanford Graduate Fellowship. Systems Optimization Laboratory, Department of Management Science and Engineering, Stanford University, CA, USA (saunders@stanford.edu). Partially supported by Office of Naval Research grant N and by the U.S. Army Research Laboratory, through the Army High Performance Computing Research Center, Cooperative Agreement W911NF
stopping rule is based on the backward error for x_k, and by numerical examples we illustrate that the difference in iteration numbers can be substantial.

1.1. Notation. We study the application of CG and MINRES to real symmetric positive-definite (spd) systems Ax = b. The unique solution is denoted by x*. The initial approximate solution is x_0 = 0, and r_k ≡ b − A x_k is the residual vector for an approximation x_k within the kth Krylov subspace. For a vector v and matrix A, ||v|| and ||A|| denote the 2-norm and the Frobenius norm respectively, and A ≻ 0 indicates that A is spd.

2. Minimization properties of Krylov subspace methods. With exact arithmetic, the Lanczos process terminates with k = l for some l ≤ n. To ensure that the approximations x_k = V_k y_k improve by some measure as k increases toward l, the Krylov solvers minimize some convex function within the expanding Krylov subspaces [8].

2.1. CG. When A is spd, the quadratic form φ(x) ≡ (1/2) x^T A x − b^T x is bounded below, and its unique minimizer solves Ax = b. A characterization of the CG iterations is that they minimize the quadratic form within each Krylov subspace [8], [15, §2.4], [25]:

    x_k^C = V_k y_k^C,   where   y_k^C = arg min_y φ(V_k y).

With b = A x* and 2φ(x_k) = x_k^T A x_k − 2 x_k^T A x*, this is equivalent to minimizing the function ||x* − x_k||_A ≡ ((x* − x_k)^T A (x* − x_k))^{1/2}, known as the energy norm of the error, within each Krylov subspace. For some applications, this is a desirable property [19, 22, 1, 15, 25].

2.2. MINRES. For nonsingular (and possibly indefinite) systems, the residual norm was used in [16] to characterize the MINRES iterations:

    x_k^M = V_k y_k^M,   where   y_k^M = arg min_y ||b − A V_k y||.   (2.1)

Thus, MINRES minimizes ||r_k|| within the kth Krylov subspace. This was also an aim of Stiefel's Conjugate Residual method (CR) [21] for spd systems (and of Luenberger's extensions of CR to indefinite systems [13, 14]). Thus, CR and MINRES must generate the same iterates on spd systems.
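The two characterizations can be verified numerically. The sketch below is a toy dense illustration (our own construction, with the Krylov basis orthonormalized by QR rather than by the Lanczos process): the CG iterate minimizes φ(V_k y) over the subspace, and the MINRES iterate solves the least-squares problem (2.1).

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 4
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # spd test matrix
b = rng.standard_normal(n)

# Krylov basis [b, Ab, ..., A^{k-1} b], orthonormalized by QR
K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(k)])
Vk, _ = np.linalg.qr(K)

# CG iterate: minimize phi(V_k y) = 1/2 y^T (V^T A V) y - (V^T b)^T y,
# whose minimizer solves the projected system (V^T A V) y = V^T b
yC = np.linalg.solve(Vk.T @ A @ Vk, Vk.T @ b)
xC = Vk @ yC

# MINRES iterate: minimize ||b - A V_k y||, a small least-squares problem
yM, *_ = np.linalg.lstsq(A @ Vk, b, rcond=None)
xM = Vk @ yM
```

By construction, x_k^M has the smaller residual norm and x_k^C the smaller energy-norm error, which is exactly what the two subproblems optimize.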
We use this connection to prove that ||x_k|| increases monotonically when MINRES is applied to an spd system.

2.3. CG and CR. The two methods for solving spd systems Ax = b are summarized in Table 2.1. The first two columns are pseudocodes for CG and CR with iteration number k omitted for clarity; they match our Matlab implementations. Note that q = Ap in both methods, but it is not computed as such in CR. Termination occurs when r = 0 (ρ = β = 0).

To prove our main result we need to introduce iteration indices; see column 3 of Table 2.1. Termination occurs when r_k = 0 for some index k = l ≤ n (ρ_l = β_l = 0, r_l = s_l = p_l = q_l = 0). Note: This l is the same as the l at which the Lanczos process theoretically terminates for the given A and b.

Theorem 2.1. The following properties hold for Algorithm CR:
(a) q_i^T q_j = 0   (i ≠ j)
(b) r_i^T q_j = 0   (i ≥ j + 1)
Table 2.1: Pseudocode for algorithms CG and CR.

CG:
    Initialize x = 0, r = b, ρ = r^T r, p = r
    Repeat
        q = Ap
        α = ρ / p^T q
        x ← x + αp
        r ← r − αq
        ρ̄ = ρ,  ρ = r^T r
        β = ρ / ρ̄
        p ← r + βp

CR:
    Initialize x = 0, r = b, s = Ar, ρ = r^T s, p = r, q = s
    Repeat
        (q = Ap)
        α = ρ / ||q||^2
        x ← x + αp
        r ← r − αq
        s = Ar
        ρ̄ = ρ,  ρ = r^T s
        β = ρ / ρ̄
        p ← r + βp
        q ← s + βq

CR (with iteration indices):
    Initialize x_0 = 0, r_0 = b, s_0 = A r_0, ρ_0 = r_0^T s_0, p_0 = r_0, q_0 = s_0
    For k = 1, 2, ...
        (q_{k−1} = A p_{k−1})
        α_k = ρ_{k−1} / ||q_{k−1}||^2
        x_k = x_{k−1} + α_k p_{k−1}
        r_k = r_{k−1} − α_k q_{k−1}
        s_k = A r_k
        ρ_k = r_k^T s_k
        β_k = ρ_k / ρ_{k−1}
        p_k = r_k + β_k p_{k−1}
        q_k = s_k + β_k q_{k−1}

Proof. Given in [14, Theorem 1].

Theorem 2.2. The following properties hold for Algorithm CR:
(a) α_i > 0
(b) β_i ≥ 0
(c) p_i^T q_j ≥ 0
(d) p_i^T p_j ≥ 0
(e) x_i^T p_j ≥ 0
(f) r_i^T p_j ≥ 0

Proof. (a) Here we use the fact that A is spd. The inequalities are strict until i = l (and r_l = 0):

    ρ_i = r_i^T s_i = r_i^T A r_i > 0   (A ≻ 0),   (2.2)
    α_i = ρ_{i−1} / ||q_{i−1}||^2 > 0.

(b) And again:

    β_i = ρ_i / ρ_{i−1} > 0   (by (2.2)).

(c) Case I: i = j.

    p_i^T q_i = p_i^T A p_i > 0   (A ≻ 0).

Case II: i − j = k > 0.

    p_i^T q_j = p_i^T q_{i−k} = r_i^T q_{i−k} + β_i p_{i−1}^T q_{i−k} = β_i p_{i−1}^T q_{i−k} ≥ 0   (by Thm 2.1(b)),

where β_i ≥ 0 by (b) and p_{i−1}^T q_{i−k} ≥ 0 by induction, as (i−1) − (i−k) = k − 1 < k.
Case III: j − i = k > 0.

    p_i^T q_j = p_i^T q_{i+k} = p_i^T A p_{i+k} = p_i^T A (r_{i+k} + β_{i+k} p_{i+k−1})
              = q_i^T r_{i+k} + β_{i+k} p_i^T q_{i+k−1} = β_{i+k} p_i^T q_{i+k−1} ≥ 0   (by Thm 2.1(b)),

where β_{i+k} ≥ 0 by (b) and p_i^T q_{i+k−1} ≥ 0 by induction, as (i+k−1) − i = k − 1 < k.

(d) At termination, define P ≡ span{p_0, p_1, ..., p_{l−1}} and Q ≡ span{q_0, ..., q_{l−1}}. By construction, P = span{b, Ab, ..., A^{l−1} b} and Q = span{Ab, ..., A^l b} (since q_i = A p_i). Again by construction, x_l ∈ P, and since r_l = 0 we have b = A x_l, so b ∈ Q. We see that P ⊆ Q. By Theorem 2.1(a), {q_i / ||q_i||} for i = 0, ..., l−1 forms an orthonormal basis for Q. If we project p_i ∈ P ⊆ Q onto this basis, we have

    p_i = Σ_{k=0}^{l−1} (p_i^T q_k / q_k^T q_k) q_k,

where all coordinates are non-negative from (c). Similarly for any other p_j, j < l. Therefore p_i^T p_j ≥ 0 for any i, j < l.

(e) By construction,

    x_i = x_{i−1} + α_i p_{i−1} = ... = Σ_{k=1}^{i} α_k p_{k−1}   (x_0 = 0).

Therefore x_i^T p_j ≥ 0 by (d) and (a).

(f) Note that any r_i can be expressed as a sum of the q_j:

    r_i = r_{i+1} + α_{i+1} q_i = ... = r_l + α_l q_{l−1} + ... + α_{i+1} q_i = α_l q_{l−1} + ... + α_{i+1} q_i.

Thus we have

    r_i^T p_j = (α_l q_{l−1} + ... + α_{i+1} q_i)^T p_j ≥ 0,

where the inequality follows from (a) and (c).

We are now able to prove our main theorem about the monotonic increase of ||x_k|| for CR and MINRES. A similar result was proved for CG by Steihaug [19]. We know that ||x_k|| is also monotonic for LSQR [17].

Theorem 2.3. For CR (and hence MINRES) on an spd system Ax = b, ||x_k|| increases monotonically.

Proof.

    ||x_i||^2 − ||x_{i−1}||^2 = 2 α_i x_{i−1}^T p_{i−1} + α_i^2 p_{i−1}^T p_{i−1} ≥ 0,

where the last inequality follows from Theorem 2.2 (a), (d) and (e). Therefore ||x_i|| ≥ ||x_{i−1}||.

Theorem 2.4. For CR (and hence MINRES) on an spd system Ax = b, the error ||x* − x_k|| decreases monotonically.
Proof. From the update rule for x_k, we can express the final solution x_l = x* as

    x_l = x_{l−1} + α_l p_{l−1} = ... = x_k + α_{k+1} p_k + ... + α_l p_{l−1}
        = x_{k−1} + α_k p_{k−1} + α_{k+1} p_k + ... + α_l p_{l−1}.

Using the last two equalities above, we can write

    ||x_l − x_{k−1}||^2 − ||x_l − x_k||^2 = (x_l − x_{k−1})^T (x_l − x_{k−1}) − (x_l − x_k)^T (x_l − x_k)
        = 2 α_k p_{k−1}^T (α_{k+1} p_k + ... + α_l p_{l−1}) + α_k^2 p_{k−1}^T p_{k−1} ≥ 0,

where the last inequality follows from Theorem 2.2 (a), (d).

3. Backward error analysis. For many physical problems requiring numerical solution, we are given inexact or uncertain input data (in this case A and/or b). It is not justifiable to seek a solution beyond the accuracy of the data [6]. Instead, it is more reasonable to stop an iterative solver once we know that the current approximate solution solves a nearby problem. The measure of "nearby" should match the error in the input data. The design of such stopping rules is an important application of backward error analysis.

For a consistent linear system Ax = b, we think of x_k coming from the kth iteration of one of the iterative solvers. Following Titley-Péloquin [23] we say that x_k is an acceptable solution if and only if there exist perturbations E and f satisfying

    (A + E) x_k = b + f,   ||E|| ≤ α ||A||,   ||f|| ≤ β ||b||   (3.1)

for some tolerances α, β that reflect the (preferably known) accuracy of the data. We are naturally interested in minimizing the size of E and f. If we define the optimization problem

    min_{ξ, E, f} ξ   s.t.   (A + E) x_k = b + f,   ||E|| ≤ α ξ ||A||,   ||f|| ≤ β ξ ||b||

to have optimal solution ξ_k, E_k, f_k (all functions of x_k, α, and β), we see that x_k is an acceptable solution if and only if ξ_k ≤ 1. We call ξ_k the normwise relative backward error (NRBE) for x_k.

With r_k = b − A x_k, the optimal solution ξ_k, E_k, f_k is shown in [23] to be

    φ_k = β ||b|| / (α ||A|| ||x_k|| + β ||b||),   E_k = ((1 − φ_k) / ||x_k||^2) r_k x_k^T,   (3.2)
    ξ_k = ||r_k|| / (α ||A|| ||x_k|| + β ||b||),   f_k = −φ_k r_k.   (3.3)

(See [10, p. 12] for the case β = 0 and [10, §7.1 and p. 336] for the case α = β.)

3.1. Stopping rule.
For general tolerances α and β, the condition ξ_k ≤ 1 for x_k to be an acceptable solution becomes

    ||r_k|| ≤ α ||A|| ||x_k|| + β ||b||,   (3.4)

the stopping rule used in LSQR for consistent systems [17, p. 54, rule S1].
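In code, rule (3.4) reduces to a one-line test on the NRBE. A minimal sketch (the function names are our own):

```python
import numpy as np

def nrbe(A, b, x, alpha, beta):
    """Normwise relative backward error xi_k of (3.3):
    xi_k = ||r|| / (alpha ||A|| ||x|| + beta ||b||), with r = b - A x.
    ||A|| is the Frobenius norm, matching the paper's notation."""
    r = b - A @ x
    normA = np.linalg.norm(A, 'fro')
    return np.linalg.norm(r) / (alpha * normA * np.linalg.norm(x)
                                + beta * np.linalg.norm(b))

def acceptable(A, b, x, alpha, beta):
    """Stopping rule (3.4): x solves a problem within the data accuracy."""
    return nrbe(A, b, x, alpha, beta) <= 1.0
```

An exact solution has zero residual, hence NRBE zero; a poor iterate with tight tolerances fails the test.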
3.2. Monotonic backward errors. Of interest is the size of the perturbations to A and b for which x_k is an exact solution of Ax = b. From (3.2)–(3.3), the perturbations have the following norms:

    ||E_k|| = (1 − φ_k) ||r_k|| / ||x_k|| = α ||A|| ||r_k|| / (α ||A|| ||x_k|| + β ||b||),   (3.5)
    ||f_k|| = φ_k ||r_k|| = β ||b|| ||r_k|| / (α ||A|| ||x_k|| + β ||b||).   (3.6)

Since ||x_k|| is monotonically increasing for CG and MINRES, we see from (3.2) that φ_k is monotonically decreasing for both solvers. Since ||r_k|| is monotonically decreasing for MINRES (but not for CG), we have the following result.

Theorem 3.1. Suppose α > 0 and β > 0 in (3.1). For CR and MINRES (but not CG), the relative backward errors ||E_k||/||A|| and ||f_k||/||b|| decrease monotonically.

Proof. This follows from (3.5)–(3.6) with ||x_k|| increasing for both solvers and ||r_k|| decreasing for CR and MINRES but not for CG.

4. Numerical results. Here we compare the convergence of CG and MINRES on various spd systems Ax = b and some associated indefinite systems (A − δI)x = b. The test examples are drawn from the University of Florida Sparse Matrix Collection (Davis [5]). We experimented with all 26 cases for which A is real spd and b is supplied. Since A is spd, we apply diagonal preconditioning by redefining A and b as follows:

    d = diag(A),   D = diag(1./sqrt(d)),   A ← DAD,   b ← Db,   b ← b/||b||.

Thus in the figures below we have diag(A) = I and ||b|| = 1. The stopping rule used for CG and MINRES was (3.4) with α = 0 and β = 10^{−8} (that is, ||r_k|| ≤ 10^{−8} ||b|| = 10^{−8}), but with a maximum of n iterations.

4.1. Positive-definite systems. In defining backward errors, we assume for simplicity that α > 0 and β = 0 in (3.1)–(3.3), even though this doesn't match the choice of α and β in the stopping rule (3.4). This gives φ_k = 0 and ||E_k|| = ||r_k|| / ||x_k|| in (3.5). Thus, as in Theorem 3.1, we expect ||E_k|| to decrease monotonically for CR and MINRES but not for CG.

Figure 4.1 shows the backward errors for four representative examples. We see that the ||E_k|| values converge smoothly for MINRES while they fluctuate for CG.
For every k, the MINRES backward error is smaller than that of CG, so that MINRES can always be stopped earlier than CG. These figures illustrate our belief that MINRES should be considered for spd systems when high accuracy is not required.

Figure 4.2 shows ||r_k|| and ||x_k|| for CG and MINRES on two typical spd examples. We see that ||x_k|| is monotonically increasing for both solvers, and the ||x_k|| values rise fairly rapidly to their limiting value ||x*||, with a moderate delay for MINRES.

Figure 4.3 shows ||r_k|| and ||x_k|| for CG and MINRES on two spd examples in which the residual decrease and the solution norm increase are somewhat slower than typical. The rise of ||x_k|| for MINRES is rather more delayed. In the second case, if the stopping tolerance β were looser than 10^{−8}, the final MINRES ||x_k|| would be less than half the exact value ||x*||. It will be of future interest to evaluate this effect within the context of trust-region methods for optimization.
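The experiment just described can be sketched with SciPy's cg and minres solvers. A toy tridiagonal spd matrix stands in for the UF collection cases, and the helper names are our own; the paper's runs use the authors' Matlab implementations:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, minres

def backward_error_history(A, b, solver, maxiter):
    """Record ||r_k|| / ||x_k|| at each iterate, i.e. ||E_k|| in (3.5)
    for alpha > 0, beta = 0 (up to the constant factor ||A||)."""
    hist = []
    def cb(xk):
        xk = np.asarray(xk)
        nx = np.linalg.norm(xk)
        if nx > 0:
            hist.append(np.linalg.norm(b - A @ xk) / nx)
    solver(A, b, callback=cb, maxiter=maxiter)
    return hist

rng = np.random.default_rng(1)
n = 300
A = diags([-np.ones(n - 1), 2.1 * np.ones(n), -np.ones(n - 1)],
          [-1, 0, 1], format='csr')          # spd tridiagonal test matrix
b = rng.standard_normal(n)

# diagonal preconditioning as in the paper: A <- DAD, b <- Db, b <- b/||b||
D = diags(1.0 / np.sqrt(A.diagonal()))
A = (D @ A @ D).tocsr()
b = D @ b
b = b / np.linalg.norm(b)

err_cg = backward_error_history(A, b, cg, 60)
err_mr = backward_error_history(A, b, minres, 60)
```

Plotting the two histories reproduces the qualitative picture of Figure 4.1: the MINRES curve decreases monotonically, while the CG curve oscillates.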
[Figure 4.1 appears here: four log-scale plots of the backward error against iteration number for CG and MINRES. Panels: Simon/raefsky4 (n = 19779), Cannizzo/sts4098 (n = 4098), Schenk_AFE/af_shell8, BenElechi/BenElechi1 (n = 245874).]

Fig. 4.1. Comparison of backward errors for MINRES and CG solving four spd systems Ax = b. The values of log10(||r_k|| / ||x_k||) are plotted against iteration number k. These values define log10(||E_k||) when the stopping tolerances in (3.4) are α > 0 and β = 0. Upper left: The MINRES backward error converges monotonically while the CG backward error oscillates toward convergence. Upper right: MINRES and CG both converge reasonably well, with MINRES staying ahead by 2–5 iterations. Lower left: CG converges at the same speed as MINRES until near convergence, but then slows down and takes twice as many iterations as MINRES to converge. Lower right: MINRES stays ahead of CG by two orders of magnitude during most iterations.
[Figure 4.2 appears here: residual-norm and solution-norm plots for Simon/olafu (n = 16146) and Cannizzo/sts4098 (n = 4098).]

Fig. 4.2. Comparison of residual and solution norms for CG and MINRES solving two spd systems Ax = b with n = 16146 and 4098. These are typical examples. Top: The values of log10 ||r_k|| are plotted against iteration number k. Bottom: The values of ||x_k|| are plotted against k. The solution norms grow somewhat faster for CG than for MINRES. Both reach the limiting value ||x*|| significantly before x_k is close to x*.
[Figure 4.3 appears here: residual-norm and solution-norm plots for Schmid/thermal1 (n = 82654) and BenElechi/BenElechi1 (n = 245874).]

Fig. 4.3. Comparison of residual and solution norms for CG and MINRES solving two spd systems Ax = b with n = 82654 and 245874. Sometimes the solution norms take longer to reach the limiting value ||x*||. Top: The values of log10 ||r_k|| are plotted against iteration number k. Bottom: The values of ||x_k|| are plotted against k. Again the solution norms grow faster for CG.
[Figure 4.4 appears here: plots of ||x_k|| and ||r_k|| / ||x_k|| against iteration number.]

Fig. 4.4. For MINRES on the indefinite problem (4.1), ||x_k|| and the backward error ||r_k|| / ||x_k|| are both slightly non-monotonic.

4.2. Indefinite systems. A key part of Steihaug's trust-region method for large-scale unconstrained optimization [19] (see also [4]) is his proof that when CG is applied to a symmetric (possibly indefinite) system Ax = b, the solution norms ||x_1||, ..., ||x_k|| are strictly increasing as long as p_j^T A p_j > 0 for all iterations 1 ≤ j ≤ k. (We are using the notation in Table 2.1.) From our proof of Theorem 2.2, we see that the same property holds for CR and MINRES as long as both p_j^T A p_j > 0 and r_j^T A r_j > 0 for all iterations 1 ≤ j ≤ k.

In case future research finds that MINRES is a useful solver in the trust-region context, it is of interest now to offer some empirical results about the behavior of ||x_k|| when MINRES is applied to indefinite systems. First, on a small nonsingular indefinite system (4.1), MINRES gives non-monotonic solution norms, as shown in the left plot of Figure 4.4. The decrease in ||x_k|| implies that the backward errors ||r_k|| / ||x_k|| may not be monotonic, as illustrated in the right plot.

More generally, we can gain an impression of the behavior of ||x_k|| by recalling from Choi et al. [3] the connection between MINRES and MINRES-QLP. Both methods compute the iterates x_k^M = V_k y_k^M in (2.1) from the subproblems

    y_k^M = arg min_{y ∈ R^k} ||T_k y − β_1 e_1||,   and possibly   T_l y_l^M = β_1 e_1.

When A is nonsingular or Ax = b is consistent (which we now assume), y_k^M is uniquely defined for each k ≤ l and the methods compute the same iterates x_k^M (but by different numerical methods). In fact they both compute the expanding QR factorizations

    Q_k ( T_k   β_1 e_1 ) = ( R_k   t_k
                              0     φ̄_k )

(with R_k upper tridiagonal), and MINRES-QLP also computes the orthogonal factorizations R_k P_k = L_k (with L_k lower tridiagonal), from which the kth solution estimate is defined by W_k = V_k P_k, L_k u_k = t_k, and x_k^M = W_k u_k. As shown in [3, §5.3], the
[Figure 4.5 appears here: residual-norm and solution-norm plots for Simon/olafu (n = 16146) and Cannizzo/sts4098 (n = 4098).]

Fig. 4.5. Residual norms and solution norms when MINRES is applied to two indefinite systems (A − δI)x = b, where A is each of the spd matrices used in Figure 4.2 (n = 16146 and 4098) and δ = 0.5 is large enough to make the systems indefinite. Top: The values of log10 ||r_k|| are plotted against iteration number k for the first n iterations. Bottom left: The values of ||x_k|| are plotted against k. During the n = 16146 iterations, ||x_k|| increased 83% of the time and the backward errors ||r_k|| / ||x_k|| (not shown here) decreased 96% of the time. Bottom right: During the n = 4098 iterations, ||x_k|| increased 90% of the time and the backward errors ||r_k|| / ||x_k|| (not shown here) decreased 98% of the time.

construction of these quantities is such that the first k−3 columns of W_k are the same as in W_{k−1}, and the first k−3 elements of u_k are the same as in u_{k−1}. Since W_k has orthonormal columns, ||x_k^M|| = ||u_k||, where the first k−2 elements of u_k are unaltered by later iterations. As shown in [3, §6.5], this means that certain quantities can be cheaply updated to give norm estimates of the form

    χ^2 ← χ^2 + μ̂_{k−2}^2,   ||x_k^M||^2 = χ^2 + μ̃_{k−1}^2 + μ̄_k^2,

where it is clear that χ^2 increases monotonically. Although the last two terms are of unpredictable size, ||x_k^M||^2 tends to be dominated by the monotonic term χ^2 and we can expect that ||x_k^M|| will be approximately monotonic as k increases from 1 to l.

Experimentally we find that for most iterations on an indefinite problem, ||x_k|| does increase. To obtain indefinite examples that were sensibly scaled, we used the four spd (A, b) cases in Figures 4.2–4.3, applied diagonal scaling as before, and solved (A − δI)x = b with δ = 0.5, where A and b are now the scaled data (so that
[Figure 4.6 appears here: residual-norm and solution-norm plots for Schmid/thermal1 (n = 82654) and BenElechi/BenElechi1 (n = 245874).]

Fig. 4.6. Residual norms and solution norms when MINRES is applied to two indefinite systems (A − δI)x = b, where A is each of the spd matrices used in Figure 4.3 (n = 82654 and 245874) and δ = 0.5 is large enough to make the systems indefinite. Top: The values of log10 ||r_k|| are plotted against iteration number k for the first n iterations. Bottom left: The values of ||x_k|| are plotted against k. There is a mild but clear decrease in ||x_k|| over an interval of about 1000 iterations. During the n = 82654 iterations, ||x_k|| increased 83% of the time and the backward errors ||r_k|| / ||x_k|| (not shown here) decreased 91% of the time. Bottom right: The solution norms and backward errors are essentially monotonic. During the n = 245874 iterations, ||x_k|| increased 88% of the time and the backward errors ||r_k|| / ||x_k|| (not shown here) decreased 95% of the time.

diag(A) = I). The number of iterations increased significantly but was limited to n.

Figure 4.5 shows log10 ||r_k|| and ||x_k|| for the first two cases (where A is each of the spd matrices in Figure 4.2). The values of ||x_k|| are essentially monotonic. The backward errors ||r_k|| / ||x_k|| (not shown) were even closer to being monotonic.

Figure 4.6 shows ||x_k|| and log10 ||r_k|| for the second two cases (where A is each of the spd matrices in Figure 4.3). The left example reveals a definite period of decrease in ||x_k||. Nevertheless, during the n = 82654 iterations, ||x_k|| increased 83% of the time and the backward errors ||r_k|| / ||x_k|| decreased 91% of the time. The right example is more like those in Figure 4.5; any nonmonotonicity was very slight.
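The shifted indefinite experiment can be reproduced in miniature. Below, a toy tridiagonal matrix with unit diagonal stands in for the diagonally scaled UF cases, and the variable names are our own:

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import minres

rng = np.random.default_rng(2)
n = 300
# spd matrix with unit diagonal (already "diagonally scaled")
A = diags([-0.4 * np.ones(n - 1), np.ones(n), -0.4 * np.ones(n - 1)],
          [-1, 0, 1], format='csr')
delta = 0.5
B = (A - delta * identity(n)).tocsr()        # shifted matrix, now indefinite
b = rng.standard_normal(n)
b = b / np.linalg.norm(b)

# track ||x_k|| over the MINRES iterations on the indefinite system
norms = []
minres(B, b, maxiter=n,
       callback=lambda xk: norms.append(np.linalg.norm(xk)))

steps = max(len(norms) - 1, 1)
incr = sum(norms[i + 1] >= norms[i] for i in range(steps))
frac_increasing = incr / steps               # fraction of iterations with growth
```

Counting the fraction of iterations on which ||x_k|| grows mirrors the percentages reported in Figures 4.5–4.6.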
5. Conclusions. For full-rank least-squares problems min ||Ax − b||, the solvers LSQR [17, 18] and LSMR [7, 12] are equivalent to CG and MINRES on the (spd) normal equation A^T A x = A^T b. Comparisons in [7] already indicated that LSMR can often stop much sooner than LSQR when Stewart's backward error norm for least-squares problems (||A^T r_k|| / ||r_k|| [20]) is used in both cases.

Our theoretical and experimental results here provide analogous evidence that MINRES can often stop much sooner than CG on spd systems when the stopping rule is based on the backward error norms ||r_k|| / ||x_k|| (or the more general norms in (3.5)–(3.6)).

By definition, MINRES minimizes ||r_k|| in the kth Krylov subspace, so that ||r_k|| decreases monotonically. Theorem 2.3 shows that MINRES shares a known property of CG: that ||x_k|| increases monotonically when A is spd. This implies that ||x_k|| is monotonic for LSMR (as conjectured in [7]), and suggests that MINRES may be a useful alternative to CG in the context of trust-region methods for optimization. Finally, while the energy norms of the CG errors (||x* − x_k||_A) are known to be monotonic, Theorem 2.4 shows that for MINRES on spd systems the error norms ||x* − x_k|| are monotonic.

Acknowledgements. We kindly thank Professor Mehiddin Al-Baali and other colleagues for organizing the Second International Conference on Numerical Analysis and Optimization at Sultan Qaboos University (January 3–6, 2011, Muscat, Sultanate of Oman). Their wish to publish some of the conference papers in a special issue of SQU Journal for Science gave added motivation for this research. We also thank Dr Sou-Cheng Choi for many helpful discussions of the iterative solvers, and Michael Friedlander for a final welcome tip.

REFERENCES

[1] M. Arioli, A stopping criterion for the conjugate gradient algorithm in a finite element method framework, Numer. Math., 97 (2004).
[2] S.-C. Choi, Iterative Methods for Singular Linear Equations and Least-Squares Problems, PhD thesis, Stanford University, Stanford, CA, December 2006.
[3] S.-C. Choi, C. C.
Paige, and M. A. Saunders, MINRES-QLP: A Krylov subspace method for indefinite or singular symmetric systems, SIAM J. Sci. Comput., 33 (2011).
[4] A. R. Conn, N. I. M. Gould, and Ph. L. Toint, Trust-Region Methods, vol. 1, SIAM, Philadelphia, 2000.
[5] T. A. Davis, University of Florida Sparse Matrix Collection. research/sparse/matrices.
[6] J. J. Dongarra, J. R. Bunch, C. B. Moler, and G. W. Stewart, LINPACK Users' Guide, SIAM, Philadelphia, 1979.
[7] D. C.-L. Fong and M. A. Saunders, LSMR: An iterative algorithm for sparse least-squares problems, SIAM J. Sci. Comput., to appear (2011).
[8] R. W. Freund, G. H. Golub, and N. M. Nachtigal, Iterative solution of linear systems, Acta Numerica, 1 (1992).
[9] M. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Nat. Bur. Standards, 49 (1952).
[10] N. J. Higham, Accuracy and Stability of Numerical Algorithms, SIAM, Philadelphia, second ed., 2002.
[11] C. Lanczos, An iteration method for the solution of the eigenvalue problem of linear differential and integral operators, J. Res. Nat. Bur. Standards, 45 (1950).
[12] LSMR software for linear systems and least squares. software.html.
[13] D. G. Luenberger, Hyperbolic pairs in the method of conjugate gradients, SIAM J. Appl. Math., 17 (1969).
[14] D. G. Luenberger, The conjugate residual method for constrained minimization problems, SIAM J. Numer. Anal., 7 (1970).
[15] G. A. Meurant, The Lanczos and Conjugate Gradient Algorithms: From Theory to Finite
Precision Computations, vol. 19 of Software, Environments, and Tools, SIAM, Philadelphia, 2006.
[16] C. C. Paige and M. A. Saunders, Solution of sparse indefinite systems of linear equations, SIAM J. Numer. Anal., 12 (1975).
[17] C. C. Paige and M. A. Saunders, LSQR: An algorithm for sparse linear equations and sparse least squares, ACM Trans. Math. Softw., 8 (1982).
[18] C. C. Paige and M. A. Saunders, Algorithm 583; LSQR: Sparse linear equations and least-squares problems, ACM Trans. Math. Softw., 8 (1982).
[19] T. Steihaug, The conjugate gradient method and trust regions in large scale optimization, SIAM J. Numer. Anal., 20 (1983).
[20] G. W. Stewart, Research, development and LINPACK, in Mathematical Software III, J. R. Rice, ed., Academic Press, New York, 1977.
[21] E. Stiefel, Relaxationsmethoden bester Strategie zur Lösung linearer Gleichungssysteme, Comm. Math. Helv., 29 (1955).
[22] Y. Sun, The Filter Algorithm for Solving Large-Scale Eigenproblems from Accelerator Simulations, PhD thesis, Stanford University, Stanford, CA, March 2003.
[23] D. Titley-Péloquin, Backward Perturbation Analysis of Least Squares Problems, PhD thesis, School of Computer Science, McGill University, 2010.
[24] H. A. van der Vorst, Iterative Krylov Methods for Large Linear Systems, Cambridge University Press, Cambridge, first ed., 2003.
[25] D. S. Watkins, Fundamentals of Matrix Computations, Pure and Applied Mathematics, Wiley, Hoboken, NJ, third ed., 2010.
More informationNearest Correlation Matrix
Nearest Correlation Matrix The NAG Library has a range of functionality in the area of computing the nearest correlation matrix. In this article we take a look at nearest correlation matrix problems, giving
More information4.8 Arnoldi Iteration, Krylov Subspaces and GMRES
48 Arnoldi Iteration, Krylov Subspaces and GMRES We start with the problem of using a similarity transformation to convert an n n matrix A to upper Hessenberg form H, ie, A = QHQ, (30) with an appropriate
More informationA general Krylov method for solving symmetric systems of linear equations
A general Krylov method for solving symmetric systems of linear equations Anders FORSGREN Tove ODLAND Technical Report TRITA-MAT-214-OS-1 Department of Mathematics KTH Royal Institute of Technology March
More informationOn the accuracy of saddle point solvers
On the accuracy of saddle point solvers Miro Rozložník joint results with Valeria Simoncini and Pavel Jiránek Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic Seminar at
More information6.4 Krylov Subspaces and Conjugate Gradients
6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P
More informationOn Lagrange multipliers of trust-region subproblems
On Lagrange multipliers of trust-region subproblems Ladislav Lukšan, Ctirad Matonoha, Jan Vlček Institute of Computer Science AS CR, Prague Programy a algoritmy numerické matematiky 14 1.- 6. června 2008
More informationNumerical behavior of inexact linear solvers
Numerical behavior of inexact linear solvers Miro Rozložník joint results with Zhong-zhi Bai and Pavel Jiránek Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic The fourth
More informationIntroduction to Iterative Solvers of Linear Systems
Introduction to Iterative Solvers of Linear Systems SFB Training Event January 2012 Prof. Dr. Andreas Frommer Typeset by Lukas Krämer, Simon-Wolfgang Mages and Rudolf Rödl 1 Classes of Matrices and their
More informationNotes on Some Methods for Solving Linear Systems
Notes on Some Methods for Solving Linear Systems Dianne P. O Leary, 1983 and 1999 and 2007 September 25, 2007 When the matrix A is symmetric and positive definite, we have a whole new class of algorithms
More informationThe Conjugate Gradient Method for Solving Linear Systems of Equations
The Conjugate Gradient Method for Solving Linear Systems of Equations Mike Rambo Mentor: Hans de Moor May 2016 Department of Mathematics, Saint Mary s College of California Contents 1 Introduction 2 2
More informationAPPLIED NUMERICAL LINEAR ALGEBRA
APPLIED NUMERICAL LINEAR ALGEBRA James W. Demmel University of California Berkeley, California Society for Industrial and Applied Mathematics Philadelphia Contents Preface 1 Introduction 1 1.1 Basic Notation
More informationSummary of Iterative Methods for Non-symmetric Linear Equations That Are Related to the Conjugate Gradient (CG) Method
Summary of Iterative Methods for Non-symmetric Linear Equations That Are Related to the Conjugate Gradient (CG) Method Leslie Foster 11-5-2012 We will discuss the FOM (full orthogonalization method), CG,
More informationTotal least squares. Gérard MEURANT. October, 2008
Total least squares Gérard MEURANT October, 2008 1 Introduction to total least squares 2 Approximation of the TLS secular equation 3 Numerical experiments Introduction to total least squares In least squares
More informationConjugate Gradient Tutorial
Conjugate Gradient Tutorial Prof. Chung-Kuan Cheng Computer Science and Engineering Department University of California, San Diego ckcheng@ucsd.edu December 1, 2015 Prof. Chung-Kuan Cheng (UC San Diego)
More informationKey words. linear equations, polynomial preconditioning, nonsymmetric Lanczos, BiCGStab, IDR
POLYNOMIAL PRECONDITIONED BICGSTAB AND IDR JENNIFER A. LOE AND RONALD B. MORGAN Abstract. Polynomial preconditioning is applied to the nonsymmetric Lanczos methods BiCGStab and IDR for solving large nonsymmetric
More informationReduced Synchronization Overhead on. December 3, Abstract. The standard formulation of the conjugate gradient algorithm involves
Lapack Working Note 56 Conjugate Gradient Algorithms with Reduced Synchronization Overhead on Distributed Memory Multiprocessors E. F. D'Azevedo y, V.L. Eijkhout z, C. H. Romine y December 3, 1999 Abstract
More informationLinear algebra issues in Interior Point methods for bound-constrained least-squares problems
Linear algebra issues in Interior Point methods for bound-constrained least-squares problems Stefania Bellavia Dipartimento di Energetica S. Stecco Università degli Studi di Firenze Joint work with Jacek
More informationDo You Trust Derivatives or Differences?
ARGONNE NATIONAL LABORATORY 9700 South Cass Avenue Argonne, Illinois 60439 Do You Trust Derivatives or Differences? Jorge J. Moré and Stefan M. Wild Mathematics and Computer Science Division Preprint ANL/MCS-P2067-0312
More informationECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems. Part I: Review of basic theory of eigenvalue problems
ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems Part I: Review of basic theory of eigenvalue problems 1. Let A C n n. (a) A scalar λ is an eigenvalue of an n n A
More informationNumerical Methods in Matrix Computations
Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices
More informationFEM and sparse linear system solving
FEM & sparse linear system solving, Lecture 9, Nov 19, 2017 1/36 Lecture 9, Nov 17, 2017: Krylov space methods http://people.inf.ethz.ch/arbenz/fem17 Peter Arbenz Computer Science Department, ETH Zürich
More informationA Study on Trust Region Update Rules in Newton Methods for Large-scale Linear Classification
JMLR: Workshop and Conference Proceedings 1 16 A Study on Trust Region Update Rules in Newton Methods for Large-scale Linear Classification Chih-Yang Hsia r04922021@ntu.edu.tw Dept. of Computer Science,
More informationAn interior-point method for large constrained discrete ill-posed problems
An interior-point method for large constrained discrete ill-posed problems S. Morigi a, L. Reichel b,,1, F. Sgallari c,2 a Dipartimento di Matematica, Università degli Studi di Bologna, Piazza Porta S.
More informationOn the influence of eigenvalues on Bi-CG residual norms
On the influence of eigenvalues on Bi-CG residual norms Jurjen Duintjer Tebbens Institute of Computer Science Academy of Sciences of the Czech Republic duintjertebbens@cs.cas.cz Gérard Meurant 30, rue
More informationEECS 275 Matrix Computation
EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA 95344 http://faculty.ucmerced.edu/mhyang Lecture 20 1 / 20 Overview
More informationFinite-choice algorithm optimization in Conjugate Gradients
Finite-choice algorithm optimization in Conjugate Gradients Jack Dongarra and Victor Eijkhout January 2003 Abstract We present computational aspects of mathematically equivalent implementations of the
More informationOn the Superlinear Convergence of MINRES. Valeria Simoncini and Daniel B. Szyld. Report January 2012
On the Superlinear Convergence of MINRES Valeria Simoncini and Daniel B. Szyld Report 12-01-11 January 2012 This report is available in the World Wide Web at http://www.math.temple.edu/~szyld 0 Chapter
More informationLarge-scale eigenvalue problems
ELE 538B: Mathematics of High-Dimensional Data Large-scale eigenvalue problems Yuxin Chen Princeton University, Fall 208 Outline Power method Lanczos algorithm Eigenvalue problems 4-2 Eigendecomposition
More informationCME342 Parallel Methods in Numerical Analysis. Matrix Computation: Iterative Methods II. Sparse Matrix-vector Multiplication.
CME342 Parallel Methods in Numerical Analysis Matrix Computation: Iterative Methods II Outline: CG & its parallelization. Sparse Matrix-vector Multiplication. 1 Basic iterative methods: Ax = b r = b Ax
More informationApproximating the matrix exponential of an advection-diffusion operator using the incomplete orthogonalization method
Approximating the matrix exponential of an advection-diffusion operator using the incomplete orthogonalization method Antti Koskela KTH Royal Institute of Technology, Lindstedtvägen 25, 10044 Stockholm,
More informationON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH
ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix
More informationInexactness and flexibility in linear Krylov solvers
Inexactness and flexibility in linear Krylov solvers Luc Giraud ENSEEIHT (N7) - IRIT, Toulouse Matrix Analysis and Applications CIRM Luminy - October 15-19, 2007 in honor of Gérard Meurant for his 60 th
More informationITERATIVE METHODS BASED ON KRYLOV SUBSPACES
ITERATIVE METHODS BASED ON KRYLOV SUBSPACES LONG CHEN We shall present iterative methods for solving linear algebraic equation Au = b based on Krylov subspaces We derive conjugate gradient (CG) method
More informationSolutions and Notes to Selected Problems In: Numerical Optimzation by Jorge Nocedal and Stephen J. Wright.
Solutions and Notes to Selected Problems In: Numerical Optimzation by Jorge Nocedal and Stephen J. Wright. John L. Weatherwax July 7, 2010 wax@alum.mit.edu 1 Chapter 5 (Conjugate Gradient Methods) Notes
More informationOn the Modification of an Eigenvalue Problem that Preserves an Eigenspace
Purdue University Purdue e-pubs Department of Computer Science Technical Reports Department of Computer Science 2009 On the Modification of an Eigenvalue Problem that Preserves an Eigenspace Maxim Maumov
More informationTikhonov Regularization of Large Symmetric Problems
NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 2000; 00:1 11 [Version: 2000/03/22 v1.0] Tihonov Regularization of Large Symmetric Problems D. Calvetti 1, L. Reichel 2 and A. Shuibi
More informationReduced-Hessian Methods for Constrained Optimization
Reduced-Hessian Methods for Constrained Optimization Philip E. Gill University of California, San Diego Joint work with: Michael Ferry & Elizabeth Wong 11th US & Mexico Workshop on Optimization and its
More informationA Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations
A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations Jin Yun Yuan Plamen Y. Yalamov Abstract A method is presented to make a given matrix strictly diagonally dominant
More informationLAPACK-Style Codes for Pivoted Cholesky and QR Updating. Hammarling, Sven and Higham, Nicholas J. and Lucas, Craig. MIMS EPrint: 2006.
LAPACK-Style Codes for Pivoted Cholesky and QR Updating Hammarling, Sven and Higham, Nicholas J. and Lucas, Craig 2007 MIMS EPrint: 2006.385 Manchester Institute for Mathematical Sciences School of Mathematics
More informationCourse Notes: Week 1
Course Notes: Week 1 Math 270C: Applied Numerical Linear Algebra 1 Lecture 1: Introduction (3/28/11) We will focus on iterative methods for solving linear systems of equations (and some discussion of eigenvalues
More informationResearch Article Some Generalizations and Modifications of Iterative Methods for Solving Large Sparse Symmetric Indefinite Linear Systems
Abstract and Applied Analysis Article ID 237808 pages http://dxdoiorg/055/204/237808 Research Article Some Generalizations and Modifications of Iterative Methods for Solving Large Sparse Symmetric Indefinite
More information7.3 The Jacobi and Gauss-Siedel Iterative Techniques. Problem: To solve Ax = b for A R n n. Methodology: Iteratively approximate solution x. No GEPP.
7.3 The Jacobi and Gauss-Siedel Iterative Techniques Problem: To solve Ax = b for A R n n. Methodology: Iteratively approximate solution x. No GEPP. 7.3 The Jacobi and Gauss-Siedel Iterative Techniques
More informationOUTLINE 1. Introduction 1.1 Notation 1.2 Special matrices 2. Gaussian Elimination 2.1 Vector and matrix norms 2.2 Finite precision arithmetic 2.3 Fact
Computational Linear Algebra Course: (MATH: 6800, CSCI: 6800) Semester: Fall 1998 Instructors: { Joseph E. Flaherty, aherje@cs.rpi.edu { Franklin T. Luk, luk@cs.rpi.edu { Wesley Turner, turnerw@cs.rpi.edu
More informationAN ITERATIVE METHOD WITH ERROR ESTIMATORS
AN ITERATIVE METHOD WITH ERROR ESTIMATORS D. CALVETTI, S. MORIGI, L. REICHEL, AND F. SGALLARI Abstract. Iterative methods for the solution of linear systems of equations produce a sequence of approximate
More informationGradient Method Based on Roots of A
Journal of Scientific Computing, Vol. 15, No. 4, 2000 Solving Ax Using a Modified Conjugate Gradient Method Based on Roots of A Paul F. Fischer 1 and Sigal Gottlieb 2 Received January 23, 2001; accepted
More informationIterative Methods for Linear Systems of Equations
Iterative Methods for Linear Systems of Equations Projection methods (3) ITMAN PhD-course DTU 20-10-08 till 24-10-08 Martin van Gijzen 1 Delft University of Technology Overview day 4 Bi-Lanczos method
More informationComputational Linear Algebra
Computational Linear Algebra PD Dr. rer. nat. habil. Ralf-Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2018/19 Part 4: Iterative Methods PD
More informationCME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 0
CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 0 GENE H GOLUB 1 What is Numerical Analysis? In the 1973 edition of the Webster s New Collegiate Dictionary, numerical analysis is defined to be the
More informationThe quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying
I.2 Quadratic Eigenvalue Problems 1 Introduction The quadratic eigenvalue problem QEP is to find scalars λ and nonzero vectors u satisfying where Qλx = 0, 1.1 Qλ = λ 2 M + λd + K, M, D and K are given
More informationNon-stationary extremal eigenvalue approximations in iterative solutions of linear systems and estimators for relative error
on-stationary extremal eigenvalue approximations in iterative solutions of linear systems and estimators for relative error Divya Anand Subba and Murugesan Venatapathi* Supercomputer Education and Research
More informationDELFT UNIVERSITY OF TECHNOLOGY
DELFT UNIVERSITY OF TECHNOLOGY REPORT 10-12 Large-Scale Eigenvalue Problems in Trust-Region Calculations Marielba Rojas, Bjørn H. Fotland, and Trond Steihaug ISSN 1389-6520 Reports of the Department of
More informationA Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem
A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Suares Problem Hongguo Xu Dedicated to Professor Erxiong Jiang on the occasion of his 7th birthday. Abstract We present
More informationOn the loss of orthogonality in the Gram-Schmidt orthogonalization process
CERFACS Technical Report No. TR/PA/03/25 Luc Giraud Julien Langou Miroslav Rozložník On the loss of orthogonality in the Gram-Schmidt orthogonalization process Abstract. In this paper we study numerical
More informationArnoldi Methods in SLEPc
Scalable Library for Eigenvalue Problem Computations SLEPc Technical Report STR-4 Available at http://slepc.upv.es Arnoldi Methods in SLEPc V. Hernández J. E. Román A. Tomás V. Vidal Last update: October,
More informationAnalysis of Block LDL T Factorizations for Symmetric Indefinite Matrices
Analysis of Block LDL T Factorizations for Symmetric Indefinite Matrices Haw-ren Fang August 24, 2007 Abstract We consider the block LDL T factorizations for symmetric indefinite matrices in the form LBL
More informationGolub-Kahan iterative bidiagonalization and determining the noise level in the data
Golub-Kahan iterative bidiagonalization and determining the noise level in the data Iveta Hnětynková,, Martin Plešinger,, Zdeněk Strakoš, * Charles University, Prague ** Academy of Sciences of the Czech
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra)
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 21: Sensitivity of Eigenvalues and Eigenvectors; Conjugate Gradient Method Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis
More informationTopics. The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems
Topics The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems What about non-spd systems? Methods requiring small history Methods requiring large history Summary of solvers 1 / 52 Conjugate
More informationINCOMPLETE FACTORIZATION CONSTRAINT PRECONDITIONERS FOR SADDLE-POINT MATRICES
INCOMPLEE FACORIZAION CONSRAIN PRECONDIIONERS FOR SADDLE-POIN MARICES H. S. DOLLAR AND A. J. WAHEN Abstract. We consider the application of the conjugate gradient method to the solution of large symmetric,
More informationPreliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012
Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.
More informationCS137 Introduction to Scientific Computing Winter Quarter 2004 Solutions to Homework #3
CS137 Introduction to Scientific Computing Winter Quarter 2004 Solutions to Homework #3 Felix Kwok February 27, 2004 Written Problems 1. (Heath E3.10) Let B be an n n matrix, and assume that B is both
More informationLAPACK-Style Codes for Pivoted Cholesky and QR Updating
LAPACK-Style Codes for Pivoted Cholesky and QR Updating Sven Hammarling 1, Nicholas J. Higham 2, and Craig Lucas 3 1 NAG Ltd.,Wilkinson House, Jordan Hill Road, Oxford, OX2 8DR, England, sven@nag.co.uk,
More informationThe amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A.
AMSC/CMSC 661 Scientific Computing II Spring 2005 Solution of Sparse Linear Systems Part 2: Iterative methods Dianne P. O Leary c 2005 Solving Sparse Linear Systems: Iterative methods The plan: Iterative
More informationCOMPUTING PROJECTIONS WITH LSQR*
BIT 37:1(1997), 96-104. COMPUTING PROJECTIONS WITH LSQR* MICHAEL A. SAUNDERS t Abstract. Systems Optimization Laboratory, Department of EES ~4 OR Stanford University, Stanford, CA 94305-4023, USA. email:
More informationChapter 7 Iterative Techniques in Matrix Algebra
Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition
More informationConjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294)
Conjugate gradient method Descent method Hestenes, Stiefel 1952 For A N N SPD In exact arithmetic, solves in N steps In real arithmetic No guaranteed stopping Often converges in many fewer than N steps
More informationOn the Preconditioning of the Block Tridiagonal Linear System of Equations
On the Preconditioning of the Block Tridiagonal Linear System of Equations Davod Khojasteh Salkuyeh Department of Mathematics, University of Mohaghegh Ardabili, PO Box 179, Ardabil, Iran E-mail: khojaste@umaacir
More informationComputing least squares condition numbers on hybrid multicore/gpu systems
Computing least squares condition numbers on hybrid multicore/gpu systems M. Baboulin and J. Dongarra and R. Lacroix Abstract This paper presents an efficient computation for least squares conditioning
More informationResidual iterative schemes for largescale linear systems
Universidad Central de Venezuela Facultad de Ciencias Escuela de Computación Lecturas en Ciencias de la Computación ISSN 1316-6239 Residual iterative schemes for largescale linear systems William La Cruz
More informationPERTURBED ARNOLDI FOR COMPUTING MULTIPLE EIGENVALUES
1 PERTURBED ARNOLDI FOR COMPUTING MULTIPLE EIGENVALUES MARK EMBREE, THOMAS H. GIBSON, KEVIN MENDOZA, AND RONALD B. MORGAN Abstract. fill in abstract Key words. eigenvalues, multiple eigenvalues, Arnoldi,
More informationEfficient Estimation of the A-norm of the Error in the Preconditioned Conjugate Gradient Method
Efficient Estimation of the A-norm of the Error in the Preconditioned Conjugate Gradient Method Zdeněk Strakoš and Petr Tichý Institute of Computer Science AS CR, Technical University of Berlin. Emory
More informationConjugate Gradient Method
Conjugate Gradient Method Tsung-Ming Huang Department of Mathematics National Taiwan Normal University October 10, 2011 T.M. Huang (NTNU) Conjugate Gradient Method October 10, 2011 1 / 36 Outline 1 Steepest
More informationFor δa E, this motivates the definition of the Bauer-Skeel condition number ([2], [3], [14], [15])
LAA 278, pp.2-32, 998 STRUCTURED PERTURBATIONS AND SYMMETRIC MATRICES SIEGFRIED M. RUMP Abstract. For a given n by n matrix the ratio between the componentwise distance to the nearest singular matrix and
More informationHigher-Order Methods
Higher-Order Methods Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. PCMI, July 2016 Stephen Wright (UW-Madison) Higher-Order Methods PCMI, July 2016 1 / 25 Smooth
More informationUpdating the QR decomposition of block tridiagonal and block Hessenberg matrices generated by block Krylov space methods
Updating the QR decomposition of block tridiagonal and block Hessenberg matrices generated by block Krylov space methods Martin H. Gutknecht Seminar for Applied Mathematics, ETH Zurich, ETH-Zentrum HG,
More information