CG VERSUS MINRES: AN EMPIRICAL COMPARISON


CG VERSUS MINRES: AN EMPIRICAL COMPARISON

DAVID CHIN-LUNG FONG AND MICHAEL SAUNDERS

Abstract. For iterative solution of symmetric systems Ax = b, the conjugate gradient method (CG) is commonly used when A is positive definite, while the minimum residual method (MINRES) is typically reserved for indefinite systems. We investigate the sequence of approximate solutions x_k generated by each method and suggest that even if A is positive definite, MINRES may be preferable to CG if iterations are to be terminated early. In particular, we show for MINRES that the solution norms ||x_k|| are monotonically increasing when A is positive definite (as was already known for CG), and the solution errors ||x* - x_k|| are monotonically decreasing. We also show that the backward errors for the MINRES iterates x_k are monotonically decreasing.

Key words. conjugate gradient method, minimum residual method, iterative method, sparse matrix, linear equations, CG, CR, MINRES, Krylov subspace method, trust-region method

Report SOL-R (to appear in SQU Journal for Science). ICME, Stanford University, CA, USA (david3.459@gmail.com); partially supported by a Stanford Graduate Fellowship. Systems Optimization Laboratory, Department of Management Science and Engineering, Stanford University, CA, USA (saunders@stanford.edu); partially supported by Office of Naval Research grant N4-8--9 and NSF grant DMS-95, and by the U.S. Army Research Laboratory, through the Army High Performance Computing Research Center, Cooperative Agreement W9NF-7-7.

1. Introduction. The conjugate gradient method (CG) [11] and the minimum residual method (MINRES) [18] are both Krylov subspace methods for the iterative solution of symmetric linear equations Ax = b. CG is commonly used when the matrix A is positive definite, while MINRES is generally reserved for indefinite systems [7, p. 85]. We reexamine this wisdom from the point of view of early termination on positive-definite systems.

We assume that the system Ax = b is real with A symmetric positive definite (spd) and of dimension n x n. The Lanczos process [13] with starting vector b may be used to generate the n x k matrix V_k = (v_1 v_2 ... v_k) and the (k+1) x k Hessenberg tridiagonal matrix T_k such that AV_k = V_{k+1} T_k for k = 1, 2, ..., l-1 and AV_l = V_l T_l for some l <= n, where the columns of V_k form a theoretically orthonormal basis for the kth Krylov subspace K_k(A, b) = span{b, Ab, A^2 b, ..., A^{k-1} b}, and T_l is l x l and tridiagonal. Approximate solutions within the kth Krylov subspace may be formed as x_k = V_k y_k for some k-vector y_k.

As shown in [18], three iterative methods, CG, MINRES, and SYMMLQ, may be derived by choosing y_k appropriately at each iteration. CG is well defined if A is spd, while MINRES and SYMMLQ are stable for any symmetric nonsingular A. As noted by Choi [2], SYMMLQ can form an approximation x_{k+1} = V_{k+1} y_{k+1} in the (k+1)th Krylov subspace when CG and MINRES are forming their approximations x_k = V_k y_k in the kth subspace. It would be of future interest to compare all three methods on spd systems, but for the remainder of this paper we focus on CG and MINRES.

With different methods using the same information V_{k+1} and T_k to compute solution estimates x_k = V_k y_k within the same Krylov subspace (for each k), it is commonly thought that the number of iterations required will be similar for each method, and hence CG should be preferable on spd systems because it requires somewhat fewer floating-point operations per iteration. This view is justified if an accurate solution is required (stopping tolerance tau close to machine precision eps).

We show that with looser stopping tolerances, MINRES is sure to terminate sooner than CG when the stopping rule is based on the backward error for x_k, and by numerical examples we illustrate that the difference in iteration numbers can be substantial.

1.1. Notation. We study the application of CG and MINRES to real symmetric positive-definite (spd) systems Ax = b. The unique solution is denoted by x*. The initial approximate solution is x_0 = 0, and r_k = b - A x_k is the residual vector for an approximation x_k within the kth Krylov subspace. For a vector v and matrix A, ||v|| and ||A|| denote the 2-norm and the Frobenius norm respectively, and A > 0 indicates that A is spd.

2. Minimization properties of Krylov subspace methods. With exact arithmetic, the Lanczos process terminates with k = l for some l <= n. To ensure that the approximations x_k = V_k y_k improve by some measure as k increases toward l, the Krylov solvers minimize some convex function within the expanding Krylov subspaces [9].

2.1. CG. When A is spd, the quadratic form phi(x) = (1/2) x^T A x - b^T x is bounded below, and its unique minimizer solves Ax = b. A characterization of the CG iterations is that they minimize the quadratic form within each Krylov subspace [9], [7, 2.4], [8, 8.8-8.9]:

    x_k^C = V_k y_k^C,  where  y_k^C = arg min_y phi(V_k y).

With b = Ax* and 2 phi(x_k) = x_k^T A x_k - 2 x_k^T A x*, this is equivalent to minimizing the function ||x* - x_k||_A^2 = (x* - x_k)^T A (x* - x_k), known as the energy norm of the error, within each Krylov subspace. For some applications, this is a desirable property [1, 4, 7, 8].

2.2. MINRES. For nonsingular (and possibly indefinite) systems, the residual norm was used in [18] to characterize the MINRES iterations:

    x_k^M = V_k y_k^M,  where  y_k^M = arg min_y ||b - A V_k y||.    (2.1)

Thus, MINRES minimizes ||r_k|| within the kth Krylov subspace. This was also an aim of Stiefel's Conjugate Residual method (CR) [23] for spd systems (and of Luenberger's extensions of CR to indefinite systems [15, 16]). Thus, CR and MINRES must generate the same iterates on spd systems. We use this connection to prove that ||x_k|| increases monotonically when MINRES is applied to an spd system.

2.3. CG and CR. The two methods for solving spd systems Ax = b are summarized in Table 2.1. The first two columns are pseudocodes for CG and CR with iteration number k omitted for clarity; they match our Matlab implementations. Note that q = Ap in both methods, but it is not computed as such in CR. Termination occurs when r = 0 (rho = beta = 0). To prove our main result we need to introduce iteration indices; see column 3 of Table 2.1. Termination occurs when r_k = 0 for some index k = l <= n (rho_l = beta_l = 0, r_l = s_l = p_l = q_l = 0). Note: This l is the same as the l at which the Lanczos process theoretically terminates for the given A and b.
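As a concrete companion to Table 2.1, here is a minimal Matlab sketch of the CR column (CG differs only in using rho = r'*r, alpha = rho/(p'*q), and in not carrying the s and q recurrences). The function name cr_spd, the zero starting vector, the relative-residual exit test, and the returned norm histories are our own illustrative choices rather than the paper's actual codes.

    function [x, resvec, xnorms] = cr_spd(A, b, tol, maxit)
    % Conjugate Residual (CR) iteration for spd A, following column 2 of Table 2.1.
    % Returns the final iterate plus the histories of ||r_k|| and ||x_k||.
    x = zeros(size(b));
    r = b;  s = A*r;  rho = r'*s;        % rho = r'*A*r > 0 when A is spd
    p = r;  q = s;                       % q = A*p, maintained by recurrence
    resvec = norm(r);  xnorms = 0;
    for k = 1:maxit
        alpha = rho/(q'*q);              % alpha_k = rho_{k-1} / ||q_{k-1}||^2
        x = x + alpha*p;
        r = r - alpha*q;
        resvec(k+1,1) = norm(r);
        xnorms(k+1,1) = norm(x);
        if resvec(k+1) <= tol*norm(b), break, end
        s = A*r;
        rhobar = rho;  rho = r'*s;       % rho_k = r_k' * A * r_k
        beta = rho/rhobar;
        p = r + beta*p;
        q = s + beta*q;                  % keeps q = A*p with one product per iteration
    end
    end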

Table 2.1. Pseudocode for algorithms CG and CR.

CG:
  Initialize x = 0, r = b, rho = r'r, p = r
  Repeat:
    q = Ap
    alpha = rho / (p'q)
    x <- x + alpha p
    r <- r - alpha q
    rhobar = rho,  rho = r'r
    beta = rho / rhobar
    p <- r + beta p

CR:
  Initialize x = 0, r = b, s = Ar, rho = r's, p = r, q = s
  Repeat:
    (q = Ap)
    alpha = rho / ||q||^2
    x <- x + alpha p
    r <- r - alpha q
    s = Ar
    rhobar = rho,  rho = r's
    beta = rho / rhobar
    p <- r + beta p
    q <- s + beta q

CR (with iteration indices):
  Initialize x_0 = 0, r_0 = b, s_0 = A r_0, rho_0 = r_0' s_0, p_0 = r_0, q_0 = s_0
  For k = 1, 2, ...:
    (q_{k-1} = A p_{k-1})
    alpha_k = rho_{k-1} / ||q_{k-1}||^2
    x_k = x_{k-1} + alpha_k p_{k-1}
    r_k = r_{k-1} - alpha_k q_{k-1}
    s_k = A r_k
    rho_k = r_k' s_k
    beta_k = rho_k / rho_{k-1}
    p_k = r_k + beta_k p_{k-1}
    q_k = s_k + beta_k q_{k-1}

Theorem 2.1. The following properties hold for Algorithm CR:
(a) q_i' q_j = 0  (i != j)
(b) r_i' q_j = 0  (i >= j + 1)

Proof. Given in [16].

Theorem 2.2. The following properties hold for Algorithm CR:
(a) alpha_i > 0
(b) beta_i >= 0
(c) p_i' q_j >= 0
(d) p_i' p_j >= 0
(e) x_i' p_j >= 0
(f) r_i' p_j >= 0

Proof.
(a) Here we use the fact that A is spd:

    rho_i = r_i' s_i = r_i' A r_i > 0   (A spd),    (2.2)
    alpha_i = rho_{i-1} / ||q_{i-1}||^2 > 0.

The inequalities are strict until i = l (and r_l = 0).
(b) And again:  beta_i = rho_i / rho_{i-1} > 0  (by (2.2)).
(c) Case I: i = j.

    p_i' q_i = p_i' A p_i > 0   (A spd).

Case II: i - j = k > 0.

    p_i' q_j = p_i' q_{i-k} = r_i' q_{i-k} + beta_i p_{i-1}' q_{i-k} = beta_i p_{i-1}' q_{i-k} >= 0   (by Thm 2.1(b)),

where beta_i >= 0 by (b) and p_{i-1}' q_{i-k} >= 0 by induction, as (i-1) - (i-k) = k - 1 < k.

Case III: j - i = k > 0.

    p_i' q_j = p_i' q_{i+k} = p_i' A p_{i+k} = p_i' A (r_{i+k} + beta_{i+k} p_{i+k-1})
             = q_i' r_{i+k} + beta_{i+k} p_i' q_{i+k-1} = beta_{i+k} p_i' q_{i+k-1} >= 0   (by Thm 2.1(b)),

where beta_{i+k} >= 0 by (b) and p_i' q_{i+k-1} >= 0 by induction, as (i+k-1) - i = k - 1 < k.
(d) At termination, define P = span{p_0, p_1, ..., p_{l-1}} and Q = span{q_0, ..., q_{l-1}}. By construction, P = span{b, Ab, ..., A^{l-1} b} and Q = span{Ab, ..., A^l b} (since q_i = A p_i). Again by construction, x_l lies in P, and since r_l = 0 we have b = A x_l, so b lies in Q. We see that P is contained in Q. By Theorem 2.1(a), {q_i / ||q_i||} for i = 0, ..., l-1 forms an orthonormal basis for Q. If we project p_i (which lies in P, hence in Q) onto this basis, we have

    p_i = sum_{k=0}^{l-1} (p_i' q_k / q_k' q_k) q_k,

where all coordinates are non-negative from (c). Similarly for any other p_j, j < l. Therefore p_i' p_j >= 0 for any i, j < l.
(e) By construction,

    x_i = x_{i-1} + alpha_i p_{i-1} = ... = sum_{k=1}^{i} alpha_k p_{k-1}   (x_0 = 0).

Therefore x_i' p_j >= 0 by (d) and (a).
(f) Note that any r_i can be expressed as a sum of the q_j:

    r_i = r_{i+1} + alpha_{i+1} q_i = ... = r_l + alpha_l q_{l-1} + ... + alpha_{i+1} q_i = alpha_l q_{l-1} + ... + alpha_{i+1} q_i.

Thus we have

    r_i' p_j = (alpha_l q_{l-1} + ... + alpha_{i+1} q_i)' p_j >= 0,

where the inequality follows from (a) and (c).

We are now able to prove our main theorem about the monotonic increase of ||x_k|| for CR and MINRES. A similar result was proved for CG by Steihaug [21].

Theorem 2.3. For CR (and hence MINRES) on an spd system Ax = b, ||x_k|| increases monotonically.

Proof.

    ||x_i||^2 - ||x_{i-1}||^2 = 2 alpha_i x_{i-1}' p_{i-1} + alpha_i^2 p_{i-1}' p_{i-1} >= 0,

where the last inequality follows from Theorem 2.2 (a), (d), and (e). Therefore ||x_i|| >= ||x_{i-1}||.
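As a quick numerical illustration of Theorem 2.3 (not a substitute for the proof), the cr_spd sketch shown above in Section 2.3 can be run on a random spd system and its norm histories checked for monotonicity. The test matrix, seed, and tolerances below are arbitrary choices of ours.

    n = 500;  rng(1);                        % small reproducible example
    B = sprandn(n, n, 0.02);
    A = B*B' + speye(n);                     % spd by construction
    b = randn(n, 1);
    [~, resvec, xnorms] = cr_spd(A, b, 1e-10, n);
    % Theorem 2.3: the CR (= MINRES) solution norms never decrease.
    assert(all(diff(xnorms) >= -1e-12*max(xnorms)))
    % CR also minimizes ||r_k||, so its residual norms are non-increasing,
    % unlike CG, whose residual norms may oscillate (cf. Section 4.1.1).
    assert(all(diff(resvec) <= 1e-12*resvec(1)))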

||x* - x_k|| is known to be monotonic for CG [11]. The corresponding result for CR is a direct consequence of [11, Thm 7:5]. However, the second half of that theorem, ||x* - x_k^C|| > ||x* - x_k^M||, rarely holds in machine arithmetic. We give here an alternative proof that does not depend on [11].

Theorem 2.4. For CR (and hence MINRES) on an spd system Ax = b, the error ||x* - x_k|| decreases monotonically.

Proof. From the update rule for x_k, we can express the final solution x_l = x* as

    x_l = x_{l-1} + alpha_l p_{l-1} = ... = x_k + alpha_{k+1} p_k + ... + alpha_l p_{l-1}    (2.3)
        = x_{k-1} + alpha_k p_{k-1} + alpha_{k+1} p_k + ... + alpha_l p_{l-1}.               (2.4)

Using the last two equalities above, we can write

    ||x_l - x_{k-1}||^2 - ||x_l - x_k||^2 = (x_l - x_{k-1})'(x_l - x_{k-1}) - (x_l - x_k)'(x_l - x_k)
        = 2 alpha_k p_{k-1}'(alpha_{k+1} p_k + ... + alpha_l p_{l-1}) + alpha_k^2 p_{k-1}' p_{k-1} >= 0,

where the last inequality follows from Theorem 2.2 (a), (d).

The energy norm error ||x* - x_k||_A is monotonic for CG by design. The corresponding result for CR is given in [11, Thm 7:4]. We give an alternative proof here.

Theorem 2.5. For CR (and hence MINRES) on an spd system Ax = b, the error in energy norm ||x* - x_k||_A is strictly decreasing.

Proof. From (2.3) and (2.4), we can write

    ||x_l - x_{k-1}||_A^2 - ||x_l - x_k||_A^2 = (x_l - x_{k-1})' A (x_l - x_{k-1}) - (x_l - x_k)' A (x_l - x_k)
        = 2 alpha_k p_{k-1}' A (alpha_{k+1} p_k + ... + alpha_l p_{l-1}) + alpha_k^2 p_{k-1}' A p_{k-1}
        = 2 alpha_k q_{k-1}'(alpha_{k+1} p_k + ... + alpha_l p_{l-1}) + alpha_k^2 q_{k-1}' p_{k-1} > 0,

where the last inequality follows from Theorem 2.2 (a), (c).

3. Backward error analysis. For many physical problems requiring numerical solution, we are given inexact or uncertain input data (in this case A and/or b). It is not justifiable to seek a solution beyond the accuracy of the data [6]. Instead, it is more reasonable to stop an iterative solver once we know that the current approximate solution solves a nearby problem. The measure of "nearby" should match the error in the input data. The design of such stopping rules is an important application of backward error analysis.

For a consistent linear system Ax = b, we think of x_k as coming from the kth iteration of one of the iterative solvers. Following Titley-Peloquin [25] we say that x_k is an acceptable solution if and only if there exist perturbations E and f satisfying

    (A + E) x_k = b + f,    ||E|| <= alpha ||A||,    ||f|| <= beta ||b||    (3.1)

for some tolerances alpha, beta that reflect the (preferably known) accuracy of the data. We are naturally interested in minimizing the size of E and f.

If we define the optimization problem

    min_{xi, E, f} xi   subject to   (A + E) x_k = b + f,   ||E|| <= alpha xi ||A||,   ||f|| <= beta xi ||b||

to have optimal solution xi_k, E_k, f_k (all functions of x_k, alpha, and beta), we see that x_k is an acceptable solution if and only if xi_k <= 1. We call xi_k the normwise relative backward error (NRBE) for x_k.

With r_k = b - A x_k, the optimal solution xi_k, E_k, f_k is shown in [25] to be

    phi_k = beta ||b|| / (alpha ||A|| ||x_k|| + beta ||b||),      E_k = ((1 - phi_k) / ||x_k||^2) r_k x_k',    (3.2)
    xi_k  = ||r_k|| / (alpha ||A|| ||x_k|| + beta ||b||),         f_k = -phi_k r_k.                            (3.3)

(See [22] for the case beta = 0 and [12, 7.1] for the case alpha = beta.)

3.1. Stopping rule. For general tolerances alpha and beta, the condition xi_k <= 1 for x_k to be an acceptable solution becomes

    ||r_k|| <= alpha ||A|| ||x_k|| + beta ||b||,    (3.4)

the stopping rule used in LSQR for consistent systems [19, p. 54, rule S1].

3.2. Monotonic backward errors. Of interest is the size of the perturbations to A and b for which x_k is an exact solution of Ax = b. From (3.2)-(3.3), the perturbations have the following norms:

    ||E_k|| = (1 - phi_k) ||r_k|| / ||x_k|| = alpha ||A|| ||r_k|| / (alpha ||A|| ||x_k|| + beta ||b||),    (3.5)
    ||f_k|| = phi_k ||r_k|| = beta ||b|| ||r_k|| / (alpha ||A|| ||x_k|| + beta ||b||).                      (3.6)

Since ||x_k|| is monotonically increasing for CG and MINRES, we see from (3.2) that phi_k is monotonically decreasing for both solvers. Since ||r_k|| is monotonically decreasing for MINRES (but not for CG), we have the following result.

Theorem 3.1. Suppose alpha > 0 and beta > 0 in (3.1). For CR and MINRES (but not CG), the relative backward errors ||E_k||/||A|| and ||f_k||/||b|| decrease monotonically.

Proof. This follows from (3.5)-(3.6) with ||x_k|| increasing for both solvers and ||r_k|| decreasing for CR and MINRES but not for CG.

4. Numerical results. Here we compare the convergence of CG and MINRES on various spd systems Ax = b and some associated indefinite systems (A - delta I)x = b. The test examples are drawn from the University of Florida Sparse Matrix Collection (Davis [5]). We experimented with all the cases for which A is real spd and b is supplied. In Matlab we computed the condition number for each test matrix by finding the largest and smallest eigenvalues using eigs(A,1,'LM') and eigs(A,1,'SM') respectively. For this test set, the condition numbers vary over many orders of magnitude.

Since A is spd, we apply diagonal preconditioning by redefining A and b as follows: d = diag(A), D = diag(1./sqrt(d)), A <- DAD, b <- Db, b <- b/||b||. Thus in the figures below we have diag(A) = I and ||b|| = 1. With this preconditioning, the condition numbers are substantially reduced. The distribution of condition numbers of the test set matrices before and after preconditioning is shown in Figure 4.1.

The stopping rule used for CG and MINRES was (3.4) with alpha = 0 and a small fixed beta; that is, ||r_k|| <= beta ||b|| (but with a maximum of 5n iterations for spd systems and n iterations for indefinite systems).
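For reference, the preconditioning just described and the quantities in (3.2)-(3.6) are straightforward to evaluate in Matlab. The following sketch is ours (variable names and the use of the Frobenius norm of Section 1.1 for ||A|| are our choices); it assumes A, b, an iterate xk, and tolerances alpha and beta are already in the workspace.

    % Diagonal preconditioning of Section 4: make diag(A) = I and ||b|| = 1.
    d = full(diag(A));
    D = spdiags(1./sqrt(d), 0, numel(d), numel(d));
    A = D*A*D;  b = D*b;  b = b/norm(b);

    % Normwise relative backward error (3.2)-(3.3) and stopping test (3.4)
    % for an approximate solution xk of A*x = b.
    rk    = b - A*xk;
    normA = norm(A, 'fro');                 % ||A|| is the Frobenius norm (Section 1.1)
    denom = alpha*normA*norm(xk) + beta*norm(b);
    phi   = beta*norm(b)/denom;             % phi_k in (3.2)
    xi    = norm(rk)/denom;                 % NRBE xi_k in (3.3); acceptable iff xi <= 1
    Enorm = (1 - phi)*norm(rk)/norm(xk);    % ||E_k|| in (3.5)
    fnorm = phi*norm(rk);                   % ||f_k|| in (3.6)
    stop  = norm(rk) <= denom;              % stopping rule (3.4)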

Fig. 4.1. Distribution of condition numbers for the matrices used in the CG vs MINRES comparison, before and after diagonal preconditioning.

4.1. Positive-definite systems. In plotting backward errors, we assume for simplicity that alpha > 0 and beta = 0 in (3.2)-(3.3), even though this doesn't match the choice of alpha and beta in the stopping rule (3.4). This gives phi_k = 0 and ||E_k|| = ||r_k|| / ||x_k|| in (3.5). Thus, as in Theorem 3.1, we expect ||E_k|| to decrease monotonically for CR and MINRES but not for CG.

In Figures 4.2 and 4.3, we plot ||r_k|| / ||x_k||, ||x* - x_k||_A, and ||x* - x_k|| for CG and MINRES for four different problems. For CG, the plots confirm that ||x* - x_k||_A and ||x* - x_k|| are monotonic. For MINRES, the plots confirm the prediction of Theorems 3.1, 2.5, and 2.4 that ||r_k|| / ||x_k||, ||x* - x_k||_A, and ||x* - x_k|| are monotonic.

Figure 4.2 (left) shows problem Schenk_AFE/af_shell8 with A of size 504855 x 504855 and cond(A) = .7E+5. From the plot of backward errors ||r_k|| / ||x_k||, we see that both CG and MINRES converge quickly at the early iterations. Then the backward error of MINRES plateaus, and the backward error of CG stays about an order of magnitude behind. A similar phenomenon of fast convergence at early iterations followed by slow convergence is observed in the energy-norm error and 2-norm error plots.

Figure 4.2 (right) shows problem Cannizzo/sts4098 with A of size 4098 x 4098 and cond(A) = 6.7E+3. MINRES converges slightly faster in terms of backward error, while CG converges slightly faster in terms of both error norms.

Fig. 4.2. Comparison of backward and forward errors for CG and MINRES solving two spd systems Ax = b. Left: Problem Schenk_AFE/af_shell8 with n = 504855 and cond(A) = .7E+5. Note that MINRES stops significantly sooner than CG under stopping rule (3.4). Right: Cannizzo/sts4098 with n = 4098 and cond(A) = 6.7E+3. MINRES stops slightly sooner than CG. Top: The values of log10(||r_k|| / ||x_k||) are plotted against iteration number k. These values define log10(||E_k||) when the stopping tolerances in (3.4) are alpha > 0 and beta = 0. Middle: The values of log10 ||x* - x_k||_A are plotted against iteration number k. This is the quantity that CG minimizes at each iteration. Bottom: The values of log10 ||x* - x_k||.

Fig. 4.3. Comparison of backward and forward errors for CG and MINRES solving two more spd systems Ax = b. Left: Problem Simon/raefsky4, with n = 19779. Right: BenElechi/BenElechi1, with n = 245874 and cond(A) = .8E+9. Top: The values of log10(||r_k|| / ||x_k||) are plotted against iteration number k. These values define log10(||E_k||) when the stopping tolerances in (3.4) are alpha > 0 and beta = 0. Middle: The values of log10 ||x* - x_k||_A are plotted against iteration number k. This is the quantity that CG minimizes at each iteration. Bottom: The values of log10 ||x* - x_k||.

Figure 4.3 (left) shows problem Simon/raefsky4 with A of size 19779 x 19779. Because of the high condition number, both algorithms hit the 5n iteration limit. We see that the backward error for MINRES converges faster than for CG, as expected. For the energy norm error, CG is able to decrease over 5 orders of magnitude while MINRES plateaus after a much smaller decrease. For both the energy norm error and the 2-norm error, MINRES reaches a lower point than CG for some iterations. This must be due to numerical error in CG and MINRES (a result of loss of orthogonality in V_k).

Figure 4.3 (right) shows problem BenElechi/BenElechi1 with A of size 245874 x 245874 and cond(A) = .8E+9. The backward error of MINRES stays ahead of CG by orders of magnitude for most iterations. Near the end of the run, the backward error of both algorithms goes down rapidly and CG catches up with MINRES. Both algorithms exhibit a plateau in the energy norm error over the early iterations. The error norms start decreasing later and fall even faster near the end of the run.

Figure 4.4 shows ||r_k|| and ||x_k|| for CG and MINRES on two typical spd examples. We see that ||x_k|| is monotonically increasing for both solvers, and the ||x_k|| values rise fairly rapidly to their limiting value ||x*||, with a moderate delay for MINRES.

Figure 4.5 shows ||r_k|| and ||x_k|| for CG and MINRES on two spd examples in which the residual decrease and the solution norm increase are somewhat slower than typical. The rise of ||x_k|| for MINRES is rather more delayed. In the second case, if the stopping tolerance beta were somewhat looser than the value used here, the final ||x_k|| (k < l) would be less than half the exact value ||x*||. It will be of future interest to evaluate this effect within the context of trust-region methods for optimization.

4.1.1. Why does ||r_k|| for CG lag behind? It is commonly thought that even though MINRES is known to minimize ||r_k|| at each iteration, the cumulative minimum of ||r_k|| for CG should approximately match that of MINRES. That is, min_{i <= k} ||r_i^C|| should be close to ||r_k^M||. However, in Figures 4.2 and 4.3 we see that ||r_k|| for MINRES is often smaller than for CG by 1 or 2 orders of magnitude. This phenomenon can be explained by the following relation between ||r_k^C|| and ||r_k^M|| ([10], [26]):

    ||r_k^C|| = ||r_k^M|| / sqrt(1 - (||r_k^M|| / ||r_{k-1}^M||)^2).    (4.1)

From (4.1), one can infer that if ||r_k^M|| decreases a lot between iterations k-1 and k, then ||r_k^C|| will be roughly the same as ||r_k^M||. The converse also holds, in that ||r_k^C|| will be much larger than ||r_k^M|| if MINRES is almost stalling at iteration k (i.e., ||r_k^M|| did not decrease much relative to the previous iteration).

The above analysis was pointed out by Titley-Peloquin [26] in comparing LSQR and LSMR [8]. We repeat the analysis here for CG vs MINRES and extend it to demonstrate why there is a lag in general for large problems.
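Relation (4.1) is easy to check numerically with Matlab's built-in pcg and minres, whose fifth output is the vector of residual norms. The sketch below is ours; it assumes A and b hold an spd test system (for example, after the diagonal preconditioning shown earlier), and the clamping of near-unit ratios is only a plotting guard for iterations where MINRES essentially stalls.

    tol = 1e-10;  maxit = min(size(A,1), 5000);
    [~,~,~,~, rC] = pcg(A, b, tol, maxit);      % ||r_k^C||, k = 0,1,2,...
    [~,~,~,~, rM] = minres(A, b, tol, maxit);   % ||r_k^M||, k = 0,1,2,...
    k = 2:numel(rM);
    ratio = min(rM(k)./rM(k-1), 1 - eps);       % guard against stalled MINRES steps
    rC_from_M = rM(k)./sqrt(1 - ratio.^2);      % CG residual norms predicted by (4.1)
    semilogy(0:numel(rC)-1, rC, '-', 1:numel(rM)-1, rC_from_M, '--')
    legend('CG (pcg)', 'CG predicted from MINRES via (4.1)')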

With alpha = 0 in stopping rule (3.4), CG and MINRES stop when ||r_k|| <= beta ||b||. If this occurs at iteration l, we have

    prod_{k=1}^{l} ||r_k|| / ||r_{k-1}|| = ||r_l|| / ||b|| <= beta.

Thus on average, ||r_k^M|| / ||r_{k-1}^M|| will be closer to 1 if l is large. This means that the larger l is (in absolute terms), the more CG will lag behind MINRES (a bigger gap between ||r_k^C|| and ||r_k^M||).

Fig. 4.4. Comparison of residual and solution norms for CG and MINRES solving two spd systems Ax = b. These are typical examples. Left: Problem Simon/olafu with n = 16146 and cond(A) = 4.3E+8. Right: Problem Cannizzo/sts4098 with n = 4098 and cond(A) = 6.7E+3. Top: The values of log10 ||r_k|| are plotted against iteration number k. Bottom: The values of ||x_k|| are plotted against k. The solution norms grow somewhat faster for CG than for MINRES. Both reach the limiting value ||x*|| significantly before x_k is close to x*.

4.2. Indefinite systems. A key part of Steihaug's trust-region method for large-scale unconstrained optimization [21] (see also [4]) is his proof that when CG is applied to a symmetric (possibly indefinite) system Ax = b, the solution norms ||x_1||, ..., ||x_k|| are strictly increasing as long as p_j' A p_j > 0 for all iterations j <= k. (We are using the notation in Table 2.1.)

Fig. 4.5. Comparison of residual and solution norms for CG and MINRES solving two more spd systems Ax = b. Sometimes the solution norms take longer to reach the limiting value ||x*||. Left: Problem Schmid/thermal1 with n = 82654 and cond(A) = 3.E+5. Right: Problem BenElechi/BenElechi1 with n = 245874 and cond(A) = .8E+9. Top: The values of log10 ||r_k|| are plotted against iteration number k. Bottom: The values of ||x_k|| are plotted against k. Again the solution norms grow faster for CG.

From our proof of Theorem 2.2, we see that the same property holds for CR and MINRES as long as both p_j' A p_j > 0 and r_j' A r_j > 0 for all iterations j <= k. In case future research finds that MINRES is a useful solver in the trust-region context, it is of interest now to offer some empirical results about the behavior of ||x_k|| when MINRES is applied to indefinite systems.

VERSUS 3.6 3 x k.4..8.6.4 r k / x k.5.5.5..5.5 3 Iteration number.5.5 3 Iteration number Fig. 4.6. For on the indefinite problem (4.), x k and the backward error r k / x k are both slightly non-monotonic. First, on the nonsingular indefinite system x =, (4.) gives non-monotonic solution norms, as shown in the left plot of Figure 4.6. The decrease in x k implies that the backward errors r k / x k may not be monotonic, as illustrated in the right plot. More generally, we can gain an impression of the behavior of x k by recalling from Choi et al. [3] the connection between and -QLP. Both methods compute the iterates x M k = V kyk M in (.) from the subproblems y M k = arg min y R k T k y β e and possibly T l y M l = β e. When A is nonsingular or Ax = b is consistent (which we now assume), yk M is uniquely defined for each k l and the methods compute the same iterates x M k (but by different numerical methods). In fact they both compute the expanding QR factorizations [ ] [ ] Rk t Q k Tk β e = k, φ k (with R k upper tridiagonal) and -QLP also computes the orthogonal factorizations R k P k = L k (with L k lower tridiagonal), from which the kth solution estimate is defined by W k = V k P k, L k u k = t k, and x M k = W ku k. As shown in [3, 5.3], the construction of these quantities is such that the first k 3 columns of W k are the same as in W k, and the first k 3 elements of u k are the same as in u k. Since W k has orthonormal columns, x M k = u k, where the first k elements of u k are unaltered by later iterations. As shown in [3, 6.5], it means that certain quantities can be cheaply updated to give norm estimates in the form χ χ + ˆµ k, x M k = χ + µ k + µ k, where it is clear that χ increases monotonically. Although the last two terms are of unpredictable size, x M k tends to be dominated by the monotonic term χ and we can expect that x M k will be approximately monotonic as k increases from to l.

Fig. 4.7. Residual norms and solution norms when MINRES is applied to two indefinite systems (A - delta I)x = b, where A is one of the spd matrices used in Figure 4.4 and delta = 0.5 is large enough to make the systems indefinite. Left: Problem Simon/olafu with n = 16146. Right: Problem Cannizzo/sts4098 with n = 4098. Top: The values of log10 ||r_k|| are plotted against iteration number k for the first n iterations. Bottom left: The values of ||x_k|| are plotted against k. During the n = 16146 iterations, ||x_k|| increased 83% of the time and the backward errors ||r_k|| / ||x_k|| (not shown) decreased 96% of the time. Bottom right: During the n = 4098 iterations, ||x_k|| increased about 90% of the time and the backward errors ||r_k|| / ||x_k|| (not shown) decreased 98% of the time.

Experimentally we find that for most iterations on an indefinite problem, ||x_k|| does increase. To obtain indefinite examples that were sensibly scaled, we used the four spd (A, b) cases in Figures 4.4-4.5, applied diagonal scaling as before, and solved (A - delta I)x = b with delta = 0.5, where A and b are now the scaled data (so that diag(A) = I). The number of iterations increased significantly but was limited to n.

Figure 4.7 shows log10 ||r_k|| and ||x_k|| for the first two cases (where A is one of the spd matrices in Figure 4.4). The values of ||x_k|| are essentially monotonic. The backward errors ||r_k|| / ||x_k|| (not shown) were even closer to being monotonic (at least for the first n iterations).

Fig. 4.8. Residual norms and solution norms when MINRES is applied to two indefinite systems (A - delta I)x = b, where A is one of the spd matrices used in Figure 4.5 and delta = 0.5 is large enough to make the systems indefinite. Left: Problem Schmid/thermal1 with n = 82654. Right: Problem BenElechi/BenElechi1 with n = 245874. Top: The values of log10 ||r_k|| are plotted against iteration number k for the first n iterations. Bottom left: The values of ||x_k|| are plotted against k. There is a mild but clear decrease in ||x_k|| over an interval of iterations. During the n = 82654 iterations, ||x_k|| increased 83% of the time and the backward errors ||r_k|| / ||x_k|| (not shown) decreased about 90% of the time. Bottom right: The solution norms and backward errors are essentially monotonic. During the n = 245874 iterations, ||x_k|| increased 88% of the time and the backward errors ||r_k|| / ||x_k|| (not shown) decreased 95% of the time.

Figure 4.8 shows ||x_k|| and log10 ||r_k|| for the second two cases (where A is one of the spd matrices in Figure 4.5). The left example reveals a definite period of decrease in ||x_k||. Nevertheless, during the n = 82654 iterations, ||x_k|| increased 83% of the time and the backward errors ||r_k|| / ||x_k|| decreased about 90% of the time. The right example is more like those in Figure 4.7. During the n = 245874 iterations, ||x_k|| increased 83% of the time, the backward errors ||r_k|| / ||x_k|| decreased about 90% of the time, and any nonmonotonicity was very slight.

Table 5.1. Comparison of CG and MINRES properties on an spd system Ax = b.

                                               CG                    MINRES
  ||x_k||          monotonically increasing    [21]                  (Thm 2.3)
  ||x* - x_k||     monotonically decreasing    [11, Thm 4:3]         (Thm 2.4), [11, Thm 7:5]
  ||x* - x_k||_A   monotonically decreasing    [11, Thm 6:3]         (Thm 2.5), [11, Thm 7:4]
  ||r_k||          monotonically decreasing    not monotonic [8]     [11]
  ||r_k||/||x_k||  monotonically decreasing    not monotonic         (Thm 3.1)

Table 5.2. Comparison of LSQR and LSMR properties on min ||Ax - b||.

                                               LSQR                  LSMR
  ||x_k||          monotonically increasing    [7, Thm 3.3.1]        [7, Thm 3.3.6]
  ||x* - x_k||     monotonically decreasing    [7, Thm 3.3.2]        [7, Thm 3.3.7]
  ||r* - r_k||     monotonically decreasing    [7, Thm 3.3.3]        [7, Thm 3.3.8]
  ||A'r_k||        monotonically decreasing    not monotonic         [8]
  ||r_k||          monotonically decreasing    [19]                  [7]
  Additional properties:
  (||A'r_k|| / ||r_k||)_LSQR >= (||A'r_k|| / ||r_k||)_LSMR
  ||x_k|| converges to the minimum-norm x* for singular systems

5. Conclusions. For full-rank least-squares problems min ||Ax - b||, the solvers LSQR [19, 20] and LSMR [8, 14] are equivalent to CG and MINRES on the (spd) normal equation A'Ax = A'b. Comparisons in [8] indicated that LSMR can often stop much sooner than LSQR when the stopping rule is based on Stewart's backward error norm ||A'r_k|| / ||r_k|| for least-squares problems [22]. Our theoretical and experimental results here provide analogous evidence that MINRES can often stop much sooner than CG on spd systems when the stopping rule is based on the backward error ||r_k|| / ||x_k|| for Ax = b (or the more general backward errors in Theorem 3.1). In some cases, MINRES can converge faster than CG by as much as 2 orders of magnitude (Figure 4.3). On the other hand, CG converges somewhat faster than MINRES in terms of both ||x* - x_k||_A and ||x* - x_k|| (same figure).

For spd systems, Table 5.1 summarizes properties that were already known by Hestenes and Stiefel [11] and Steihaug [21], along with the two additional MINRES properties that we proved here (Theorems 2.3 and 3.1). These theorems and experiments on CG and MINRES are part of the first author's PhD thesis [7], which also discusses LSQR and LSMR and derives some new results for both solvers. Table 5.2 summarizes the known results for LSQR and LSMR (in [19] and [8] respectively) and the newly derived properties for both solvers (in [7]).

Acknowledgements. We kindly thank Professor Mehiddin Al-Baali and other colleagues for organizing the Second International Conference on Numerical Analysis and Optimization at Sultan Qaboos University (January 3-6, Muscat, Sultanate of Oman). Their wish to publish some of the conference papers in a special issue of SQU Journal for Science gave added motivation for this research. We also thank Dr Sou-Cheng Choi for many helpful discussions of the iterative solvers, and Michael Friedlander for a final welcome tip.

REFERENCES

[1] M. Arioli, A stopping criterion for the conjugate gradient algorithm in a finite element method framework, Numer. Math., 97 (2004), pp. 1-24.
[2] S.-C. Choi, Iterative Methods for Singular Linear Equations and Least-Squares Problems, PhD thesis, Stanford University, Stanford, CA, December 2006.
[3] S.-C. Choi, C. C. Paige, and M. A. Saunders, MINRES-QLP: A Krylov subspace method for indefinite or singular symmetric systems, SIAM J. Sci. Comput., 33 (2011), pp. 1810-1836.
[4] A. R. Conn, N. I. M. Gould, and Ph. L. Toint, Trust-Region Methods, vol. 1, SIAM, Philadelphia, 2000.
[5] T. A. Davis, University of Florida Sparse Matrix Collection. http://www.cise.ufl.edu/research/sparse/matrices.
[6] J. J. Dongarra, J. R. Bunch, C. B. Moler, and G. W. Stewart, LINPACK Users' Guide, SIAM, Philadelphia, 1979.
[7] D. C.-L. Fong, Minimum-Residual Methods for Sparse Least-Squares Using Golub-Kahan Bidiagonalization, PhD thesis, Stanford University, Stanford, CA, December 2011.
[8] D. C.-L. Fong and M. A. Saunders, LSMR: An iterative algorithm for sparse least-squares problems, SIAM J. Sci. Comput., 33 (2011), pp. 2950-2971.
[9] R. W. Freund, G. H. Golub, and N. M. Nachtigal, Iterative solution of linear systems, Acta Numerica, 1 (1992), pp. 57-100.
[10] A. Greenbaum, Iterative Methods for Solving Linear Systems, SIAM, Philadelphia, 1997.
[11] M. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Nat. Bur. Standards, 49 (1952), pp. 409-436.
[12] N. J. Higham, Accuracy and Stability of Numerical Algorithms, SIAM, Philadelphia, second ed., 2002.
[13] C. Lanczos, An iteration method for the solution of the eigenvalue problem of linear differential and integral operators, J. Res. Nat. Bur. Standards, 45 (1950), pp. 255-282.
[14] LSMR software for linear systems and least squares. http://www.stanford.edu/group/sol/software.html.
[15] D. G. Luenberger, Hyperbolic pairs in the method of conjugate gradients, SIAM J. Appl. Math., 17 (1969), pp. 1263-1267.
[16] D. G. Luenberger, The conjugate residual method for constrained minimization problems, SIAM J. Numer. Anal., 7 (1970), pp. 390-398.
[17] G. A. Meurant, The Lanczos and Conjugate Gradient Algorithms: From Theory to Finite Precision Computations, vol. 19 of Software, Environments, and Tools, SIAM, Philadelphia, 2006.
[18] C. C. Paige and M. A. Saunders, Solution of sparse indefinite systems of linear equations, SIAM J. Numer. Anal., 12 (1975), pp. 617-629.
[19] C. C. Paige and M. A. Saunders, LSQR: An algorithm for sparse linear equations and sparse least squares, ACM Trans. Math. Softw., 8 (1982), pp. 43-71.
[20] C. C. Paige and M. A. Saunders, Algorithm 583; LSQR: Sparse linear equations and least-squares problems, ACM Trans. Math. Softw., 8 (1982), pp. 195-209.
[21] T. Steihaug, The conjugate gradient method and trust regions in large scale optimization, SIAM J. Numer. Anal., 20 (1983), pp. 626-637.
[22] G. W. Stewart, Research, development and LINPACK, in Mathematical Software III, J. R. Rice, ed., Academic Press, New York, 1977, pp. 1-14.
[23] E. Stiefel, Relaxationsmethoden bester Strategie zur Lösung linearer Gleichungssysteme, Comm. Math. Helv., 29 (1955), pp. 157-179.
[24] Y. Sun, The Filter Algorithm for Solving Large-Scale Eigenproblems from Accelerator Simulations, PhD thesis, Stanford University, Stanford, CA, March 2003.
[25] D. Titley-Peloquin, Backward Perturbation Analysis of Least Squares Problems, PhD thesis, School of Computer Science, McGill University.
[26] D. Titley-Peloquin, Convergence of backward error bounds in LSQR and LSMR. Private communication.
[27] H. A. van der Vorst, Iterative Krylov Methods for Large Linear Systems, Cambridge University Press, Cambridge, first ed., 2003.
[28] D. S. Watkins, Fundamentals of Matrix Computations, Pure and Applied Mathematics, Wiley, Hoboken, NJ, third ed., 2010.