Generalized MINRES or Generalized LSQR?


1 Generalized MINRES or Generalized LSQR? Michael Saunders, Systems Optimization Laboratory (SOL) and Institute for Computational and Mathematical Engineering (ICME), Stanford University. New Frontiers in Numerical Analysis and Scientific Computing, on the occasion of Lothar Reichel's 60th birthday and the 20th anniversary of ETNA, Department of Mathematical Sciences, Kent State University. GMINRES or GLSQR? LR60/ETNA20, Apr 19-20

3 Motivation The Golub-Kahan orthogonal bidiagonalization of A ∈ R^{m×n} gives us freedom to choose 1 starting vector b ∈ R^m and solve sparse systems Ax ≈ b (as in LSQR). But orthogonal tridiagonalization gives us freedom to choose 2 starting vectors b ∈ R^m and c ∈ R^n and solve two sparse systems Ax ≈ b and A^T y ≈ c (as in USYMQR = GMINRES). Reichel and Ye (2008) chose c to speed up the computation of x. Golub, Stoll and Wathen (2008) wanted c^T x = b^T y.

4 Abstract Given a general matrix A, we can construct orthogonal matrices U, V that reduce A to tridiagonal form: U^T A V = T. We can also arrange that the first columns of U and V are proportional to given vectors b and c. For square A, an iterative form of this orthogonal tridiagonalization was given by Saunders, Simon, and Yip (SINUM 1988) and used to solve square systems Ax = b and A^T y = c simultaneously. (One of the resulting solvers becomes MINRES when A is symmetric and b = c.) The approach was rediscovered by Reichel and Ye (NLAA 2008) with emphasis on rectangular A and least-squares problems Ax ≈ b. The resulting solver was regarded as a generalization of LSQR (although it doesn't become LSQR in any special case). Careful choice of c was shown to improve convergence. In his last year of life, Gene Golub became interested in GLSQR for estimating c^T x = b^T y without computing x or y (Golub, Stoll, and Wathen (ETNA 2008)). We review the tridiagonalization process and Gene et al.'s insight into its true identity.

5 Orthogonal matrix reductions Direct: V = product of Householder transformations (n × n). Iterative: V_k = ( v_1 v_2 ... v_k ) (n × k). Mostly short-term recurrences.

7 Tridiagonalization of symmetric A
Direct: [1 0; 0 V]^T [0 b^T; b A] [1 0; 0 V] = [0 βe_1^T; βe_1 T], with V orthogonal and T tridiagonal
Iterative: Lanczos process
( b  A V_k ) = V_{k+1} ( βe_1  T_{k+1,k} )
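
A minimal numerical sketch of the Lanczos process above (illustrative code, not from the slides; the function name is my own, and breakdown is ignored, i.e. every β_j is assumed positive):

```python
# Symmetric Lanczos: builds V_{k+1} and tridiagonal T_{k+1,k}
# satisfying A V_k = V_{k+1} T_{k+1,k}, starting from b = beta_1 v_1.
import numpy as np

def lanczos(A, b, k):
    n = b.size
    V = np.zeros((n, k + 1))
    T = np.zeros((k + 1, k))
    beta1 = np.linalg.norm(b)
    V[:, 0] = b / beta1                      # v_1 = b / ||b||
    for j in range(k):
        w = A @ V[:, j]
        if j > 0:
            w -= T[j, j - 1] * V[:, j - 1]   # subtract beta_j v_{j-1}
        alpha = V[:, j] @ w                  # alpha_j = v_j^T A v_j
        w -= alpha * V[:, j]
        T[j, j] = alpha
        beta = np.linalg.norm(w)
        T[j + 1, j] = beta
        if j + 1 < k:
            T[j, j + 1] = beta               # T is symmetric tridiagonal
        V[:, j + 1] = w / beta
    return V, T, beta1

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
A = M + M.T                                  # symmetric test matrix
b = rng.standard_normal(8)
V, T, beta1 = lanczos(A, b, 5)
```

The returned factors satisfy A V_k = V_{k+1} T_{k+1,k} with orthonormal columns in V_{k+1}.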

9 Bidiagonalization of rectangular A
Direct: U^T ( b  A ) [1 0; 0 V] = ( βe_1  B ), with B lower bidiagonal
Iterative: Golub-Kahan process
( b  A V_k ) = U_{k+1} ( βe_1  B_{k+1,k} )
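
A matching sketch of the Golub-Kahan process (illustrative code with my own variable names; breakdown is ignored):

```python
# Golub-Kahan bidiagonalization: builds U_{k+1}, V_k and lower
# bidiagonal B_{k+1,k} with A V_k = U_{k+1} B_{k+1,k}.
import numpy as np

def golub_kahan(A, b, k):
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))                 # lower bidiagonal
    beta1 = np.linalg.norm(b)
    U[:, 0] = b / beta1                      # beta_1 u_1 = b
    for j in range(k):
        v = A.T @ U[:, j]
        if j > 0:
            v -= B[j, j - 1] * V[:, j - 1]   # alpha_j v_j = A^T u_j - beta_j v_{j-1}
        alpha = np.linalg.norm(v)
        V[:, j] = v / alpha
        B[j, j] = alpha
        u = A @ V[:, j] - alpha * U[:, j]    # beta_{j+1} u_{j+1} = A v_j - alpha_j u_j
        beta = np.linalg.norm(u)
        B[j + 1, j] = beta
        U[:, j + 1] = u / beta
    return U, V, B, beta1

rng = np.random.default_rng(1)
A = rng.standard_normal((9, 6))
b = rng.standard_normal(9)
U, V, B, beta1 = golub_kahan(A, b, 4)
```

Only the single starting vector b is free here, which is the contrast drawn on the motivation slide.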

11 Tridiagonalization of rectangular A
Direct: [1 0; 0 U]^T [0 c^T; b A] [1 0; 0 V] = [0 γe_1^T; βe_1 T], with T tridiagonal
Iterative: S-Simon-Yip (1988), Reichel-Ye (2008)
( b  A V_k ) = U_{k+1} ( βe_1  T_{k+1,k} )
( c  A^T U_k ) = V_{k+1} ( γe_1  T_{k,k+1}^T )
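
Both recurrences above can be sketched together in a few lines (illustrative code; the function name is my own, and breakdown is ignored):

```python
# SSY / Reichel-Ye orthogonal tridiagonalization with two starting
# vectors: ( b  A V_k ) = U_{k+1} ( beta e_1  T_{k+1,k} ) and
# ( c  A^T U_k ) = V_{k+1} ( gamma e_1  T_{k,k+1}^T ).
import numpy as np

def orth_tridiag(A, b, c, k):
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k + 1))
    T = np.zeros((k + 1, k + 1))
    beta1, gamma1 = np.linalg.norm(b), np.linalg.norm(c)
    U[:, 0] = b / beta1                      # beta_1 u_1 = b
    V[:, 0] = c / gamma1                     # gamma_1 v_1 = c
    for j in range(k):
        p = A @ V[:, j]
        q = A.T @ U[:, j]
        if j > 0:
            p -= T[j - 1, j] * U[:, j - 1]   # subtract gamma_j u_{j-1}
            q -= T[j, j - 1] * V[:, j - 1]   # subtract beta_j v_{j-1}
        alpha = U[:, j] @ p                  # alpha_j = u_j^T A v_j
        T[j, j] = alpha
        p -= alpha * U[:, j]
        q -= alpha * V[:, j]
        T[j + 1, j] = np.linalg.norm(p)      # beta_{j+1}
        T[j, j + 1] = np.linalg.norm(q)      # gamma_{j+1}
        U[:, j + 1] = p / T[j + 1, j]
        V[:, j + 1] = q / T[j, j + 1]
    return U, V, T, beta1, gamma1

rng = np.random.default_rng(2)
A = rng.standard_normal((9, 7))
b = rng.standard_normal(9)
c = rng.standard_normal(7)
U, V, T, beta1, gamma1 = orth_tridiag(A, b, c, 4)
```

Note the symmetry of cost: one product with A and one with A^T per iteration, about twice the work of Lanczos on symmetric A.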

12 MINRES-type solvers based on Lanczos, Arnoldi, Golub-Kahan, orth-tridiag

13 MINRES-type solvers for Ax ≈ b
A            Process        Solver
symmetric    Lanczos        Paige-S 1975 MINRES; Choi-Paige-S 2011 MINRES-QLP
rectangular  Golub-Kahan    Paige-S 1982 LSQR; Fong-S 2011 LSMR
unsymmetric  Arnoldi        Saad-Schultz 1986 GMRES
unsymmetric  orth-tridiag   S-Simon-Yip 1988 USYMQR
rectangular  orth-tridiag   Reichel-Ye 2008 GLSQR

15 MINRES-type solvers for Ax ≈ b
A            Process        Solver
symmetric    Lanczos        Paige-S 1975 MINRES; Choi-Paige-S 2011 MINRES-QLP
rectangular  Golub-Kahan    Paige-S 1982 LSQR; Fong-S 2011 LSMR
square       Arnoldi        Saad-Schultz 1986 GMRES
square       orth-tridiag   S-Simon-Yip 1988 GMINRES
rectangular  orth-tridiag   Reichel-Ye 2008 GLSQR
All these processes produce similar outputs:
Lanczos        ( b  A V_k ) = V_{k+1} ( βe_1  T_{k+1,k} )
Golub-Kahan    ( b  A V_k ) = U_{k+1} ( βe_1  B_{k+1,k} )
orth-tridiag   ( b  A V_k ) = U_{k+1} ( βe_1  T_{k+1,k} )  and  ( c  A^T U_k ) = V_{k+1} ( γe_1  T_{k,k+1}^T )

17 MINRES-type solvers for Ax ≈ b
All methods give ( b  A V_k ) = U_{k+1} ( βe_1  H_k ), so
b − A V_k w_k = U_{k+1} (βe_1 − H_k w_k)
||b − A V_k w_k|| = ||βe_1 − H_k w_k||   (columns of U_{k+1} orthonormal)
x_k = V_k w_k, where we choose w_k from min ||βe_1 − H_k w_k||
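
The common subproblem min ||βe_1 − H_k w|| can be demonstrated with any of the processes; the sketch below uses Arnoldi (so V_k = U_k) and a dense least-squares solve (illustrative code with my own names; real solvers instead update a QR factorization of H_k incrementally):

```python
# Arnoldi process plus the MINRES/GMRES-type subproblem:
# x_k = V_k w_k with w_k = argmin ||beta e_1 - H_k w||.
import numpy as np

def arnoldi(A, b, k):
    n = b.size
    U = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))                 # upper Hessenberg
    beta = np.linalg.norm(b)
    U[:, 0] = b / beta
    for j in range(k):
        w = A @ U[:, j]
        for i in range(j + 1):               # full orthogonalization
            H[i, j] = U[:, i] @ w
            w -= H[i, j] * U[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        U[:, j + 1] = w / H[j + 1, j]
    return U, H, beta

def minres_type_step(A, b, k):
    U, H, beta = arnoldi(A, b, k)
    rhs = np.zeros(k + 1)
    rhs[0] = beta                            # beta e_1
    w, *_ = np.linalg.lstsq(H, rhs, rcond=None)
    x = U[:, :k] @ w                         # x_k = V_k w_k (here V_k = U_k)
    return x, np.linalg.norm(rhs - H @ w)    # subproblem residual = ||b - A x_k||

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6)) + 4 * np.eye(6)
b = rng.standard_normal(6)
x3, r3 = minres_type_step(A, b, 3)
x5, r5 = minres_type_step(A, b, 5)
```

Because the subspaces are nested, the minimized residual norm is nonincreasing in k.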

19 Symmetric methods for unsymmetric Ax ≈ b
Lanczos on [I A; A^T −δ²I] [r; x] = [b; 0] gives Golub-Kahan.
A CG-type subproblem gives LSQR; a MINRES-type subproblem gives LSMR.
Lanczos on [0 A; A^T 0] [y; x] = [b; c] (square A) is not equivalent to orthogonal tridiagonalization (but seems worth a try!)
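
A small check of the second block system above (illustrative; a dense solve stands in for the symmetric Krylov solver, such as MINRES, that one would actually apply to the augmented matrix):

```python
# The symmetric augmented system [0 A; A^T 0][y; x] = [b; c]
# packages A x = b and A^T y = c into one symmetric solve.
import numpy as np

rng = np.random.default_rng(4)
n = 5
A = rng.standard_normal((n, n)) + 4 * np.eye(n)   # nonsingular square A
b = rng.standard_normal(n)
c = rng.standard_normal(n)

K = np.block([[np.zeros((n, n)), A],
              [A.T, np.zeros((n, n))]])           # symmetric by construction
z = np.linalg.solve(K, np.concatenate([b, c]))
y, x = z[:n], z[n:]                               # first block row: A x = b
                                                  # second block row: A^T y = c
```

The block structure is what makes a symmetric iterative method applicable to an unsymmetric pair of systems.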

20 Tridiagonalization of general A using orthogonal matrices. Some history

24 Orthogonal tridiagonalization
1988 Saunders, Simon, and Yip, SINUM 25: Two CG-type methods for unsymmetric linear equations. Focus on square A. USYMLQ and USYMQR (GSYMMLQ and GMINRES)
2008 Reichel and Ye: A generalized LSQR algorithm. Focus on rectangular A. GLSQR
2008 Golub, Stoll, and Wathen: Approximation of the scattering amplitude. Focus on Ax = b, A^T y = c and estimation of c^T x = b^T y without x, y
2012 Patrick Kürschner, Max Planck Institute, Magdeburg: Eigenvalues. Need to solve Ax = b and A^T y = c

28 Original motivation (S 1981) CG, SYMMLQ, MINRES work well for symmetric Ax = b. Tridiagonalization of unsymmetric A is no more than twice the work and storage per iteration. If A is symmetric, we get Lanczos and MINRES etc. If A is nearly symmetric, total iterations should be not much more (??)

31 Elizabeth Yip's SIAM conference abstract (1982): CG method for unsymmetric matrices applied to PDE problems. "We present a CG-type method to solve Ax = b, where A is an arbitrary nonsingular unsymmetric matrix. The algorithm is equivalent to an orthogonal tridiagonalization of A. Each iteration takes more work than the orthogonal bidiagonalization proposed by Golub-Kahan, Paige-Saunders for sparse least squares problems (LSQR). We apply a preconditioned version (Fast Poisson) to the difference equation of unsteady transonic flow with small disturbances." (Compared with ORTHOMIN(5))

32 Numerical results with orthogonal tridiagonalization

35 Numerical results (SSY 1988)
A = blocktridiag(I, B, I),  B = tridiag(1−δ, 4, 1+δ)
Megaflops to reach ||r|| ≤ 10^−6 ||b||: δ vs ORTHOMIN(5), LSQR, GMINRES
Bottom line: ORTHOMIN sometimes good, can fail. LSQR always better than GMINRES
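
The test matrix can be sketched as follows (illustrative; the signs of the off-diagonal entries of B are assumptions, since they are hard to recover from the transcript, and the function name is my own):

```python
# Block tridiagonal test matrix: identity off-diagonal blocks and
# B = tridiag(1 - delta, 4, 1 + delta) on the diagonal (signs assumed).
# Unsymmetric for delta != 0, symmetric for delta = 0.
import numpy as np

def ssy_matrix(nb, delta):
    B = (4 * np.eye(nb)
         + (1 - delta) * np.eye(nb, k=-1)    # subdiagonal of B
         + (1 + delta) * np.eye(nb, k=1))    # superdiagonal of B
    I = np.eye(nb)
    A = np.zeros((nb * nb, nb * nb))
    for i in range(nb):
        A[i*nb:(i+1)*nb, i*nb:(i+1)*nb] = B
        if i + 1 < nb:
            A[i*nb:(i+1)*nb, (i+1)*nb:(i+2)*nb] = I
            A[(i+1)*nb:(i+2)*nb, i*nb:(i+1)*nb] = I
    return A

A0 = ssy_matrix(4, 0.0)   # symmetric case
A1 = ssy_matrix(4, 0.1)   # unsymmetric case
```

The parameter δ controls how far A is from symmetric, matching the "nearly symmetric" motivation above.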

38 Numerical results (Reichel and Ye 2008) Focused on rectangular A and least-squares. (Forgot about SSY 1988 and USYMQR; hence GLSQR.) Three numerical examples (all square!). Remember x_1 ∝ v_1 ∝ c (since x_k = V_k w_k and c = γv_1). Focused on choice of c, stopping early, looking at x_k = ( x_k1 x_k2 ... x_kn )

41 Numerical results (Reichel and Ye 2008) Example 1 (Fredholm equation)
∫_0^π κ(s,t) x(t) dt = b(s),  0 ≤ s ≤ π/2
Discretize to get A x̂ = b̂, n = 400. Solve Ax ≈ b, ||b − b̂|| = 10^−3 ||b̂||
Among {x_k^LSQR}, x_3^LSQR is closest to x̂
GLSQR: choose c = ( )^T because true x ≈ 100c
[Plot: true x with x_3^LSQR and x_1^GLSQR]

43 Numerical results (Reichel and Ye 2008) Example 2 (Star cluster)
470 stars, x̂ = pixels, b̂ = A x̂, n = . Solve Ax ≈ b, ||b − b̂|| = 10^−2 ||b̂||
Choose c = b (because b ≈ x). Compare error in x_k^LSQR and x_k^GLSQR
[Plot: ||x_k − x̂|| / ||x̂|| ≈ 0.9 for 40 iterations, LSQR vs GLSQR]

45 Numerical results (Reichel and Ye 2008) Example 3 (Fredholm equation)
∫_0^1 k(s,t) x(t) dt = exp(s) + (1 − e)s − 1,  0 ≤ s ≤ 1
k(s,t) = s(t − 1) if s < t;  t(s − 1) if s ≥ t
Discretize to get A x̂ = b̂, n = 1024. Solve Ax ≈ b, ||b − b̂|| = 10^−3 ||b̂||
x_22^LSQR has smallest error, but oscillates around x̂
Discretize coarsely to get A_c x_c = b_c, n = 4. Prolongate x_c to get x_prl ∈ R^1024 and starting vector c = x_prl
x_4^GLSQR is very close to x̂

46 Conclusions

49 Subspaces
Unsymmetric Lanczos generates two Krylov subspaces:
U_k = span{b, Ab, A²b, ..., A^{k−1}b}
V_k = span{c, A^T c, (A^T)²c, ..., (A^T)^{k−1}c}
Orthogonal tridiagonalization generates
U_2k = span{b, AA^T b, ..., (AA^T)^{k−1}b, Ac, (AA^T)Ac, ...}
V_2k = span{c, A^T A c, ..., (A^T A)^{k−1}c, A^T b, (A^T A)A^T b, ...}
Reichel and Ye 2008: Richer subspace for ill-posed Ax ≈ b (can choose c ≈ x). A can be rectangular. Check for early termination of the {u_k} or {v_k} sequence
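
A quick numerical illustration of why the orth-tridiag subspace is richer: its generators mix powers of AA^T applied to both b and Ac, so the spanned dimension grows roughly twice as fast as a single Krylov space (illustrative sketch with generic random data; the helper name is my own):

```python
# Compare the dimension of span{b, Ab, A^2 b, A^3 b} with the span of
# the orth-tridiag generators {b, AA^T b, ..., Ac, (AA^T)Ac, ...}.
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((8, 8))
b = rng.standard_normal(8)
c = rng.standard_normal(8)

def powers(seed, op, steps):
    # columns seed, op(seed), op(op(seed)), ..., normalized for conditioning
    cols, w = [], seed.copy()
    for _ in range(steps):
        cols.append(w / np.linalg.norm(w))
        w = op(w)
    return cols

# Plain Krylov space for b: one new direction per multiply by A.
Kb = np.column_stack(powers(b, lambda w: A @ w, 4))
# Orth-tridiag generators for U_2k: powers of AA^T applied to b and to Ac.
Ub = np.column_stack(powers(b, lambda w: A @ (A.T @ w), 4)
                     + powers(A @ c, lambda w: A @ (A.T @ w), 4))
```

For generic data the four plain Krylov columns span a 4-dimensional space, while the eight mixed generators already span all of R^8.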

53 Functionals c^T x = b^T y
Lu and Darmofal (SISC 2003) use unsymmetric Lanczos with QMR to solve Ax = b and A^T y = c simultaneously and to estimate c^T x = b^T y at a superconvergent rate:
|c^T x_k − c^T x| = |b^T y_k − b^T y| ≤ ||b − A x_k|| ||c − A^T y_k|| / σ_min(A)
Golub, Stoll and Wathen (2008) use orthogonal tridiagonalization with GLSQR to do likewise.
Matrices, moments, and quadrature: Golub, Minerbo and Saylor 1998, Nine ways to compute the scattering amplitude (1): Estimating c^T x iteratively
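
The identity underlying the functional is c^T x = c^T A^{-1} b = b^T A^{-T} c = b^T y. A direct-solve sketch makes it concrete (illustrative only; the iterative methods above estimate the same quantity without ever forming x or y):

```python
# Verify c^T x = b^T y for the pair A x = b, A^T y = c:
# both equal the bilinear form c^T A^{-1} b (the "scattering amplitude").
import numpy as np

rng = np.random.default_rng(6)
n = 6
A = rng.standard_normal((n, n)) + 4 * np.eye(n)   # nonsingular square A
b = rng.standard_normal(n)
c = rng.standard_normal(n)

x = np.linalg.solve(A, b)        # A x = b
y = np.linalg.solve(A.T, c)      # A^T y = c
```

In exact arithmetic c @ x and b @ y coincide, which is why a method that solves the two systems simultaneously can track the functional directly.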

56 Block Lanczos
Orthogonal tridiagonalization is equivalent to block Lanczos on A^T A with starting block ( c  A^T b ) (Parlett 1987), and to block Lanczos on [0 A; A^T 0] with starting block [b 0; 0 c] (Golub, Stoll, and Wathen 2008).
"There are two ways of spreading light. To be the candle or the mirror that reflects it." Edith Wharton

59 References
M. A. Saunders, H. D. Simon, and E. L. Yip (1988). Two conjugate-gradient-type methods for unsymmetric linear equations, SIAM J. Numer. Anal. 25:4.
L. Reichel and Q. Ye (2008). A generalized LSQR algorithm, Numer. Linear Algebra Appl. 15.
G. H. Golub, M. Stoll, and A. Wathen (2008). Approximation of the scattering amplitude and linear systems, ETNA 31.
Special thanks to Chris Paige, Martin Stoll, and Sou-Cheng Choi.
Happy birthday Lothar! Thanks for noticing A can be rectangular!

60 Orthogonal reductions / MINRES solvers / Orth-tridiag / Results / Conclusions. Gene is with us every day


More information

A stable variant of Simpler GMRES and GCR

A stable variant of Simpler GMRES and GCR A stable variant of Simpler GMRES and GCR Miroslav Rozložník joint work with Pavel Jiránek and Martin H. Gutknecht Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic miro@cs.cas.cz,

More information

Topics. The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems

Topics. The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems Topics The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems What about non-spd systems? Methods requiring small history Methods requiring large history Summary of solvers 1 / 52 Conjugate

More information

Moments, Model Reduction and Nonlinearity in Solving Linear Algebraic Problems

Moments, Model Reduction and Nonlinearity in Solving Linear Algebraic Problems Moments, Model Reduction and Nonlinearity in Solving Linear Algebraic Problems Zdeněk Strakoš Charles University, Prague http://www.karlin.mff.cuni.cz/ strakos 16th ILAS Meeting, Pisa, June 2010. Thanks

More information

ON EFFICIENT NUMERICAL APPROXIMATION OF THE BILINEAR FORM c A 1 b

ON EFFICIENT NUMERICAL APPROXIMATION OF THE BILINEAR FORM c A 1 b ON EFFICIENT NUMERICAL APPROXIMATION OF THE ILINEAR FORM c A 1 b ZDENĚK STRAKOŠ AND PETR TICHÝ Abstract. Let A C N N be a nonsingular complex matrix and b and c complex vectors of length N. The goal of

More information

Conjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294)

Conjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294) Conjugate gradient method Descent method Hestenes, Stiefel 1952 For A N N SPD In exact arithmetic, solves in N steps In real arithmetic No guaranteed stopping Often converges in many fewer than N steps

More information

RESIDUAL SMOOTHING AND PEAK/PLATEAU BEHAVIOR IN KRYLOV SUBSPACE METHODS

RESIDUAL SMOOTHING AND PEAK/PLATEAU BEHAVIOR IN KRYLOV SUBSPACE METHODS RESIDUAL SMOOTHING AND PEAK/PLATEAU BEHAVIOR IN KRYLOV SUBSPACE METHODS HOMER F. WALKER Abstract. Recent results on residual smoothing are reviewed, and it is observed that certain of these are equivalent

More information

Some minimization problems

Some minimization problems Week 13: Wednesday, Nov 14 Some minimization problems Last time, we sketched the following two-step strategy for approximating the solution to linear systems via Krylov subspaces: 1. Build a sequence of

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf-Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2018/19 Part 4: Iterative Methods PD

More information

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection Eigenvalue Problems Last Time Social Network Graphs Betweenness Girvan-Newman Algorithm Graph Laplacian Spectral Bisection λ 2, w 2 Today Small deviation into eigenvalue problems Formulation Standard eigenvalue

More information

Contents. Preface... xi. Introduction...

Contents. Preface... xi. Introduction... Contents Preface... xi Introduction... xv Chapter 1. Computer Architectures... 1 1.1. Different types of parallelism... 1 1.1.1. Overlap, concurrency and parallelism... 1 1.1.2. Temporal and spatial parallelism

More information

Contribution of Wo¹niakowski, Strako²,... The conjugate gradient method in nite precision computa

Contribution of Wo¹niakowski, Strako²,... The conjugate gradient method in nite precision computa Contribution of Wo¹niakowski, Strako²,... The conjugate gradient method in nite precision computations ªaw University of Technology Institute of Mathematics and Computer Science Warsaw, October 7, 2006

More information

AMSC 600 /CMSC 760 Advanced Linear Numerical Analysis Fall 2007 Krylov Minimization and Projection (KMP) Dianne P. O Leary c 2006, 2007.

AMSC 600 /CMSC 760 Advanced Linear Numerical Analysis Fall 2007 Krylov Minimization and Projection (KMP) Dianne P. O Leary c 2006, 2007. AMSC 600 /CMSC 760 Advanced Linear Numerical Analysis Fall 2007 Krylov Minimization and Projection (KMP) Dianne P. O Leary c 2006, 2007 This unit: So far: A survey of iterative methods for solving linear

More information

Lecture 9: Krylov Subspace Methods. 2 Derivation of the Conjugate Gradient Algorithm

Lecture 9: Krylov Subspace Methods. 2 Derivation of the Conjugate Gradient Algorithm CS 622 Data-Sparse Matrix Computations September 19, 217 Lecture 9: Krylov Subspace Methods Lecturer: Anil Damle Scribes: David Eriksson, Marc Aurele Gilles, Ariah Klages-Mundt, Sophia Novitzky 1 Introduction

More information

Arnoldi-Tikhonov regularization methods

Arnoldi-Tikhonov regularization methods Arnoldi-Tikhonov regularization methods Bryan Lewis a, Lothar Reichel b,,1 a Rocketcalc LLC, 100 W. Crain Ave., Kent, OH 44240, USA. b Department of Mathematical Sciences, Kent State University, Kent,

More information

PROJECTED GMRES AND ITS VARIANTS

PROJECTED GMRES AND ITS VARIANTS PROJECTED GMRES AND ITS VARIANTS Reinaldo Astudillo Brígida Molina rastudillo@kuaimare.ciens.ucv.ve bmolina@kuaimare.ciens.ucv.ve Centro de Cálculo Científico y Tecnológico (CCCT), Facultad de Ciencias,

More information

Iterative Methods for Sparse Linear Systems

Iterative Methods for Sparse Linear Systems Iterative Methods for Sparse Linear Systems Luca Bergamaschi e-mail: berga@dmsa.unipd.it - http://www.dmsa.unipd.it/ berga Department of Mathematical Methods and Models for Scientific Applications University

More information

Dedicated to Michael Saunders s 70th birthday

Dedicated to Michael Saunders s 70th birthday MINIMAL RESIDUAL METHODS FOR COMPLEX SYMMETRIC, SKEW SYMMETRIC, AND SKEW HERMITIAN SYSTEMS SOU-CHENG T. CHOI Dedicated to Michael Saunders s 70th birthday Abstract. While there is no lac of efficient Krylov

More information

Course Notes: Week 1

Course Notes: Week 1 Course Notes: Week 1 Math 270C: Applied Numerical Linear Algebra 1 Lecture 1: Introduction (3/28/11) We will focus on iterative methods for solving linear systems of equations (and some discussion of eigenvalues

More information

Iterative methods for Linear System

Iterative methods for Linear System Iterative methods for Linear System JASS 2009 Student: Rishi Patil Advisor: Prof. Thomas Huckle Outline Basics: Matrices and their properties Eigenvalues, Condition Number Iterative Methods Direct and

More information

Sensitivity of Gauss-Christoffel quadrature and sensitivity of Jacobi matrices to small changes of spectral data

Sensitivity of Gauss-Christoffel quadrature and sensitivity of Jacobi matrices to small changes of spectral data Sensitivity of Gauss-Christoffel quadrature and sensitivity of Jacobi matrices to small changes of spectral data Zdeněk Strakoš Academy of Sciences and Charles University, Prague http://www.cs.cas.cz/

More information

ITERATIVE REGULARIZATION WITH MINIMUM-RESIDUAL METHODS

ITERATIVE REGULARIZATION WITH MINIMUM-RESIDUAL METHODS BIT Numerical Mathematics 6-3835/3/431-1 $16. 23, Vol. 43, No. 1, pp. 1 18 c Kluwer Academic Publishers ITERATIVE REGULARIZATION WITH MINIMUM-RESIDUAL METHODS T. K. JENSEN and P. C. HANSEN Informatics

More information

The structure of iterative methods for symmetric linear discrete ill-posed problems

The structure of iterative methods for symmetric linear discrete ill-posed problems BIT manuscript No. (will be inserted by the editor) The structure of iterative methods for symmetric linear discrete ill-posed problems L. Dyes F. Marcellán L. Reichel Received: date / Accepted: date Abstract

More information

Sparse matrix methods in quantum chemistry Post-doctorale cursus quantumchemie Han-sur-Lesse, Belgium

Sparse matrix methods in quantum chemistry Post-doctorale cursus quantumchemie Han-sur-Lesse, Belgium Sparse matrix methods in quantum chemistry Post-doctorale cursus quantumchemie Han-sur-Lesse, Belgium Dr. ir. Gerrit C. Groenenboom 14 june - 18 june, 1993 Last revised: May 11, 2005 Contents 1 Introduction

More information

Block Krylov Space Solvers: a Survey

Block Krylov Space Solvers: a Survey Seminar for Applied Mathematics ETH Zurich Nagoya University 8 Dec. 2005 Partly joint work with Thomas Schmelzer, Oxford University Systems with multiple RHSs Given is a nonsingular linear system with

More information

AMS subject classifications. 15A06, 65F10, 65F20, 65F22, 65F25, 65F35, 65F50, 93E24

AMS subject classifications. 15A06, 65F10, 65F20, 65F22, 65F25, 65F35, 65F50, 93E24 : AN IERAIVE ALGORIHM FOR SPARSE LEAS-SQUARES PROBLEMS DAVID CHIN-LUNG FONG AND MICHAEL SAUNDERS Abstract. An iterative method is presented for solving linear systems Ax = b and leastsquares problems min

More information

Combination Preconditioning of saddle-point systems for positive definiteness

Combination Preconditioning of saddle-point systems for positive definiteness Combination Preconditioning of saddle-point systems for positive definiteness Andy Wathen Oxford University, UK joint work with Jen Pestana Eindhoven, 2012 p.1/30 compute iterates with residuals Krylov

More information

EECS 275 Matrix Computation

EECS 275 Matrix Computation EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA 95344 http://faculty.ucmerced.edu/mhyang Lecture 20 1 / 20 Overview

More information

Krylov Space Methods. Nonstationary sounds good. Radu Trîmbiţaş ( Babeş-Bolyai University) Krylov Space Methods 1 / 17

Krylov Space Methods. Nonstationary sounds good. Radu Trîmbiţaş ( Babeş-Bolyai University) Krylov Space Methods 1 / 17 Krylov Space Methods Nonstationary sounds good Radu Trîmbiţaş Babeş-Bolyai University Radu Trîmbiţaş ( Babeş-Bolyai University) Krylov Space Methods 1 / 17 Introduction These methods are used both to solve

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 3: Iterative Methods PD

More information

The solution of the discretized incompressible Navier-Stokes equations with iterative methods

The solution of the discretized incompressible Navier-Stokes equations with iterative methods The solution of the discretized incompressible Navier-Stokes equations with iterative methods Report 93-54 C. Vuik Technische Universiteit Delft Delft University of Technology Faculteit der Technische

More information

KRYLOV SUBSPACE ITERATION

KRYLOV SUBSPACE ITERATION KRYLOV SUBSPACE ITERATION Presented by: Nab Raj Roshyara Master and Ph.D. Student Supervisors: Prof. Dr. Peter Benner, Department of Mathematics, TU Chemnitz and Dipl.-Geophys. Thomas Günther 1. Februar

More information

A DISSERTATION. Extensions of the Conjugate Residual Method. by Tomohiro Sogabe. Presented to

A DISSERTATION. Extensions of the Conjugate Residual Method. by Tomohiro Sogabe. Presented to A DISSERTATION Extensions of the Conjugate Residual Method ( ) by Tomohiro Sogabe Presented to Department of Applied Physics, The University of Tokyo Contents 1 Introduction 1 2 Krylov subspace methods

More information

Krylov Subspace Methods that Are Based on the Minimization of the Residual

Krylov Subspace Methods that Are Based on the Minimization of the Residual Chapter 5 Krylov Subspace Methods that Are Based on the Minimization of the Residual Remark 51 Goal he goal of these methods consists in determining x k x 0 +K k r 0,A such that the corresponding Euclidean

More information

Krylov Space Solvers

Krylov Space Solvers Seminar for Applied Mathematics ETH Zurich International Symposium on Frontiers of Computational Science Nagoya, 12/13 Dec. 2005 Sparse Matrices Large sparse linear systems of equations or large sparse

More information

Recent computational developments in Krylov Subspace Methods for linear systems. Valeria Simoncini and Daniel B. Szyld

Recent computational developments in Krylov Subspace Methods for linear systems. Valeria Simoncini and Daniel B. Szyld Recent computational developments in Krylov Subspace Methods for linear systems Valeria Simoncini and Daniel B. Szyld A later version appeared in : Numerical Linear Algebra w/appl., 2007, v. 14(1), pp.1-59.

More information

Numerical behavior of inexact linear solvers

Numerical behavior of inexact linear solvers Numerical behavior of inexact linear solvers Miro Rozložník joint results with Zhong-zhi Bai and Pavel Jiránek Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic The fourth

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 19: More on Arnoldi Iteration; Lanczos Iteration Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 17 Outline 1

More information

On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes

On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes Jurjen Duintjer Tebbens Institute of Computer Science Academy of Sciences of the Czech Republic joint work with Gérard

More information

A MODIFIED TSVD METHOD FOR DISCRETE ILL-POSED PROBLEMS

A MODIFIED TSVD METHOD FOR DISCRETE ILL-POSED PROBLEMS A MODIFIED TSVD METHOD FOR DISCRETE ILL-POSED PROBLEMS SILVIA NOSCHESE AND LOTHAR REICHEL Abstract. Truncated singular value decomposition (TSVD) is a popular method for solving linear discrete ill-posed

More information

COMPUTING PROJECTIONS WITH LSQR*

COMPUTING PROJECTIONS WITH LSQR* BIT 37:1(1997), 96-104. COMPUTING PROJECTIONS WITH LSQR* MICHAEL A. SAUNDERS t Abstract. Systems Optimization Laboratory, Department of EES ~4 OR Stanford University, Stanford, CA 94305-4023, USA. email:

More information

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A.

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A. AMSC/CMSC 661 Scientific Computing II Spring 2005 Solution of Sparse Linear Systems Part 2: Iterative methods Dianne P. O Leary c 2005 Solving Sparse Linear Systems: Iterative methods The plan: Iterative

More information

Dedicated to Lothar Reichel on the occasion of his 60th birthday

Dedicated to Lothar Reichel on the occasion of his 60th birthday REVISITING (k, l)-step METHODS MARTIN H. GUTKNECHT Dedicated to Lothar Reichel on the occasion of his 60th birthday Abstract. In the mid-1980s B.N. Parsons [29] and the author [10] independently had the

More information

Charles University Faculty of Mathematics and Physics DOCTORAL THESIS. Krylov subspace approximations in linear algebraic problems

Charles University Faculty of Mathematics and Physics DOCTORAL THESIS. Krylov subspace approximations in linear algebraic problems Charles University Faculty of Mathematics and Physics DOCTORAL THESIS Iveta Hnětynková Krylov subspace approximations in linear algebraic problems Department of Numerical Mathematics Supervisor: Doc. RNDr.

More information

VECTOR EXTRAPOLATION APPLIED TO TRUNCATED SINGULAR VALUE DECOMPOSITION AND TRUNCATED ITERATION

VECTOR EXTRAPOLATION APPLIED TO TRUNCATED SINGULAR VALUE DECOMPOSITION AND TRUNCATED ITERATION VECTOR EXTRAPOLATION APPLIED TO TRUNCATED SINGULAR VALUE DECOMPOSITION AND TRUNCATED ITERATION A. BOUHAMIDI, K. JBILOU, L. REICHEL, H. SADOK, AND Z. WANG Abstract. This paper is concerned with the computation

More information

Solving Sparse Linear Systems: Iterative methods

Solving Sparse Linear Systems: Iterative methods Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccs Lecture Notes for Unit VII Sparse Matrix Computations Part 2: Iterative Methods Dianne P. O Leary c 2008,2010

More information

Solving Sparse Linear Systems: Iterative methods

Solving Sparse Linear Systems: Iterative methods Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccswebpage Lecture Notes for Unit VII Sparse Matrix Computations Part 2: Iterative Methods Dianne P. O Leary

More information

A QR-decomposition of block tridiagonal matrices generated by the block Lanczos process

A QR-decomposition of block tridiagonal matrices generated by the block Lanczos process 1 A QR-decomposition of block tridiagonal matrices generated by the block Lanczos process Thomas Schmelzer Martin H Gutknecht Oxford University Computing Laboratory Seminar for Applied Mathematics, ETH

More information

Matrices, Moments and Quadrature, cont d

Matrices, Moments and Quadrature, cont d Jim Lambers CME 335 Spring Quarter 2010-11 Lecture 4 Notes Matrices, Moments and Quadrature, cont d Estimation of the Regularization Parameter Consider the least squares problem of finding x such that

More information

c 2005 Society for Industrial and Applied Mathematics

c 2005 Society for Industrial and Applied Mathematics SIAM J. MATRIX ANAL. APPL. Vol. 26, No. 4, pp. 1001 1021 c 2005 Society for Industrial and Applied Mathematics BREAKDOWN-FREE GMRES FOR SINGULAR SYSTEMS LOTHAR REICHEL AND QIANG YE Abstract. GMRES is a

More information

Greedy Tikhonov regularization for large linear ill-posed problems

Greedy Tikhonov regularization for large linear ill-posed problems International Journal of Computer Mathematics Vol. 00, No. 00, Month 200x, 1 20 Greedy Tikhonov regularization for large linear ill-posed problems L. Reichel, H. Sadok, and A. Shyshkov (Received 00 Month

More information

On the loss of orthogonality in the Gram-Schmidt orthogonalization process

On the loss of orthogonality in the Gram-Schmidt orthogonalization process CERFACS Technical Report No. TR/PA/03/25 Luc Giraud Julien Langou Miroslav Rozložník On the loss of orthogonality in the Gram-Schmidt orthogonalization process Abstract. In this paper we study numerical

More information

Iterative Methods for Linear Systems of Equations

Iterative Methods for Linear Systems of Equations Iterative Methods for Linear Systems of Equations Projection methods (3) ITMAN PhD-course DTU 20-10-08 till 24-10-08 Martin van Gijzen 1 Delft University of Technology Overview day 4 Bi-Lanczos method

More information

Fast Iterative Solution of Saddle Point Problems

Fast Iterative Solution of Saddle Point Problems Michele Benzi Department of Mathematics and Computer Science Emory University Atlanta, GA Acknowledgments NSF (Computational Mathematics) Maxim Olshanskii (Mech-Math, Moscow State U.) Zhen Wang (PhD student,

More information

4.6 Iterative Solvers for Linear Systems

4.6 Iterative Solvers for Linear Systems 4.6 Iterative Solvers for Linear Systems Why use iterative methods? Virtually all direct methods for solving Ax = b require O(n 3 ) floating point operations. In practical applications the matrix A often

More information

Updating the QR decomposition of block tridiagonal and block Hessenberg matrices generated by block Krylov space methods

Updating the QR decomposition of block tridiagonal and block Hessenberg matrices generated by block Krylov space methods Updating the QR decomposition of block tridiagonal and block Hessenberg matrices generated by block Krylov space methods Martin H. Gutknecht Seminar for Applied Mathematics, ETH Zurich, ETH-Zentrum HG,

More information

AN ITERATIVE METHOD WITH ERROR ESTIMATORS

AN ITERATIVE METHOD WITH ERROR ESTIMATORS AN ITERATIVE METHOD WITH ERROR ESTIMATORS D. CALVETTI, S. MORIGI, L. REICHEL, AND F. SGALLARI Abstract. Iterative methods for the solution of linear systems of equations produce a sequence of approximate

More information

Efficient Solvers for the Navier Stokes Equations in Rotation Form

Efficient Solvers for the Navier Stokes Equations in Rotation Form Efficient Solvers for the Navier Stokes Equations in Rotation Form Computer Research Institute Seminar Purdue University March 4, 2005 Michele Benzi Emory University Atlanta, GA Thanks to: NSF (MPS/Computational

More information

Bindel, Spring 2016 Numerical Analysis (CS 4220) Notes for

Bindel, Spring 2016 Numerical Analysis (CS 4220) Notes for Notes for 2016-03-11 Woah, We re Halfway There Last time, we showed that the QR iteration maps upper Hessenberg matrices to upper Hessenberg matrices, and this fact allows us to do one QR sweep in O(n

More information