CG and MINRES: An empirical comparison


1 School of Engineering. CG and MINRES: An empirical comparison. Prequel to LSQR and LSMR: Two least-squares solvers. David Fong and Michael Saunders, Institute for Computational and Mathematical Engineering, Stanford University. 5th International Conference on High Performance Scientific Computing, March 5-9, 2012, Hanoi, Vietnam.

2 2/43 Abstract. For iterative solution of symmetric systems Ax = b, the conjugate gradient method (CG) is commonly used when A is positive definite, while the minimal residual method (MINRES) is typically reserved for indefinite systems. We investigate the sequence of solutions generated by each method and suggest that even if A is positive definite, MINRES may be preferable to CG if iterations are to be terminated early. The classic symmetric positive-definite system comes from the full-rank least-squares (LS) problem min ||Ax - b||. Specialization of CG and MINRES to the associated normal equation A^T A x = A^T b leads to LSQR and LSMR respectively. We include numerical comparisons of these two LS solvers because they motivated this retrospective study of CG versus MINRES.

3 3/43 Outline: 1. CG and MINRES (the Lanczos process, properties, backward errors); 2. LSQR and LSMR; 3. LSMR derivation (Golub-Kahan bidiagonalization, properties); 4. LSMR experiments (backward errors); 5. Summary.

4 4/43 Part I: CG and MINRES. Iterative algorithms for Ax = b, A = A^T, based on the Lanczos process. Krylov-subspace methods: x_k = V_k y_k.

5-7 5/43 Lanczos process (summary)

β_1 v_1 = b,   A V_k = V_{k+1} H_k,   V_k = ( v_1  v_2  ...  v_k )

T_k is symmetric tridiagonal with diagonal α_1, ..., α_k and off-diagonals β_2, ..., β_k;   H_k = [ T_k ; β_{k+1} e_k^T ]

r_k = b - A x_k = β_1 v_1 - A V_k y_k = V_{k+1} (β_1 e_1 - H_k y_k),   Aim: β_1 e_1 ≈ H_k y_k

Two subproblems:
  CG:      T_k y_k = β_1 e_1,              x_k = V_k y_k
  MINRES:  min || H_k y_k - β_1 e_1 ||,    x_k = V_k y_k
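To make the two subproblems concrete, here is a minimal MATLAB sketch (not from the talk; lanczos_demo is an illustrative name) that runs k Lanczos steps with V_k stored explicitly and solves both small subproblems directly. Production CG and MINRES use short recurrences and never store V_k or form T_k.

    % Minimal Lanczos sketch (assumes A symmetric, k >= 2, no breakdown).
    function [x_cg, x_mr] = lanczos_demo(A, b, k)
      n = length(b);
      V = zeros(n, k+1);  alpha = zeros(k, 1);  beta = zeros(k+1, 1);
      beta(1) = norm(b);  V(:,1) = b / beta(1);
      for j = 1:k
        w = A * V(:,j);
        if j > 1, w = w - beta(j) * V(:,j-1); end
        alpha(j) = V(:,j)' * w;
        w = w - alpha(j) * V(:,j);
        beta(j+1) = norm(w);
        V(:,j+1) = w / beta(j+1);
      end
      T = diag(alpha) + diag(beta(2:k), 1) + diag(beta(2:k), -1);   % T_k
      H = [T; [zeros(1, k-1), beta(k+1)]];                          % H_k
      rhs = [beta(1); zeros(k-1, 1)];                                % beta_1 e_1
      y_cg = T \ rhs;               % CG subproblem:     T_k y = beta_1 e_1
      y_mr = H \ [rhs; 0];          % MINRES subproblem: min ||beta_1 e_1 - H_k y||
      x_cg = V(:,1:k) * y_cg;
      x_mr = V(:,1:k) * y_mr;
    end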

8-11 6/43 Common practice

Ax = b,  A = A^T:
  A positive definite  ->  use CG
  A indefinite         ->  use MINRES

Experiment: CG vs MINRES on A ≻ 0

Hestenes and Stiefel (1952) proposed both CG and CR for A ≻ 0 and proved many properties. CR ≡ MINRES when A ≻ 0; both minimize ||r_k|| = ||b - A x_k|| over the Krylov subspace.
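A small way to get the flavour of this experiment with MATLAB's built-in solvers (the SPD test matrix below is a stand-in, not one of the talk's problems; the built-ins expose only ||r_k||, whereas the talk plots ||r_k||/||x_k||):

    % CG vs MINRES on the same SPD system; resvec holds ||r_k|| per iteration.
    n = 400;
    A = gallery('poisson', 20);            % 400 x 400 SPD 2-D Laplacian
    b = ones(n, 1);
    tol = 1e-10;  maxit = 300;
    [~, ~, ~, ~, rv_cg] = pcg(A, b, tol, maxit);
    [~, ~, ~, ~, rv_mr] = minres(A, b, tol, maxit);
    semilogy(0:numel(rv_cg)-1, rv_cg, '-', 0:numel(rv_mr)-1, rv_mr, '--');
    legend('CG', 'MINRES');  xlabel('iteration k');  ylabel('||r_k||');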

12-13 7/43 Theoretical properties for Ax = b, A ≻ 0

Monotonic for both CG and CR (MINRES):
  ||x* - x_k||       CG: HS 1952          CR: HS 1952
  ||x* - x_k||_A     CG: HS 1952          CR: HS 1952
  ||x_k||            CG: Steihaug 1983    CR: Fong 2012

Monotonic for CR (MINRES) only:
  ||r_k||              HS 1952
  ||r_k|| / ||x_k||    Fong 2012

14-16 8/43 Backward error for square systems Ax = b

An approximate solution x_k is acceptable iff there exist E, f such that
  (A + E) x_k = b + f,   ||E|| <= α ||A||,   ||f|| <= β ||b||

Smallest perturbations E, f (Titley-Peloquin 2010), with ψ := α ||A|| ||x_k|| + β ||b||:
  E = (α ||A|| / ψ) r_k x_k^T / ||x_k||,     f = -(β ||b|| / ψ) r_k
  ||E|| / ||A|| = α ||r_k|| / ψ,             ||f|| / ||b|| = β ||r_k|| / ψ

Stopping rule:  ||r_k|| <= ψ = α ||A|| ||x_k|| + β ||b||
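A quick numerical check of the perturbations as reconstructed above; the matrix and weights are arbitrary stand-ins:

    % Verify (A+E)x_k = b + f and ||E||/||A|| = alpha*||r_k||/psi.
    n = 50;  rng(0);
    A = randn(n);  A = A'*A + n*eye(n);     % some SPD matrix
    b = randn(n, 1);
    xk = A \ b + 1e-3*randn(n, 1);          % a deliberately inexact solution
    rk = b - A*xk;
    alpha = 1;  beta = 1;                   % weights in the backward-error model
    psi = alpha*norm(A)*norm(xk) + beta*norm(b);
    E = (alpha*norm(A)/psi) * (rk*xk') / norm(xk);
    f = -(beta*norm(b)/psi) * rk;
    fprintf('||(A+E)x_k - (b+f)|| = %.2e\n', norm((A+E)*xk - (b+f)));
    fprintf('||E||/||A|| = %.3e,  alpha*||r_k||/psi = %.3e\n', norm(E)/norm(A), alpha*norm(rk)/psi);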

17 9/43 Backward error for square systems, β = 0

  E^(k) = r_k x_k^T / ||x_k||^2,   (A + E^(k)) x_k = b,   ||E^(k)|| = ||r_k|| / ||x_k||

Data: Tim Davis's sparse matrix collection; real, symmetric positive-definite examples that include b. Plot log10 ||E^(k)|| for CG and MINRES.

18 10/43 Backward error of CG vs MINRES on A ≻ 0

[Four plots of log(||r||/||x||) against iteration count for CG and MINRES on SPD test matrices: Simon_raefsky4 (19779 x 19779), Cannizzo_sts4098 (4098 x 4098, nnz 72356), Schenk_AFE_af_shell8 (504855 x 504855), BenElechi_BenElechi1 (245874 x 245874).]

19 11/43 Part II: LSQR and LSMR

LSQR ≡ CG on A^T A x = A^T b
LSMR ≡ MINRES on A^T A x = A^T b
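The identification can be checked numerically: from x_0 = 0, LSQR produces (in exact arithmetic) the same iterates as CG applied to the normal equations. A minimal sketch with MATLAB built-ins; LSQR itself never forms A^T A:

    rng(1);  m = 200;  n = 80;
    A = randn(m, n);  b = randn(m, 1);
    tol = 1e-10;  maxit = n;
    [x_lsqr, ~] = lsqr(A, b, tol, maxit);
    [x_cgne, ~] = pcg(A'*A, A'*b, tol, maxit);   % CG on the normal equations
    fprintf('||x_lsqr - x_cgne|| = %.2e\n', norm(x_lsqr - x_cgne));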

20-24 12/43 What problems do LSQR and LSMR solve?

  solve Ax = b
  min ||Ax - b||_2
  min ||x||  such that  Ax = b
  min || [A; λI] x - [b; 0] ||_2

Properties:
  A is rectangular (m x n) and often sparse
  A can be an operator (which allows preconditioning)
  One Av and one A^T u, plus O(m + n) operations, per iteration
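To illustrate that A can be an operator, here is a sketch that poses the damped problem min ||[A; λI]x - [b; 0]||_2 through a function handle passed to MATLAB's built-in lsqr (aug_op is an illustrative helper name; the authors' LSQR/LSMR codes accept a damping parameter and operators directly):

    rng(2);  m = 300;  n = 120;  lambda = 1e-2;
    A = sprandn(m, n, 0.05);  b = randn(m, 1);
    afun = @(x, flag) aug_op(x, flag, A, lambda, m, n);
    [x, ~] = lsqr(afun, [b; zeros(n, 1)], 1e-10, 200);

    function y = aug_op(x, flag, A, lambda, m, n)
      if strcmp(flag, 'notransp')          % y = [A; lambda*I] * x
        y = [A * x; lambda * x];
      else                                 % y = [A; lambda*I]' * x
        y = A' * x(1:m) + lambda * x(m+1:m+n);
      end
    end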

25 13/43 Why invent another algorithm?

26 14/43 Reason one: CG vs MINRES

27 15/43 Reason two: monotone convergence of the residuals ||r_k|| and ||A^T r_k||

28-30 16/43 min ||Ax - b||: measures of convergence

r_k = b - A x_k;   ||r_k|| → ||r̂||   and   ||A^T r_k|| → 0

[Plots for lp_fit1p (1677 x 627, nnz 9868): ||r|| and log||A^T r|| against iteration count for LSQR and LSMR.]
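The two measures can be tracked for LSQR with a simple (if wasteful) re-run loop, since the iterates from x_0 = 0 are deterministic; a sketch on a random problem rather than lp_fit1p:

    rng(7);  m = 300;  n = 100;
    A = randn(m, n);  b = randn(m, 1);
    K = 40;  nr = zeros(K, 1);  nAr = zeros(K, 1);
    for k = 1:K
      [xk, ~] = lsqr(A, b, 1e-12, k);      % k-th LSQR iterate (from x0 = 0)
      rk = b - A*xk;
      nr(k) = norm(rk);  nAr(k) = norm(A'*rk);
    end
    semilogy(1:K, nr, '-', 1:K, nAr, '--');
    legend('||r_k||', '||A^T r_k||');  xlabel('iteration k');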

31 17/43 LSMR Derivation

32-33 18/43 Golub-Kahan bidiagonalization

Given A (m x n) and b (m x 1).

Direct bidiagonalization:  U^T ( b   A ) [ 1 0 ; 0 V ] = U^T ( b   A V ) = ( β_1 e_1   B ),  i.e.  ( b   A V ) = U ( β_1 e_1   B )

Iterative bidiagonalization Bidiag(A, b): half a page in the 1965 Golub-Kahan SVD paper.

34 19/43 Golub-Kahan bidiagonalization (2)

  b = U_{k+1} (β_1 e_1),   A V_k = U_{k+1} B_k,   A^T U_k = V_k B_k^T [ I_k ; 0 ]

where B_k is the (k+1) x k lower bidiagonal matrix with α_1, ..., α_k on the diagonal and β_2, ..., β_{k+1} on the subdiagonal, U_k = ( u_1 ... u_k ), V_k = ( v_1 ... v_k ).
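A minimal MATLAB sketch of Bidiag(A, b) matching these relations (bidiag_gk is an illustrative name; production codes keep only the latest u and v rather than full U and V):

    function [U, V, alpha, beta] = bidiag_gk(A, b, k)
      [m, n] = size(A);
      U = zeros(m, k+1);  V = zeros(n, k);
      alpha = zeros(k, 1);  beta = zeros(k+1, 1);
      beta(1) = norm(b);  U(:,1) = b / beta(1);
      for j = 1:k
        v = A' * U(:,j);
        if j > 1, v = v - beta(j) * V(:,j-1); end
        alpha(j) = norm(v);  V(:,j) = v / alpha(j);
        u = A * V(:,j) - alpha(j) * U(:,j);
        beta(j+1) = norm(u);  U(:,j+1) = u / beta(j+1);
      end
    end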

35-36 20/43 Golub-Kahan bidiagonalization (3)

V_k spans the Krylov subspace:
  span{ v_1, ..., v_k } = span{ A^T b, (A^T A) A^T b, ..., (A^T A)^{k-1} A^T b }

Define x_k = V_k y_k. Subproblems to solve:

  LSQR:   min_{y_k} ||r_k||       = min_{y_k} || β_1 e_1 - B_k y_k ||
  LSMR:   min_{y_k} ||A^T r_k||   = min_{y_k} || β̄_1 e_1 - [ B_k^T B_k ; β̄_{k+1} e_k^T ] y_k ||

where r_k = b - A x_k and β̄_k = α_k β_k.
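An illustrative check of the two subproblems using bidiag_gk from the sketch above: minimize ||r_k|| and ||A^T r_k|| directly over span(V_k). Real LSQR and LSMR obtain the same iterates from short recurrences on the bidiagonal subproblems instead of these dense solves.

    rng(3);  m = 120;  n = 40;  k = 25;
    A = randn(m, n);  b = randn(m, 1);
    [~, V, ~, ~] = bidiag_gk(A, b, k);
    Vk = V(:, 1:k);
    x_lsqr = Vk * ((A*Vk) \ b);            % argmin over span(Vk) of ||b - A x||
    x_lsmr = Vk * ((A'*A*Vk) \ (A'*b));    % argmin over span(Vk) of ||A^T (b - A x)||
    fprintf('||A^T r||: LSQR-like %.2e, LSMR-like %.2e\n', ...
            norm(A'*(b - A*x_lsqr)), norm(A'*(b - A*x_lsmr)));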

37 21/43 Computational and storage requirements

                               Storage (m)   Storage (n)             Work (m)   Work (n)
  MINRES on A^T A x = A^T b    Av            x, v_1, v_2, w_1, w_2   1          8
  LSQR                         Av, u         x, v, w                 3          5
  LSMR                         Av, u         x, v, h, h̄              3          6

where h_k, h̄_k are scalar multiples of w_k, w̄_k.

38-39 22/43 Theoretical properties for min ||Ax - b||

Monotonic for both LSQR and LSMR:
  ||x* - x_k||      LSQR: HS 1952          LSMR: HS 1952
  ||r* - r_k||      LSQR: HS 1952          LSMR: HS 1952
  ||x_k||           LSQR: Steihaug 1983    LSMR: Fong 2012
  ||r_k||           LSQR: PS 1982          LSMR: Fong 2012

In addition, for LSMR: ||A^T r_k|| is monotonic (FS 2011), and ||A^T r_k|| / ||r_k|| is monotonic in most cases.

40 23/43 LSMR Experiments

41-42 24/43 Overdetermined systems

Test data: Tim Davis, University of Florida Sparse Matrix Collection; LPnetlib: Linear Programming Problems. A = (Problem.A)^T, b = Problem.c (127 problems).

Solve min ||Ax - b||_2 with LSQR and LSMR.

43-45 25/43 Backward error estimates

  A^T A x̂ = A^T b,   r̂ = b - A x̂                                (exact solution)
  (A + E_i)^T (A + E_i) x = (A + E_i)^T b,   r = b - A x          (any x)

Two estimates given by Stewart (1975 and 1977):
  E_1 = e x^T / ||x||^2,      ||E_1|| = ||e|| / ||x||,     where e = r̂ - r
  E_2 = r r^T A / ||r||^2,    ||E_2|| = ||A^T r|| / ||r||      (computable)

Theorem:  ||E_2(LSMR)|| <= ||E_2(LSQR)||
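A sketch computing the two estimates for an approximate solution obtained from a few LSQR steps (random test problem, not one from the collection):

    rng(5);  m = 200;  n = 60;
    A = randn(m, n);  b = randn(m, 1);
    xhat = A \ b;  rhat = b - A*xhat;        % exact LS solution and residual
    [x, ~] = lsqr(A, b, 1e-2, 10);           % a rough approximate solution
    r = b - A*x;  e = rhat - r;
    E1 = norm(e) / norm(x);                  % ||E_1|| (needs the exact residual)
    E2 = norm(A'*r) / norm(r);               % ||E_2|| (computable from x alone)
    fprintf('||E_1|| = %.3e, ||E_2|| = %.3e\n', E1, E2);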

46 26/43 log10 ||E_2|| for LSQR and LSMR: typical

[Plot of log(E_2) against iteration count for LSQR and LSMR on lp_pilot_ja (2267 x 940, nnz 14977).]

47 27/43 log10 ||E_2|| for LSQR and LSMR: rare

[Plot of log(E_2) against iteration count for LSQR and LSMR on lp_sc205 (317 x 205, nnz 665).]

48 28/43 Backward error: optimal µ(x)

  µ(x) = min ||E||  such that  (A + E)^T (A + E) x = (A + E)^T b

Exact µ(x) (Waldén, Karlson & Sun 1995; Higham 2002):
  C = [ A   (||r|| / ||x||) ( I - r r^T / ||r||^2 ) ],   µ(x) = σ_min(C)

49-50 29/43 Backward error: optimal µ(x)

  µ(x) = min ||E||  such that  (A + E)^T (A + E) x = (A + E)^T b

Cheaper estimate µ̃(x) (Grcar, Saunders & Su 2007):
  K = [ A ; (||r|| / ||x||) I ],   v = [ r ; 0 ],   ŷ = argmin_y || K y - v ||,   µ̃(x) = || K ŷ || / ||x||

    r = b - A*x;
    eta = norm(r)/norm(x);
    p = colamd(A);                      % sparsity-preserving column ordering (n = size(A,2))
    K = [A(:,p); eta*speye(n)];
    v = [r; zeros(n,1)];
    [c, R] = qr(K, v, 0);               % economy QR: c = Q'*v, so ||K*yhat|| = ||c||
    mutilde = norm(c)/norm(x);

51 30/43 Backward errors for LSQR: typical

[Plot of log(backward error) against iteration count for E_2, E_1, and the optimal backward error, on lp_cre_a (7248 x 3516, nnz 18168).]

52 31/43 Backward errors for LSQR: rare

[Plot of log(backward error) against iteration count for E_2, E_1, and the optimal backward error, on lp_pilot (4860 x 1441, nnz 44375).]

53 32/43 Backward errors for LSMR: typical

[Plot of log(backward error) against iteration count for E_2, E_1, and the optimal backward error, on lp_ken_11 (21349 x 14694, nnz 49058).]

54 33/43 Backward errors for LSMR: rare

[Plot of log(backward error) against iteration count for E_2, E_1, and the optimal backward error, on lp_ship12l (5533 x 1151, nnz 16276).]

55 34/43 For LSMR, E_2 ≈ optimal backward error almost always: typically E_2 ≈ µ(x); in rare cases E_1 ≈ µ(x).

[Two panels repeating the lp_ken_11 and lp_ship12l plots of E_2, E_1, and the optimal backward error for LSMR.]

56 35/43 Optimal backward errors µ(x): seem monotonic for LSMR, usually not for LSQR.

[Plots of log(optimal backward error) against iteration count for LSQR and LSMR: lp_scfxm3 (1800 x 990, nnz 8206), typical for both; lp_cre_c (6411 x 3068, nnz 15977), rare for LSQR, typical for LSMR.]

57 36/43 Optimal backward errors: µ(x_LSMR) <= µ(x_LSQR) almost always.

[Plots of log(optimal backward error) against iteration count for LSQR and LSMR: lp_pilot (4860 x 1441, nnz 44375), typical; lp_standgub (1383 x 361, nnz 3338), rare.]

58 37/43 Errors in x_k

||x_k^LSQR - x*|| <= ||x_k^LSMR - x*|| seems true; ||x_k - x*|| decreases monotonically for both LSMR and LSQR.

[Plots of log||x_k - x*|| against iteration count for LSQR and LSMR: lp_ship12l (5533 x 1151, nnz 16276), lp_pds_02 (7716 x 2953, nnz 16571).]

59 38/43 Square consistent systems Ax = b

Backward error: ||r_k|| / ||x_k||. LSQR is slightly faster than LSMR in most cases.

[Plots of log(||r||/||x||) against iteration count for LSQR and LSMR: Hamm_hcircuit (105676 x 105676, nnz 513072), IBM_EDA_trans5 (116835 x 116835, nnz 749800).]

60 39/43 Underdetermined systems

Ax = b has infinitely many solutions; min ||x|| such that Ax = b has a unique solution.

Theorem: LSQR and LSMR both return the minimum-norm solution.

[Plots of log(||r||/||x||) against iteration count for LSQR and LSMR: U_lp_pds_10 (16558 x 49932, nnz 107605), U_lp_israel (174 x 316, nnz 2443).]
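A small dense check of the theorem (a random full-row-rank system; pinv gives the minimum-norm solution for comparison):

    rng(6);  m = 40;  n = 100;
    A = randn(m, n);  b = randn(m, 1);      % full row rank, so Ax = b is consistent
    [x_lsqr, ~] = lsqr(A, b, 1e-12, 500);
    x_min = pinv(A) * b;                    % minimum-norm solution
    fprintf('||x_lsqr - x_min|| = %.2e,  ||A*x_lsqr - b|| = %.2e\n', ...
            norm(x_lsqr - x_min), norm(A*x_lsqr - b));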

61 40/43 Summary

62-64 41/43 Theoretical properties for Ax = b, A ≻ 0

Monotonic for both CG and MINRES:  ||x* - x_k||,  ||x* - x_k||_A,  ||x_k||
Monotonic for MINRES:  ||r_k||,  ||r_k|| / ||x_k||,  ||r_k|| / (α ||A|| ||x_k|| + β ||b||)

For MINRES, backward errors are monotonic ⇒ safe to stop early.

65-67 42/43 Theoretical properties for min ||Ax - b||

Monotonic for both LSQR and LSMR:  ||x* - x_k||,  ||r* - r_k||,  ||r_k||,  ||x_k||;  x_k converges to the minimum-length solution x* if rank(A) < n
Monotonic for LSMR:  ||A^T r_k||;  ||A^T r_k|| / ||r_k|| almost always;  optimal backward error almost always

For LSMR, optimal backward errors seem monotonic ⇒ safe to stop early.

68 43/43 References

LSMR: An iterative algorithm for sparse least-squares problems. David Fong and Michael Saunders, SIAM J. Sci. Comput., 2011.
CG versus MINRES: An empirical comparison. David Fong and Michael Saunders, SQU Journal for Science, 2012.

Kindest thanks: Georg Bock and colleagues; Phan Thanh An and colleagues.
