Solving large linear systems with multiple right hand sides


1 Solving large linear systems with multiple right hand sides. Julien Langou, CERFACS, Toulouse, France. (p. 1/12)

2 Motivations. In electromagnetism design it is necessary to be able to predict the currents induced by an incident plane wave on a given object. Using the boundary element method on the Maxwell equations, this amounts to solving a linear system A x = b, where A represents the impedance of the object (Ohm), b represents the incident field (Volt) and x represents the current on the object (Ampere).

3 Antenna design. Representation of the electric current due to an antenna on the Citroën C5 car.

4 Monostatic radar cross section. [Figure: monostatic radar cross section (RCS) versus theta for the Airbus 2366; phi = 30 deg, theta = 0:90 deg, vertical polarization, direct solver (no FMM).]

5 Motivation. This class of problem naturally leads to A X = B, that is to say, solving a linear system with multiple right hand sides.

6 Outline. Study of the Gram Schmidt algorithm and its variants; implementation of iterative methods; the electromagnetism applications.

7 Study of the Gram Schmidt algorithm and its variants

8 The GMRES method. The GMRES method (Saad & Schultz, 1986) gives at each step n the approximate solution x_n that solves the least squares problem min || b - A x || over x in x_0 + K_n, where K_n is the Krylov space Span{ r_0, A r_0, ..., A^(n-1) r_0 }.
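As a concrete illustration of this least squares formulation, here is a minimal full GMRES sketch in numpy (illustrative code, not the thesis solver; the name `gmres` and the tolerance defaults are ours):

```python
import numpy as np

def gmres(A, b, x0=None, tol=1e-10, maxiter=50):
    """Minimal full GMRES: at step n, minimize ||b - A x|| over x0 + K_n(A, r0)."""
    m = len(b)
    x0 = np.zeros(m) if x0 is None else x0
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta == 0:
        return x0
    V = [r0 / beta]                        # orthonormal basis of the Krylov space
    H = np.zeros((maxiter + 1, maxiter))   # Hessenberg matrix from the Arnoldi process
    for j in range(maxiter):
        w = A @ V[j]
        for i in range(j + 1):             # modified Gram-Schmidt orthogonalization
            H[i, j] = V[i] @ w
            w = w - H[i, j] * V[i]
        H[j + 1, j] = np.linalg.norm(w)
        # small least squares problem: min || beta e_1 - Hbar y ||
        e1 = np.zeros(j + 2); e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        res = np.linalg.norm(e1 - H[:j + 2, :j + 1] @ y)
        if res <= tol * beta or H[j + 1, j] == 0:
            break
        V.append(w / H[j + 1, j])
    return x0 + np.column_stack(V[:j + 1]) @ y
```

In the thesis context A is dense, complex and applied through the FMM; a plain matrix stands in here.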

9 Link with QR factorization. To summarize, the GMRES method needs: a matrix vector product operation (and also a preconditioner operation); an orthogonalization scheme.

10 Gram Schmidt algorithm. Starting from A = (a_1, ..., a_n) with full rank, the Gram Schmidt algorithm computes Q = (q_1, ..., q_n) and R so that A = Q R, where Q has orthonormal columns and R is upper triangular with positive elements on the diagonal.

11 Gram Schmidt algorithm. Jörgen P. Gram, Erhard Schmidt. [Algorithm loop lost in extraction.]

12 Outline of the first part. When modified Gram Schmidt generates a well conditioned set of vectors. Reorthogonalization issues: twice is enough. Reorthogonalization issues: about the selective reorthogonalization criterion. Reorthogonalization issues: an a posteriori reorthogonalization algorithm in the modified Gram Schmidt algorithm.

13 Classical/Modified. Classical Gram Schmidt (CGS): for j = 1, ..., n, compute all coefficients r_ij = q_i' a_j from the original column a_j, then subtract the projections and normalize. Modified Gram Schmidt (MGS): for j = 1, ..., n, compute each coefficient r_ij = q_i' w from the working vector w, updating w after each projection, then normalize.

14 An experimental fact. Let us take A = hilb(12), the 12 by 12 Hilbert matrix. It is an example of an ill conditioned matrix. The computed Q is far from being orthogonal.
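The Hilbert matrix experiment can be reproduced in a few lines of numpy (a sketch with our own naming; `cgs` and `mgs` are textbook implementations, not the thesis code):

```python
import numpy as np

def cgs(A):
    """Classical Gram-Schmidt: coefficients computed from the original column a_j."""
    m, n = A.shape
    Q = np.zeros((m, n)); R = np.zeros((n, n))
    for j in range(n):
        R[:j, j] = Q[:, :j].T @ A[:, j]        # all projections from a_j at once
        w = A[:, j] - Q[:, :j] @ R[:j, j]
        R[j, j] = np.linalg.norm(w)
        Q[:, j] = w / R[j, j]
    return Q, R

def mgs(A):
    """Modified Gram-Schmidt: the working vector is updated after each projection."""
    m, n = A.shape
    Q = np.zeros((m, n)); R = np.zeros((n, n))
    for j in range(n):
        w = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ w              # coefficient from the *updated* vector
            w = w - R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(w)
        Q[:, j] = w / R[j, j]
    return Q, R

# the experiment from the slides: the 12 by 12 Hilbert matrix a_ij = 1/(i+j-1)
n = 12
i, j = np.indices((n, n))
A = 1.0 / (i + j + 1)
for name, alg in [("CGS", cgs), ("MGS", mgs)]:
    Q, _ = alg(A)
    print(name, "loss of orthogonality:", np.linalg.norm(np.eye(n) - Q.T @ Q, 2))
```

Both factorizations lose orthogonality on this matrix, but (as the later slides explain) the Q computed by MGS remains well conditioned.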

15 Round off errors. Computation in floating point arithmetic implies in general round off errors. Standards exist to control these errors; we work here with the IEEE 754 standard: fl(a op b) = (a op b)(1 + eps), |eps| <= u.

16 Björck (1967). For A with full rank, MGS computes Q and R so that || I - Q' Q || is bounded by a constant times kappa(A) u and || A - Q R || by a constant times || A || u, where u is the unit round off. These results hold under an assumption of the form c kappa(A) u < 1.

17 An experimental fact. Let us take A = hilb(12), the 12 by 12 Hilbert matrix. It is an example of an ill conditioned matrix. However, although the computed Q is far from being orthogonal, Q is well conditioned.

18 Giraud Langou (2002) IMAJNA. Let us define the polar factor U of Q. Schönemann (1966) and Higham (1994) showed that U is the closest matrix with orthonormal columns to Q; Björck (1967) showed a bound on || Q - U || in terms of kappa(A) u. This implies that the computed Q is well conditioned.

19 Giraud Langou (2002) IMAJNA. [Figure lost in extraction.]

20 Two MGS loops: MGS applied twice (MGS2).

21 TMGS / MGS2. [Pseudocode lost in extraction: the MGS loop is run a second time on the computed Q to restore orthogonality.]
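A sketch of the two-loop idea (MGS applied twice), with illustrative names:

```python
import numpy as np

def mgs(A):
    """One modified Gram-Schmidt sweep; returns Q, R with A = Q R."""
    m, n = A.shape
    Q = A.astype(float).copy(); R = np.zeros((n, n))
    for j in range(n):
        for i in range(j):
            R[i, j] = Q[:, i] @ Q[:, j]
            Q[:, j] -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(Q[:, j])
        Q[:, j] /= R[j, j]
    return Q, R

def mgs2(A):
    """MGS2: a second MGS sweep on the computed Q restores orthogonality
    to machine precision ("twice is enough"), at twice the cost."""
    Q1, R1 = mgs(A)
    Q2, R2 = mgs(Q1)
    return Q2, R2 @ R1          # A = Q2 (R2 R1)

i, j = np.indices((12, 12))
A = 1.0 / (i + j + 1)           # the ill-conditioned Hilbert matrix again
Q, R = mgs2(A)
print("orthogonality after two sweeps:", np.linalg.norm(np.eye(12) - Q.T @ Q, 2))
```

The second sweep works because the first Q, although far from orthogonal, is well conditioned.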

22 MGS2(K). [Pseudocode lost in extraction: as MGS, but after orthogonalizing each vector a selective criterion is tested; if the test fails then reorthogonalize.]

23 Giraud Langou Rozložník (2001). Twice is enough for CGS2 (and MGS2 as well) if A is not too ill conditioned. Sketch of the proof: a bound is established for each step, and the result follows by induction on the step index. [Formulas lost in extraction.]

24 Giraud and Langou (2003) SISC. A rather commonly used selective criterion (Rutishauser (1967), Gander, Ruhe (1983), Hoffmann (1989), Dax (2000)): reorthogonalize when the norm of the orthogonalized vector drops by more than a fixed factor K. Giraud Langou (2003): no proof exists for this criterion... Related criteria: Abdelmalek (1971) and Kiełbasiński (1974); Daniel, Gragg, Kaufman and Stewart (1976). [Formulas lost in extraction.]
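The commonly used K-criterion can be sketched as follows (an assumption on our part: we implement the classical norm-drop test, reorthogonalizing only when the norm falls by more than a factor K; K = 10 is an arbitrary illustrative value, and this is not the thesis code):

```python
import numpy as np

def mgs2_k(A, K=10.0):
    """MGS with selective reorthogonalization, MGS2(K) style: after
    orthogonalizing a_j, reorthogonalize only if its norm dropped by
    more than a factor K (the classical norm-drop criterion)."""
    m, n = A.shape
    Q = np.zeros((m, n)); R = np.zeros((n, n))
    reorths = 0
    for j in range(n):
        w = A[:, j].copy()
        norm_in = np.linalg.norm(w)
        for i in range(j):
            R[i, j] = Q[:, i] @ w
            w -= R[i, j] * Q[:, i]
        if np.linalg.norm(w) < norm_in / K:   # significant cancellation: reorthogonalize
            reorths += 1
            for i in range(j):
                s = Q[:, i] @ w
                R[i, j] += s
                w -= s * Q[:, i]
        R[j, j] = np.linalg.norm(w)
        Q[:, j] = w / R[j, j]
    return Q, R, reorths

i, j = np.indices((12, 12))
A = 1.0 / (i + j + 1)                          # Hilbert matrix test case
Q, R, nre = mgs2_k(A)
print("columns reorthogonalized:", nre)
print("orthogonality:", np.linalg.norm(np.eye(12) - Q.T @ Q, 2))
```

The slides' point stands: this criterion works well in practice (as on the Hilbert matrix here), but counter-example matrices exist and no proof is available.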

25 Giraud Langou (2003) SISC. [Table: counter-example matrix, comparing MGS2(K), MGS2, CGS2(K) and CGS2; entries lost in extraction.]

26 Giraud Langou (2003) SISC. [Table: counter-example matrix, comparing MGS2(K), MGS2, CGS2(K) and CGS2; entries lost in extraction.]

27 Giraud Langou (2003) SISC. [Table: counter-example matrix, comparing MGS2(K), MGS2, CGS2(K) and CGS2; entries lost in extraction.]

28 Blindy MGS gives Q and R. [Formulas lost in extraction.]

29 Blindy MGS. [Formulas lost in extraction: the factor given by blindy MGS on A.]

30 Giraud Gratton Langou (2003). Singular values study, under an assumption on the singular values. [Formulas lost in extraction.]

31 Proof. Follow Björck and Paige (1992), with a modified update step. [Formulas lost in extraction.]

32 Interpretation in terms of rank. [Figure: singular values of A plotted against the level c u ||A||; labels: sigma_n(A), kappa(A), c u.]

33 How to find the correction? The rank 1 case: run MGS to obtain Q and R; form an auxiliary matrix and find its singular value decomposition; compute the correction terms and form the update. This algorithm needs few flops; for comparison, we recall the cost of a complete reorthogonalization loop. [Formulas and flop counts lost in extraction.]

34 Application to iterative methods. [Table: characteristics and number of iterations for the four test cases: (a) cetaf, (b) airbus, (c) sphere, (d) almond; entries lost in extraction.]

35 Application to iterative methods. [Table: residual errors for MGS, MGS2(K) and a posteriori reorthogonalization on Cetaf, Airbus, Sphere and Almond; entries lost in extraction.]

36 Application to iterative methods. [Table: orthogonality for MGS, MGS2(K) and a posteriori reorthogonalization on Cetaf, Airbus, Sphere and Almond; entries lost in extraction.]

37 Summary. MGS generates a well conditioned set of vectors; twice is enough; a new selective reorthogonalization criterion has been exhibited; counter example matrices for the K criterion are given; a new a posteriori reorthogonalization scheme is provided.

38 Future work. The singular case has not been treated (RRQR, SVD, ...); currently we are also interested in CGS, as well as in the study of the Gram Schmidt algorithm with a non Euclidean scalar product.

39 Implementation of iterative methods

40 The electromagnetism applications

41 Motivations. The boundary element method applied to the Maxwell equations leads to complex, dense and huge linear systems to solve, where the matrix is accessed via matrix vector products using the fast multipole method and a Krylov solver is used. Previous work: Frobenius norm minimizer preconditioner; flexible GMRES solver (inner GMRES preconditioner); spectral low rank update preconditioner. It is still problematic to obtain a complete monostatic radar cross section.

42 Outline of the third part. Techniques to improve single right hand side solvers for the multiple right hand side problem; the seed GMRES algorithm: two cases, two problems and two remedies; linear dependencies of the right hand sides in the electromagnetism context.

43 Collaborators. This presentation uses results and codes provided by Guillaume Alléon (EADS), Guillaume Sylvand (INRIA Sophia Antipolis, CERMICS), Bruno Carpentieri and Émeric Martin. Iain S. Duff (CERFACS, RAL) and Luc Giraud (CERFACS) supervise the work at CERFACS. Francis Collino is also working with us.

44 The single right hand side solvers. 1. Increase the amount of work in the preconditioner in order to increase its efficiency. In our case, for example, we can: 1.a. increase the density of the Frobenius norm minimizer preconditioner (Bruno); 1.b. add a spectral low rank update that shifts the smallest eigenvalues close to one (Émeric).

45 The single right hand side solvers. 2. Find an initial approximate solution for the current system from the previous systems solved. 2.a. An obvious strategy is to take the initial guess from the previous solutions. 2.b. Another strategy is to solve the least squares problem of the new right hand side over the span of the previously solved systems, then take the corresponding combination of previous solutions as the initial guess.
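Strategy 2.b can be sketched as follows (hypothetical names; in the real solver the products A x_i are already available from the previous solves, so forming W costs nothing extra):

```python
import numpy as np

def initial_guess(A, X_prev, b):
    """Strategy 2.b: least squares fit of the new right hand side b on the
    span of A @ X_prev, then take x0 = X_prev @ y as the initial guess."""
    W = A @ X_prev                                 # columns A x_i from previous solves
    y, *_ = np.linalg.lstsq(W, b, rcond=None)
    return X_prev @ y

rng = np.random.default_rng(0)
n = 50
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
# previously solved systems
B_prev = rng.standard_normal((n, 3))
X_prev = np.linalg.solve(A, B_prev)
# a new right hand side nearly dependent on the previous ones
b = B_prev @ np.array([0.5, 0.3, 0.2]) + 1e-3 * rng.standard_normal(n)
x0 = initial_guess(A, X_prev, b)
print("residual without guess:", np.linalg.norm(b))
print("residual with guess   :", np.linalg.norm(b - A @ x0))
```

The closer the new right hand side is to the span of the previous ones, the better the starting residual, which is exactly the electromagnetism situation described later.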

46 The single right hand side solvers. [Figure: convergence of full GMRES on the Airbus 2366 (30 deg, 30 deg) without initial guess; ||r_n|| / ||b|| versus iterations.]

47 The single right hand side solvers. [Figure: convergence of full GMRES on the Airbus 2366 (30 deg, 30 deg) without and with initial guess; ||r_n|| / ||b|| versus iterations.]

48 The single right hand side solvers. 1. Increase the amount of work in the preconditioner in order to increase its efficiency. 2. Find an initial approximate solution for the current system from the previous systems solved. 3. From the gathered multiple GMRES iterations.

49 The almond case. A first test case: the almond and the CFIE formulation (JINA 2002).

50 The almond case. [Table: iterations and elapsed time (s) for seed GMRES and GMRES with initial guess strategy 2, on the Compaq machine at CERFACS; entries lost in extraction.]

51 The almond case. [Table: iterations and elapsed time (s) for seed GMRES and GMRES with initial guess strategy 2, on the Compaq machine at CERFACS; entries lost in extraction.]

52 The almond case. [Figure: elapsed time (s) and number of iterations versus the size of the restart for restarted seed GMRES.]

53 The almond case. [Table: iterations and elapsed time (s) for seed GMRES, GMRES with initial guess strategy 2, and restarted seed GMRES, on the Compaq machine at CERFACS; entries lost in extraction.]

54 The cobra case. A second test case: the cobra and the EFIE formulation.

55 The cobra case. [Table: iterations for the default guess, strategy 2 and seed GMRES, for illuminating angles (0.0, 0.0) through (5.5, 0.0); entries lost in extraction.]

56 The cobra case. [Table: iterations for the default guess, strategy 2 and seed GMRES, for illuminating angles (0.0, 0.0) through (5.5, 0.0); entries lost in extraction.]

57 The cobra case. [Figure: comparison between the seed GMRES method and the GMRES method with initial guess strategy 2.]

58 The cobra case. Seed moral in Le Cid: "Ô rage ! Ô désespoir ! Ô vieillesse ennemie ! N'ai-je donc tant vécu que pour cette infamie ?" ("O rage! O despair! O enemy old age! Have I then lived so long only for this infamy?") Corneille, Le Cid.

59 Seed GMRES + Spectral LRU. [Figure: seed GMRES versus seed GMRES with spectral low rank update (15), per right hand side.]

60 Seed GMRES + Spectral LRU. [Figure: seed GMRES versus seed GMRES with spectral low rank update (15), iterations.]

61 Seed GMRES + Spectral LRU. "In a race, starting close to the finish line and running fast ensures that you finish first!"

62 Seed GMRES + Spectral LRU. "In a race, starting close to the finish line and running fast ensures that you finish first!" Luc Giraud (my thesis advisor).

63 Linear dependencies of the RHS. A sphere of 1 meter and the Airbus illuminated by different plane waves. [Figure: singular value distribution of the right hand side block, for frequencies ranging up to 4.52e+09 Hz and 6.858e+09 Hz.]

64 [Table: size in wavelengths, number of dof and frequency (GHz) for the cetaf, sphere, Airbus, cobra and almond test cases; entries lost in extraction.]

65 Spherical harmonic decomposition. The number of harmonics is chosen such that the error made by representing the right hand sides in the basis of the spherical harmonics is below a prescribed tolerance. This is joint work with Francis Collino.

66 Comparison. [Table: for each test case (cetaf, sphere, Airbus, cobra, almond), the number of singular values of the right hand side block greater than a threshold, compared with the number predicted by the spherical harmonic decomposition; entries lost in extraction.]

67 Comparison. [Table: same comparison, continued; entries lost in extraction.]

68 Comparison. [Table: same comparison, continued; entries lost in extraction.]

69 Linear dependency of the RHS. Since the block of right hand sides can be written (approximately) as a low rank product, to solve the systems we can: 1. solve the systems with the few combined right hand sides and then recover the individual solutions by recombination; 2. with the block GMRES algorithm, minimize over the block Krylov space constructed with the combined right hand sides.

70 On the Airbus. [Figure: for each right hand side (angle theta), the number of iterations (matrix vector products) required to converge, for GMRES, GMRES with SpU, block GMRES SVD (49) and block GMRES SVD (49) SpU. The test example is the Airbus.]

71 Results. [Table: DOF, # RHS, # SVD, # iterations, average # iterations (classic and SVD) for the coated cone sphere, almond, cobra and Airbus with LRU(15); entries lost in extraction.]

72 Summary. Adapting a single right hand side solver to the multiple right hand side case is worth considering; the seed GMRES algorithm may be greatly improved by restarting or by a spectral low rank update; linear dependency of the right hand sides is systematic in the electromagnetism context; SVD preprocessing combined with the block GMRES algorithm is the best method at the moment.

73 Perspectives (1): block/seed. Keep the vectors in core to gain in the orthogonalization process. Restarting block GMRES has the same catastrophic effect (in our case) as restarting GMRES and prevents convergence on hard cases; block GMRES with restart and spectral low rank update is therefore tested on big test cases where full block GMRES is no longer affordable. We are also thinking about how to implement the block GMRES DR algorithm in the code, since GMRES DR works fine.

74 Perspectives (2): relax. Relaxing the accuracy of the matrix vector product operation during the convergence of GMRES, following the Bouras, Frayssé and Giraud (2000) strategy. [Figure: convergence for prec 1, prec 2 and prec 3.]

75 Jury. Guillaume Alléon, Director of project, EADS CCR; Åke Björck, Professor, Linköping University; Iain S. Duff, Project leader, CERFACS and RAL; Luc Giraud, Senior researcher, CERFACS; Gene H. Golub, Professor, Stanford University; Gérard Meurant, Research director, CEA; Chris C. Paige, Professor, McGill University; Yousef Saad, Professor, University of Minnesota.

76 Study of the Gram Schmidt algorithm and its variants

77 Classical/Modified. Classical Gram Schmidt (CGS) and Modified Gram Schmidt (MGS). [Pseudocode lost in extraction.]

78 The Arnoldi algorithm. The Arnoldi method provides at each step n an orthonormal basis V_n of the Krylov space that verifies A V_n = V_(n+1) H_n, with H_n an (n+1) x n Hessenberg matrix. So: find y_n that minimizes || beta e_1 - H_n y || and set x_n = x_0 + V_n y_n.

79 Björck (1967). For A with full rank with singular values sigma_1 >= ... >= sigma_n, MGS computes Q and R so that || I - Q' Q || and || A - Q R || are bounded, where the constants depend on the dimensions and the details of the arithmetic, and u is the unit round off. [Formulas lost in extraction.]

80 Björck (1967). These results hold under assumptions bounding kappa(A) u and the problem dimensions. [Formulas lost in extraction.]

81 Gram Schmidt algorithm. [Pseudocode and flop count lost in extraction.]

82 Unitary transformations. Gram Schmidt versus Givens and Householder.

83 Householder & Givens. Idea: find a series of unitary matrices such that, for every k, the first k columns of the transformed matrix are upper triangular. Choice: typically we use Givens rotations or Householder reflections. [Formulas and costs lost in extraction.]

84 Advantages & drawbacks. Advantages: possibility to exploit the structure of the matrix (e.g. Hessenberg with Givens rotations); if only the R factor is needed then the number of flops is less than for the Gram Schmidt algorithm; the Q factor has columns orthonormal up to machine precision (Wilkinson, 1963). Drawbacks: obtaining Q doubles the cost of the method; the fact that unitary transformations imply a factor with orthonormal columns depends highly on the dot product used.

85 Björck and Paige (1992). For A with full rank with given singular values, MGS computes Q and R such that A + E = Q_hat R for some exactly orthonormal Q_hat, with || E || bounded by a constant times u || A ||, where u is the unit round off. This result holds under assumptions bounding kappa(A) u.

86 Daniel et al. (1976). A result on the series of vectors produced by iterated reorthogonalization of a vector against an orthonormal set. [Formulas lost in extraction.]

87 Giraud, Langou and Rozložník. [Formulas lost in extraction.]

88 How to find the correction? We have expressed the correction as a sum of low rank matrices. The choice of the rank is a compromise between accuracy and efficiency.

89 2D case. GOLDORAK.

90 2D case. We run full GMRES with MGS, without preconditioner, with a right hand side corresponding to an incident wave at a given angle. The tolerance is fixed. [Values lost in extraction.]

91 2D example. From Greenbaum, Rozložník and Strakoš (1997) we know a bound relating the GMRES behaviour to the conditioning of the generated basis. [Formulas lost in extraction.]

92 2D example. So from Björck (1967) we can check that the loss of orthogonality is proportional to the quantity predicted by the theory. [Formulas lost in extraction.]

93 2D example. As the second smallest singular value is well separated and the matrices involved are well conditioned, we deduce that the quantity of interest is reasonably big. [Formulas lost in extraction.]

94 Linear least squares problems with multiple RHS with MGS, when using an orthonormal basis. [Formulas lost in extraction.]

95 Application to iterative methods. [Table: MGS2(K) variants on the Cetaf, Airbus, Sphere and Almond test cases; entries lost in extraction.]

96 Implementation of iterative methods

97 Outline. GMRES DR; seed GMRES; block GMRES.

98 GMRES DR. A direct implementation of the update needs extra vectors of full size, whereas if we first perform an LU decomposition of the small matrix and then apply the update, fewer vectors are needed. [Formulas lost in extraction.]

99 The electromagnetism applications

100 One right hand side solvers. [Figure: spectrum of the Cetaf preconditioned with the default Frobenius norm minimizer preconditioner.]

101 One right hand side solvers. [Table: time of assembly and application of the FMM (prec3), the Frobenius preconditioner and the spectral update for the cetaf, Airbus, cobra and almond test cases; the number of shifted eigenvalues is given in brackets; entries lost in extraction.]

102 One right hand side solvers. [Figure: number of full GMRES iterations versus illuminating angle, Frob versus SpU (20), on the Airbus 2366.]

103 Gathered multiple GMRES. [Tables: unitary FMM time (s), unitary preconditioner time (s) and totals, for (a) the Airbus, gathered GMRES versus GMRES with zero as initial guess, and (b) the coated cone sphere 604, gathered GMRES versus GMRES with strategy 2 for the initial guess; entries lost in extraction.]

104 The seed GMRES algorithm (1). We first run GMRES to solve the first linear system; this amounts to solving the linear least squares problem min_y || b_1 - A (x_0 + V_n y) ||, where V_n spans the Krylov space of size n built with an Arnoldi process on the matrix using the starting residual. In most applications the computational burden lies in the matrix vector products and the scalar products required to solve this linear least squares problem. In seed GMRES the subsequent right hand sides benefit from this work: an effective initial guess for the system with right hand side b_k is obtained by solving the same linear least squares problem but with this other right hand side, namely min_y || b_k - A (x_0 + V_n y) ||. The additional cost is a small number of dot products and zaxpy operations per right hand side.
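The projection step described above can be sketched as follows (illustrative numpy, not the thesis code; the seed system's Arnoldi basis V and Hessenberg matrix H are reused for a new right hand side):

```python
import numpy as np

def arnoldi(A, r0, n):
    """Arnoldi process with MGS: A V_n = V_(n+1) Hbar_n."""
    m = len(r0)
    V = np.zeros((m, n + 1)); H = np.zeros((n + 1, n))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(n):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def seed_guess(V, H, b):
    """Seed GMRES: reuse the seed system's Krylov space for a new RHS b.
    Minimize ||b - A V y|| = ||b - V_(n+1) Hbar y||; the extra cost is the
    dot products c = V' b plus a small least squares solve."""
    c = V.T @ b                         # projection of the new RHS on the basis
    y, *_ = np.linalg.lstsq(H, c, rcond=None)
    return V[:, :-1] @ y                # initial guess for the new system

rng = np.random.default_rng(2)
m, n = 100, 30
A = np.eye(m) + 0.02 * rng.standard_normal((m, m))
b1 = rng.standard_normal(m)             # the seed right hand side
V, H = arnoldi(A, b1, n)
b2 = b1 + 0.05 * rng.standard_normal(m) # a nearby subsequent right hand side
x0 = seed_guess(V, H, b2)
print("residual reduction:", np.linalg.norm(b2 - A @ x0) / np.linalg.norm(b2))
```

No new matrix vector products are needed for the guess; GMRES then restarts from x0 for the new system.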

105 The seed GMRES algorithm (2). [Figure: norm of the residuals after each minimization on the Krylov spaces K_1, ..., K_9, versus theta. The test example is the cobra.]

106 The almond case. Starting the seed GMRES method from one right hand side, we obtain a sequence for the number of iterations over the first right hand sides. [Values lost in extraction.]

107 The almond case. Starting the seed GMRES method from one right hand side, we obtain a sequence for the number of iterations. We define the transient phase as the first few right hand sides; the stationary phase is the remainder, where the number of iterations does not change much. It is well described by a plateau at a value equal to the average number of iterations for the right hand sides solved in this stationary phase. Finally we obtain a model law describing the number of iterations of the seed GMRES method on 15 right hand sides starting from a given one. We assume that this behaviour holds for any starting right hand side. [Values lost in extraction.]

108 Kernel operations. [Table: elapsed time for each basic operation (zaxpy, zdscal, zcopy, zdot, precond, FMM) needed by the iterative solvers, per test case and number of processors; entries lost in extraction.]

109 Kernel operations. [Table: percentage of time required by 100 calls to each basic operation with respect to one call to the FMM; entries lost in extraction.]

110 The cobra case. [Table: iterations and elapsed time (s) for the default guess, strategy 2 and seed GMRES; entries lost in extraction.]

111 The cobra case. "Rien ne sert de courir, il faut partir à point" ("there is no point in running; you have to start on time"). Jean de La Fontaine.

112 Special feature of the RHS. The entries of the right hand side corresponding to a plane wave coming from a given direction with a given wave number are complex exponentials sampled on the object. [Formula lost in extraction.]

113 Linear dependency of the RHS: seed GMRES with SVD preprocessing. Lötstedt and Nilsson (2002) use initial guess strategy 2.b: solve the least squares problem over the previous solutions, then take the result as the initial guess for seed GMRES.

114 Control of the error. [Table: # SVD, # iterations, # verifications and error levels; entries lost in extraction.]

115 Cost of the SVD preprocessing. [Table: size of the problem, elapsed time for the SVD, elapsed time for GMRES, # procs; entries lost in extraction.] Elapsed time to compute the SVD of the right hand sides for the sphere. For comparison we give the average elapsed time for the solution of one right hand side using full GMRES (assembly of the preconditioner not included). The backward error for the GMRES solve is fixed.

116 On the coated cone sphere. [Table: iterations and elapsed time (s) for four competitive methods on the coated cone sphere: GMRES with the second initial guess strategy, seed GMRES, seed GMRES SVD and block GMRES SVD; entries lost in extraction.]

117 Perspectives (3/4). Toward an adapted stopping criterion. [Figures: backward error ||A (x_n - x_ref)|| / ||A x_ref|| and RCS errors versus iterations, on the Airbus 2366 and the coated cone 604.]

118 Perspectives (4/4). [Figure: Cetaf 5391, CFIE, (0 deg, 90 deg), symmetric Frobenius preconditioner; convergence of GMRES FMM prec 3 and SQMR with no FMM and with FMM prec 1, 2, 3 in single and double precision.]

119 GMRES DR. [Figure: convergence of full GMRES, GMRES(30), GMRES DR(30,20), GMRES DR(20,10) + SLRU(10) and GMRES(10) + SLRU(20) versus iterations.]

120 GMRES DR. [Figure: eigenvalues from ARPACK versus harmonic Ritz values from GMRES DR(30,20).]

121 GMRES DR. [Figure: path of the first harmonic Ritz value; eigenvalues from ARPACK versus harmonic Ritz values from GMRES DR(30,20).]


More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

NUMERICS OF THE GRAM-SCHMIDT ORTHOGONALIZATION PROCESS

NUMERICS OF THE GRAM-SCHMIDT ORTHOGONALIZATION PROCESS NUMERICS OF THE GRAM-SCHMIDT ORTHOGONALIZATION PROCESS Miro Rozložník Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic email: miro@cs.cas.cz joint results with Luc Giraud,

More information

Lanczos tridigonalization and Golub - Kahan bidiagonalization: Ideas, connections and impact

Lanczos tridigonalization and Golub - Kahan bidiagonalization: Ideas, connections and impact Lanczos tridigonalization and Golub - Kahan bidiagonalization: Ideas, connections and impact Zdeněk Strakoš Academy of Sciences and Charles University, Prague http://www.cs.cas.cz/ strakos Hong Kong, February

More information

Gram-Schmidt Orthogonalization: 100 Years and More

Gram-Schmidt Orthogonalization: 100 Years and More Gram-Schmidt Orthogonalization: 100 Years and More September 12, 2008 Outline of Talk Early History (1795 1907) Middle History 1. The work of Åke Björck Least squares, Stability, Loss of orthogonality

More information

Rounding error analysis of the classical Gram-Schmidt orthogonalization process

Rounding error analysis of the classical Gram-Schmidt orthogonalization process Cerfacs Technical report TR-PA-04-77 submitted to Numerische Mathematik manuscript No. 5271 Rounding error analysis of the classical Gram-Schmidt orthogonalization process Luc Giraud 1, Julien Langou 2,

More information

Krylov Subspace Methods that Are Based on the Minimization of the Residual

Krylov Subspace Methods that Are Based on the Minimization of the Residual Chapter 5 Krylov Subspace Methods that Are Based on the Minimization of the Residual Remark 51 Goal he goal of these methods consists in determining x k x 0 +K k r 0,A such that the corresponding Euclidean

More information

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES 48 Arnoldi Iteration, Krylov Subspaces and GMRES We start with the problem of using a similarity transformation to convert an n n matrix A to upper Hessenberg form H, ie, A = QHQ, (30) with an appropriate

More information

Additive and multiplicative two-level spectral preconditioning for general linear systems

Additive and multiplicative two-level spectral preconditioning for general linear systems Additive and multiplicative two-level spectral preconditioning for general linear systems B. Carpentieri L. Giraud S. Gratton CERFACS Technical Report TR/PA/04/38 Abstract Multigrid methods are among the

More information

Rounding error analysis of the classical Gram-Schmidt orthogonalization process

Rounding error analysis of the classical Gram-Schmidt orthogonalization process Numer. Math. (2005) 101: 87 100 DOI 10.1007/s00211-005-0615-4 Numerische Mathematik Luc Giraud Julien Langou Miroslav Rozložník Jasper van den Eshof Rounding error analysis of the classical Gram-Schmidt

More information

Recent advances in approximation using Krylov subspaces. V. Simoncini. Dipartimento di Matematica, Università di Bologna.

Recent advances in approximation using Krylov subspaces. V. Simoncini. Dipartimento di Matematica, Università di Bologna. Recent advances in approximation using Krylov subspaces V. Simoncini Dipartimento di Matematica, Università di Bologna and CIRSA, Ravenna, Italy valeria@dm.unibo.it 1 The framework It is given an operator

More information

On the influence of eigenvalues on Bi-CG residual norms

On the influence of eigenvalues on Bi-CG residual norms On the influence of eigenvalues on Bi-CG residual norms Jurjen Duintjer Tebbens Institute of Computer Science Academy of Sciences of the Czech Republic duintjertebbens@cs.cas.cz Gérard Meurant 30, rue

More information

Solving large sparse Ax = b.

Solving large sparse Ax = b. Bob-05 p.1/69 Solving large sparse Ax = b. Stopping criteria, & backward stability of MGS-GMRES. Chris Paige (McGill University); Miroslav Rozložník & Zdeněk Strakoš (Academy of Sciences of the Czech Republic)..pdf

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 18 Outline

More information

The Loss of Orthogonality in the Gram-Schmidt Orthogonalization Process

The Loss of Orthogonality in the Gram-Schmidt Orthogonalization Process ELSEVFX Available online at www.sciencedirect.corn =...c.@o,..~.- An nternational Journal computers & mathematics with applications Computers and Mathematics with Applications 50 (2005) 1069-1075 www.elsevier.com/locate/camwa

More information

HOW TO MAKE SIMPLER GMRES AND GCR MORE STABLE

HOW TO MAKE SIMPLER GMRES AND GCR MORE STABLE HOW TO MAKE SIMPLER GMRES AND GCR MORE STABLE PAVEL JIRÁNEK, MIROSLAV ROZLOŽNÍK, AND MARTIN H. GUTKNECHT Abstract. In this paper we analyze the numerical behavior of several minimum residual methods, which

More information

The in uence of orthogonality on the Arnoldi method

The in uence of orthogonality on the Arnoldi method Linear Algebra and its Applications 309 (2000) 307±323 www.elsevier.com/locate/laa The in uence of orthogonality on the Arnoldi method T. Braconnier *, P. Langlois, J.C. Rioual IREMIA, Universite de la

More information

INSTITUTE for MATHEMATICS. INSTITUTE for MATHEMATICS and SCIENTIFIC COMPUTING

INSTITUTE for MATHEMATICS. INSTITUTE for MATHEMATICS and SCIENTIFIC COMPUTING INSTITUTE for MATHEMATICS (Graz University of Technology) & INSTITUTE for MATHEMATICS and SCIENTIFIC COMPUTING (University of Graz) B. Carpentieri A matrix-free two-grid preconditioner for solving boundary

More information

Arnoldi Methods in SLEPc

Arnoldi Methods in SLEPc Scalable Library for Eigenvalue Problem Computations SLEPc Technical Report STR-4 Available at http://slepc.upv.es Arnoldi Methods in SLEPc V. Hernández J. E. Román A. Tomás V. Vidal Last update: October,

More information

From Direct to Iterative Substructuring: some Parallel Experiences in 2 and 3D

From Direct to Iterative Substructuring: some Parallel Experiences in 2 and 3D From Direct to Iterative Substructuring: some Parallel Experiences in 2 and 3D Luc Giraud N7-IRIT, Toulouse MUMPS Day October 24, 2006, ENS-INRIA, Lyon, France Outline 1 General Framework 2 The direct

More information

AM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition

AM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition AM 205: lecture 8 Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition QR Factorization A matrix A R m n, m n, can be factorized

More information

Iterative methods for Linear System

Iterative methods for Linear System Iterative methods for Linear System JASS 2009 Student: Rishi Patil Advisor: Prof. Thomas Huckle Outline Basics: Matrices and their properties Eigenvalues, Condition Number Iterative Methods Direct and

More information

6.4 Krylov Subspaces and Conjugate Gradients

6.4 Krylov Subspaces and Conjugate Gradients 6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P

More information

Preconditioned Parallel Block Jacobi SVD Algorithm

Preconditioned Parallel Block Jacobi SVD Algorithm Parallel Numerics 5, 15-24 M. Vajteršic, R. Trobec, P. Zinterhof, A. Uhl (Eds.) Chapter 2: Matrix Algebra ISBN 961-633-67-8 Preconditioned Parallel Block Jacobi SVD Algorithm Gabriel Okša 1, Marián Vajteršic

More information

Dense Matrices for Biofluids Applications

Dense Matrices for Biofluids Applications Dense Matrices for Biofluids Applications by Liwei Chen A Project Report Submitted to the Faculty of the WORCESTER POLYTECHNIC INSTITUTE In partial fulfillment of the requirements for the Degree of Master

More information

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.4: Krylov Subspace Methods 2 / 51 Krylov Subspace Methods In this chapter we give

More information

Matrix Algorithms. Volume II: Eigensystems. G. W. Stewart H1HJ1L. University of Maryland College Park, Maryland

Matrix Algorithms. Volume II: Eigensystems. G. W. Stewart H1HJ1L. University of Maryland College Park, Maryland Matrix Algorithms Volume II: Eigensystems G. W. Stewart University of Maryland College Park, Maryland H1HJ1L Society for Industrial and Applied Mathematics Philadelphia CONTENTS Algorithms Preface xv xvii

More information

Numerical Methods I Non-Square and Sparse Linear Systems

Numerical Methods I Non-Square and Sparse Linear Systems Numerical Methods I Non-Square and Sparse Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 25th, 2014 A. Donev (Courant

More information

Alternative correction equations in the Jacobi-Davidson method

Alternative correction equations in the Jacobi-Davidson method Chapter 2 Alternative correction equations in the Jacobi-Davidson method Menno Genseberger and Gerard Sleijpen Abstract The correction equation in the Jacobi-Davidson method is effective in a subspace

More information

Index. for generalized eigenvalue problem, butterfly form, 211

Index. for generalized eigenvalue problem, butterfly form, 211 Index ad hoc shifts, 165 aggressive early deflation, 205 207 algebraic multiplicity, 35 algebraic Riccati equation, 100 Arnoldi process, 372 block, 418 Hamiltonian skew symmetric, 420 implicitly restarted,

More information

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection Eigenvalue Problems Last Time Social Network Graphs Betweenness Girvan-Newman Algorithm Graph Laplacian Spectral Bisection λ 2, w 2 Today Small deviation into eigenvalue problems Formulation Standard eigenvalue

More information

SSOR PRECONDITIONED INNER-OUTER FLEXIBLE GMRES METHOD FOR MLFMM ANALYSIS OF SCAT- TERING OF OPEN OBJECTS

SSOR PRECONDITIONED INNER-OUTER FLEXIBLE GMRES METHOD FOR MLFMM ANALYSIS OF SCAT- TERING OF OPEN OBJECTS Progress In Electromagnetics Research, PIER 89, 339 357, 29 SSOR PRECONDITIONED INNER-OUTER FLEXIBLE GMRES METHOD FOR MLFMM ANALYSIS OF SCAT- TERING OF OPEN OBJECTS D. Z. Ding, R. S. Chen, and Z. H. Fan

More information

Preconditioned inverse iteration and shift-invert Arnoldi method

Preconditioned inverse iteration and shift-invert Arnoldi method Preconditioned inverse iteration and shift-invert Arnoldi method Melina Freitag Department of Mathematical Sciences University of Bath CSC Seminar Max-Planck-Institute for Dynamics of Complex Technical

More information

Block Bidiagonal Decomposition and Least Squares Problems

Block Bidiagonal Decomposition and Least Squares Problems Block Bidiagonal Decomposition and Least Squares Problems Åke Björck Department of Mathematics Linköping University Perspectives in Numerical Analysis, Helsinki, May 27 29, 2008 Outline Bidiagonal Decomposition

More information

Simple iteration procedure

Simple iteration procedure Simple iteration procedure Solve Known approximate solution Preconditionning: Jacobi Gauss-Seidel Lower triangle residue use of pre-conditionner correction residue use of pre-conditionner Convergence Spectral

More information

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated.

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated. Math 504, Homework 5 Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated 1 Find the eigenvalues and the associated eigenspaces

More information

SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS. Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA

SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS. Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA 1 SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA 2 OUTLINE Sparse matrix storage format Basic factorization

More information

Error Bounds for Iterative Refinement in Three Precisions

Error Bounds for Iterative Refinement in Three Precisions Error Bounds for Iterative Refinement in Three Precisions Erin C. Carson, New York University Nicholas J. Higham, University of Manchester SIAM Annual Meeting Portland, Oregon July 13, 018 Hardware Support

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information

Topics. The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems

Topics. The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems Topics The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems What about non-spd systems? Methods requiring small history Methods requiring large history Summary of solvers 1 / 52 Conjugate

More information

A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations

A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations Jin Yun Yuan Plamen Y. Yalamov Abstract A method is presented to make a given matrix strictly diagonally dominant

More information

On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes

On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes Jurjen Duintjer Tebbens Institute of Computer Science Academy of Sciences of the Czech Republic joint work with Gérard

More information

Hybrid Cross Approximation for the Electric Field Integral Equation

Hybrid Cross Approximation for the Electric Field Integral Equation Progress In Electromagnetics Research M, Vol. 75, 79 90, 2018 Hybrid Cross Approximation for the Electric Field Integral Equation Priscillia Daquin, Ronan Perrussel, and Jean-René Poirier * Abstract The

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 2: Direct Methods PD Dr.

More information

c 2008 Society for Industrial and Applied Mathematics

c 2008 Society for Industrial and Applied Mathematics SIAM J. MATRIX ANAL. APPL. Vol. 30, No. 4, pp. 1483 1499 c 2008 Society for Industrial and Applied Mathematics HOW TO MAKE SIMPLER GMRES AND GCR MORE STABLE PAVEL JIRÁNEK, MIROSLAV ROZLOŽNÍK, AND MARTIN

More information

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit II: Numerical Linear Algebra Lecturer: Dr. David Knezevic Unit II: Numerical Linear Algebra Chapter II.3: QR Factorization, SVD 2 / 66 QR Factorization 3 / 66 QR Factorization

More information

Conjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294)

Conjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294) Conjugate gradient method Descent method Hestenes, Stiefel 1952 For A N N SPD In exact arithmetic, solves in N steps In real arithmetic No guaranteed stopping Often converges in many fewer than N steps

More information

Charles University Faculty of Mathematics and Physics DOCTORAL THESIS. Krylov subspace approximations in linear algebraic problems

Charles University Faculty of Mathematics and Physics DOCTORAL THESIS. Krylov subspace approximations in linear algebraic problems Charles University Faculty of Mathematics and Physics DOCTORAL THESIS Iveta Hnětynková Krylov subspace approximations in linear algebraic problems Department of Numerical Mathematics Supervisor: Doc. RNDr.

More information

Key words. conjugate gradients, normwise backward error, incremental norm estimation.

Key words. conjugate gradients, normwise backward error, incremental norm estimation. Proceedings of ALGORITMY 2016 pp. 323 332 ON ERROR ESTIMATION IN THE CONJUGATE GRADIENT METHOD: NORMWISE BACKWARD ERROR PETR TICHÝ Abstract. Using an idea of Duff and Vömel [BIT, 42 (2002), pp. 300 322

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

Numerical behavior of inexact linear solvers

Numerical behavior of inexact linear solvers Numerical behavior of inexact linear solvers Miro Rozložník joint results with Zhong-zhi Bai and Pavel Jiránek Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic The fourth

More information

FEM and sparse linear system solving

FEM and sparse linear system solving FEM & sparse linear system solving, Lecture 9, Nov 19, 2017 1/36 Lecture 9, Nov 17, 2017: Krylov space methods http://people.inf.ethz.ch/arbenz/fem17 Peter Arbenz Computer Science Department, ETH Zürich

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 9

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 9 STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 9 1. qr and complete orthogonal factorization poor man s svd can solve many problems on the svd list using either of these factorizations but they

More information

In order to solve the linear system KL M N when K is nonsymmetric, we can solve the equivalent system

In order to solve the linear system KL M N when K is nonsymmetric, we can solve the equivalent system !"#$% "&!#' (%)!#" *# %)%(! #! %)!#" +, %"!"#$ %*&%! $#&*! *# %)%! -. -/ 0 -. 12 "**3! * $!#%+,!2!#% 44" #% &#33 # 4"!#" "%! "5"#!!#6 -. - #% " 7% "3#!#3! - + 87&2! * $!#% 44" ) 3( $! # % %#!!#%+ 9332!

More information

Course Notes: Week 1

Course Notes: Week 1 Course Notes: Week 1 Math 270C: Applied Numerical Linear Algebra 1 Lecture 1: Introduction (3/28/11) We will focus on iterative methods for solving linear systems of equations (and some discussion of eigenvalues

More information

Summary of Iterative Methods for Non-symmetric Linear Equations That Are Related to the Conjugate Gradient (CG) Method

Summary of Iterative Methods for Non-symmetric Linear Equations That Are Related to the Conjugate Gradient (CG) Method Summary of Iterative Methods for Non-symmetric Linear Equations That Are Related to the Conjugate Gradient (CG) Method Leslie Foster 11-5-2012 We will discuss the FOM (full orthogonalization method), CG,

More information

Index. Copyright (c)2007 The Society for Industrial and Applied Mathematics From: Matrix Methods in Data Mining and Pattern Recgonition By: Lars Elden

Index. Copyright (c)2007 The Society for Industrial and Applied Mathematics From: Matrix Methods in Data Mining and Pattern Recgonition By: Lars Elden Index 1-norm, 15 matrix, 17 vector, 15 2-norm, 15, 59 matrix, 17 vector, 15 3-mode array, 91 absolute error, 15 adjacency matrix, 158 Aitken extrapolation, 157 algebra, multi-linear, 91 all-orthogonality,

More information

PRECONDITIONING IN THE PARALLEL BLOCK-JACOBI SVD ALGORITHM

PRECONDITIONING IN THE PARALLEL BLOCK-JACOBI SVD ALGORITHM Proceedings of ALGORITMY 25 pp. 22 211 PRECONDITIONING IN THE PARALLEL BLOCK-JACOBI SVD ALGORITHM GABRIEL OKŠA AND MARIÁN VAJTERŠIC Abstract. One way, how to speed up the computation of the singular value

More information

M.A. Botchev. September 5, 2014

M.A. Botchev. September 5, 2014 Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014 A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods. Plan and exercises. M.A. Botchev

More information

Performance Evaluation of Some Inverse Iteration Algorithms on PowerXCell T M 8i Processor

Performance Evaluation of Some Inverse Iteration Algorithms on PowerXCell T M 8i Processor Performance Evaluation of Some Inverse Iteration Algorithms on PowerXCell T M 8i Processor Masami Takata 1, Hiroyuki Ishigami 2, Kini Kimura 2, and Yoshimasa Nakamura 2 1 Academic Group of Information

More information

Algorithms that use the Arnoldi Basis

Algorithms that use the Arnoldi Basis AMSC 600 /CMSC 760 Advanced Linear Numerical Analysis Fall 2007 Arnoldi Methods Dianne P. O Leary c 2006, 2007 Algorithms that use the Arnoldi Basis Reference: Chapter 6 of Saad The Arnoldi Basis How to

More information

Communication-avoiding Krylov subspace methods

Communication-avoiding Krylov subspace methods Motivation Communication-avoiding Krylov subspace methods Mark mhoemmen@cs.berkeley.edu University of California Berkeley EECS MS Numerical Libraries Group visit: 28 April 2008 Overview Motivation Current

More information

Linear Solvers. Andrew Hazel

Linear Solvers. Andrew Hazel Linear Solvers Andrew Hazel Introduction Thus far we have talked about the formulation and discretisation of physical problems...... and stopped when we got to a discrete linear system of equations. Introduction

More information

THE solution to electromagnetic wave interaction with material

THE solution to electromagnetic wave interaction with material IEEE TRANSACTIONS ON ANTENNAS AND PROPAGATION, VOL. 52, NO. 9, SEPTEMBER 2004 2277 Sparse Inverse Preconditioning of Multilevel Fast Multipole Algorithm for Hybrid Integral Equations in Electromagnetics

More information

Math 411 Preliminaries

Math 411 Preliminaries Math 411 Preliminaries Provide a list of preliminary vocabulary and concepts Preliminary Basic Netwon s method, Taylor series expansion (for single and multiple variables), Eigenvalue, Eigenvector, Vector

More information

5.3 The Power Method Approximation of the Eigenvalue of Largest Module

5.3 The Power Method Approximation of the Eigenvalue of Largest Module 192 5 Approximation of Eigenvalues and Eigenvectors 5.3 The Power Method The power method is very good at approximating the extremal eigenvalues of the matrix, that is, the eigenvalues having largest and

More information

A stable variant of Simpler GMRES and GCR

A stable variant of Simpler GMRES and GCR A stable variant of Simpler GMRES and GCR Miroslav Rozložník joint work with Pavel Jiránek and Martin H. Gutknecht Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic miro@cs.cas.cz,

More information

The QR Factorization

The QR Factorization The QR Factorization How to Make Matrices Nicer Radu Trîmbiţaş Babeş-Bolyai University March 11, 2009 Radu Trîmbiţaş ( Babeş-Bolyai University) The QR Factorization March 11, 2009 1 / 25 Projectors A projector

More information

Alternative correction equations in the Jacobi-Davidson method. Mathematical Institute. Menno Genseberger and Gerard L. G.

Alternative correction equations in the Jacobi-Davidson method. Mathematical Institute. Menno Genseberger and Gerard L. G. Universiteit-Utrecht * Mathematical Institute Alternative correction equations in the Jacobi-Davidson method by Menno Genseberger and Gerard L. G. Sleijpen Preprint No. 173 June 1998, revised: March 1999

More information

Introduction. Chapter One

Introduction. Chapter One Chapter One Introduction The aim of this book is to describe and explain the beautiful mathematical relationships between matrices, moments, orthogonal polynomials, quadrature rules and the Lanczos and

More information

Sensitivity of Gauss-Christoffel quadrature and sensitivity of Jacobi matrices to small changes of spectral data

Sensitivity of Gauss-Christoffel quadrature and sensitivity of Jacobi matrices to small changes of spectral data Sensitivity of Gauss-Christoffel quadrature and sensitivity of Jacobi matrices to small changes of spectral data Zdeněk Strakoš Academy of Sciences and Charles University, Prague http://www.cs.cas.cz/

More information

Sparse symmetric preconditioners for dense linear systems in electromagnetism

Sparse symmetric preconditioners for dense linear systems in electromagnetism Sparse symmetric preconditioners for dense linear systems in electromagnetism B. Carpentieri I.S. Duff L. Giraud M. Magolu monga Made CERFACS Technical Report TR/PA//35 Abstract We consider symmetric preconditioning

More information

Moments, Model Reduction and Nonlinearity in Solving Linear Algebraic Problems

Moments, Model Reduction and Nonlinearity in Solving Linear Algebraic Problems Moments, Model Reduction and Nonlinearity in Solving Linear Algebraic Problems Zdeněk Strakoš Charles University, Prague http://www.karlin.mff.cuni.cz/ strakos 16th ILAS Meeting, Pisa, June 2010. Thanks

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT 14-1 Nested Krylov methods for shifted linear systems M. Baumann and M. B. van Gizen ISSN 1389-652 Reports of the Delft Institute of Applied Mathematics Delft 214

More information

Krylov subspace projection methods

Krylov subspace projection methods I.1.(a) Krylov subspace projection methods Orthogonal projection technique : framework Let A be an n n complex matrix and K be an m-dimensional subspace of C n. An orthogonal projection technique seeks

More information

Sparse BLAS-3 Reduction

Sparse BLAS-3 Reduction Sparse BLAS-3 Reduction to Banded Upper Triangular (Spar3Bnd) Gary Howell, HPC/OIT NC State University gary howell@ncsu.edu Sparse BLAS-3 Reduction p.1/27 Acknowledgements James Demmel, Gene Golub, Franc

More information

Linear Independence. Stephen Boyd. EE103 Stanford University. October 9, 2017

Linear Independence. Stephen Boyd. EE103 Stanford University. October 9, 2017 Linear Independence Stephen Boyd EE103 Stanford University October 9, 2017 Outline Linear independence Basis Orthonormal vectors Gram-Schmidt algorithm Linear independence 2 Linear dependence set of n-vectors

More information

Lecture 8: Fast Linear Solvers (Part 7)
