Preconditioned GMRES Revisited


1 Preconditioned GMRES Revisited
Roland Herzog (UBC, visiting), Kirk Soodhalter (RICAM Linz)
Preconditioning Conference 2017, Vancouver, August 01, 2017

2 Table of Contents
1. GMRES from First Principles (and a Hilbert Space Perspective)
2. How Does it Compare With...?
3. Conclusions

3 What is GMRES?
Quoting Saad and Schultz (1986): "We present an iterative method for solving linear systems, which has the property of minimizing at every step the norm of the residual vector over a Krylov subspace. The algorithm is derived from the Arnoldi process for constructing an $\ell_2$-orthogonal basis of Krylov subspaces."
Quoting Wikipedia: "In mathematics, the generalized minimal residual method (usually abbreviated GMRES) is an iterative method for the numerical solution of a nonsymmetric system of linear equations. The method approximates the solution by the vector in a Krylov subspace with minimal residual. The Arnoldi iteration is used to find this vector."

6 Some References (Very Incomplete)
Contributing to the understanding of GMRES in various situations: Starke (1997); Chan, Chow, Saad and Yeung (1998); Chen, Kincaid and Young (1999); Klawonn (1998); Ernst (2000); Eiermann and Ernst (2001); Sarkis and Szyld (2007); Pestana and Wathen (2013).
And in Hilbert space (but with $A \in L(X)$) in particular: Campbell, Ipsen, Kelley and Meyer (1996); Moret (1997); Calvetti, Lewis and Reichel (2002); Gasparo, Papini and Pasquali (2008).

7 Purpose/Plan of This Talk
Re-derive GMRES with natural algorithmic ingredients; draw inspiration from a Hilbert space setting (PDE problems); obtain 'canonical GMRES'; locate GMRES variants from the literature in this framework, to sort out my own personal lack of knowledge/confusion about these variants.
Hilbert space analysis matters (recall the talk by W. Zulehner, for instance), even though in this talk everything is finite-dimensional.

13 Problem Setting
Left column: $Ax = b$ in $\mathbb{R}^n$, $A \in \mathbb{R}^{n \times n}$ (non-singular). $\mathbb{R}^n$ is also a Hilbert space with inner product $(u, v)_M = u^\top M v$.
Right column: $Ax = b$ in $X'$, $A \in L(X, X')$ (non-singular). $X$ is a Hilbert space with inner product $(u, v)_M = \langle u, Mv \rangle_{X, X'}$, where $M \in L(X, X')$ is the Riesz isomorphism.
(Primal) quantities in $X$: iterates, errors. (Dual) quantities in $X'$: right-hand side, residuals.
The inner product $(\cdot\,,\cdot)_M$ in $X$, the inner product $(\cdot\,,\cdot)_{M^{-1}}$ in $X'$, the Riesz map $M \in L(X, X')$, and its inverse $M^{-1} \in L(X', X)$ uniquely define each other.
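To make the left column concrete, here is a minimal numpy sketch (all names illustrative, not from the talk): an spd matrix $M$ plays the role of the Riesz map, $(u, v)_M = u^\top M v$ is the inner product on $X = \mathbb{R}^n$, and $M^{-1}$ carries dual quantities back to primal ones.

import numpy as np

rng = np.random.default_rng(0)
n = 5
G = rng.standard_normal((n, n))
M = G @ G.T + n * np.eye(n)            # spd: inner product and Riesz map

u, v = rng.standard_normal(n), rng.standard_normal(n)
f, g = M @ u, M @ v                    # Riesz images of u, v in X' = R^n

# (u, v)_M on X equals (f, g)_{M^{-1}} on X'
assert np.isclose(u @ M @ v, f @ np.linalg.solve(M, g))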

14 Example: Stokes (Saddle-Point Problem)
$\begin{bmatrix} A & B' \\ B & 0 \end{bmatrix} \begin{pmatrix} u \\ p \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix} \in \begin{pmatrix} V' \\ Q' \end{pmatrix}$, with $V = H_0^1(\Omega; \mathbb{R}^3)$, $Q = L_0^2(\Omega)$:
$\underbrace{\int_\Omega \mu\, \nabla u : \nabla v \, dx}_{a(u,v)} \; \underbrace{- \int_\Omega p \, \mathrm{div}\, v \, dx}_{b(v,p)} = \underbrace{\int_\Omega F \cdot v \, dx}_{\langle f, v \rangle}, \qquad \underbrace{- \int_\Omega q \, \mathrm{div}\, u \, dx}_{b(u,q)} = 0,$
with $Au = a(u, \cdot) \in V'$, $Bu = b(u, \cdot) \in Q'$, $B'p = b(\cdot, p) \in V'$.
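Purely as an illustration of the block structure (not the talk's code): assuming the blocks A (viscous term, spd) and B (divergence) have been assembled by some discretization, the saddle-point operator can be formed with scipy.sparse.bmat; random placeholders stand in for the FEM matrices here.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(1)
nu, nq = 8, 3                                     # dim V_h, dim Q_h
G = rng.standard_normal((nu, nu))
A = sp.csc_matrix(G @ G.T + nu * np.eye(nu))      # placeholder for a(u, v)
B = sp.csc_matrix(rng.standard_normal((nq, nu)))  # placeholder for b(u, q)

K = sp.bmat([[A, B.T], [B, None]], format="csc")  # the saddle-point operator
rhs = np.concatenate([rng.standard_normal(nu), np.zeros(nq)])  # (f, 0)

x = spsolve(K, rhs)                               # direct solve, for reference
print(np.linalg.norm(K @ x - rhs))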

15 Example: Oseen (Saddle-Point Problem)
$\begin{bmatrix} A + N & B' \\ B & 0 \end{bmatrix} \begin{pmatrix} u \\ p \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix} \in \begin{pmatrix} V' \\ Q' \end{pmatrix}$, with $V = H_0^1(\Omega; \mathbb{R}^3)$, $Q = L_0^2(\Omega)$:
$\underbrace{\int_\Omega \mu\, \nabla u : \nabla v \, dx}_{a(u,v)} + \underbrace{\int_\Omega (\nabla u \, U) \cdot v \, dx}_{n(u,v)} \; \underbrace{- \int_\Omega p \, \mathrm{div}\, v \, dx}_{b(v,p)} = \underbrace{\int_\Omega F \cdot v \, dx}_{\langle f, v \rangle}, \qquad \underbrace{- \int_\Omega q \, \mathrm{div}\, u \, dx}_{b(u,q)} = 0,$
with $Au = a(u, \cdot) \in V'$, $Nu = n(u, \cdot) \in V'$, $Bu = b(u, \cdot) \in Q'$, $B'p = b(\cdot, p) \in V'$.

16 Ingredients of 'Canonical GMRES'
The residual is always going to be $r := b - Ax \in X'$.
1. Krylov subspaces: due to $A \in L(X, X')$ we cannot form $K_k(A; r) := \mathrm{span}\{r, Ar, A^2 r, \ldots, A^{k-1} r\}$.

19 Ingredients of 'Canonical GMRES'
The residual is always going to be $r := b - Ax \in X'$.
1. Krylov subspaces: due to $A \in L(X, X')$ we have to use $K_k(AP^{-1}; r) := \mathrm{span}\{r, AP^{-1} r, (AP^{-1})^2 r, \ldots, (AP^{-1})^{k-1} r\} \subset X'$, where $P \in L(X, X')$ non-singular is a 'preconditioner' (required!).
2. Inner product $(\cdot\,,\cdot)_{W^{-1}}$ in $X'$ for Arnoldi: since $K_k(AP^{-1}; r) \subset X'$ holds, orthonormality is defined w.r.t. $W^{-1}$.
3. Inner product $(\cdot\,,\cdot)_{M^{-1}}$ in $X'$ for residual minimization: since $r \in X'$ holds, $\|r\|_{M^{-1}}$ will be the quantity minimized.

20 The Arnoldi Process
The Arnoldi process (here displayed with modified GS) generates an orthonormal basis of the Krylov subspace $K_k(A; r_0)$:
Input: matrix $A$ (or matrix-vector products), initial vector $r_0$
Output: ONB $v_1, v_2, \ldots$ with $\mathrm{span}\{v_1, \ldots, v_k\} = K_k(A; r_0)$
1: Set $v_1 := r_0 / (r_0, r_0)^{1/2}$
2: for $k = 1, 2, \ldots$ do
3:   Set $v_{k+1} := A v_k$  // new Krylov vector
4:   for $j = 1, 2, \ldots, k$ do
5:     Set $h_{j,k} := (v_{k+1}, v_j)$  // orthogonalization coefficients
6:     Set $v_{k+1} := v_{k+1} - h_{j,k} v_j$
7:   end for
8:   Set $h_{k+1,k} := (v_{k+1}, v_{k+1})^{1/2}$
9:   Set $v_{k+1} := v_{k+1} / h_{k+1,k}$
10: end for

27 The Arnoldi Process
The Arnoldi process (here displayed with modified GS) generates a $W^{-1}$-orthonormal basis of the Krylov subspace $K_k(AP^{-1}; r_0)$:
Input: $A$, $P^{-1}$, $W^{-1}$ (or matrix-vector products), initial vector $r_0 \in X'$
Output: $W^{-1}$-ONB $v_1, v_2, \ldots \in X'$ with $\mathrm{span}\{v_1, \ldots, v_k\} = K_k(AP^{-1}; r_0)$
1: Set $z_1 := W^{-1} r_0$ and $v_1 := r_0 / \langle r_0, z_1 \rangle^{1/2}$ and $z_1 := z_1 / \langle r_0, z_1 \rangle^{1/2}$
2: for $k = 1, 2, \ldots$ do
3:   Set $v_{k+1} := A P^{-1} v_k$  // new Krylov vector
4:   for $j = 1, 2, \ldots, k$ do
5:     Set $h_{j,k} := \langle v_{k+1}, z_j \rangle$  // orthogonalization coefficients
6:     Set $v_{k+1} := v_{k+1} - h_{j,k} v_j$
7:   end for
8:   Set $z_{k+1} := W^{-1} v_{k+1}$ and $h_{k+1,k} := \langle v_{k+1}, z_{k+1} \rangle^{1/2}$  // $= (v_{k+1}, v_{k+1})_{W^{-1}}^{1/2}$
9:   Set $v_{k+1} := v_{k+1} / h_{k+1,k}$ and $z_{k+1} := z_{k+1} / h_{k+1,k}$
10: end for
Both $v_j$ and $z_j = W^{-1} v_j$ need to be stored due to MGS.
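A minimal dense-matrix transcription of this algorithm (illustrative sketch only; the function name arnoldi_weighted is not from the talk): the basis vectors $v_j$ live in $X'$, the companions $z_j = W^{-1} v_j$ in $X$, and $(v_i, v_j)_{W^{-1}} = \langle v_i, z_j \rangle$.

import numpy as np

def arnoldi_weighted(A, Pinv, Winv, r0, k):
    n = r0.size
    V = np.zeros((n, k + 1))           # v_1, ..., v_{k+1} (in X')
    Z = np.zeros((n, k + 1))           # z_j = W^{-1} v_j  (in X)
    H = np.zeros((k + 1, k))
    z = Winv @ r0
    beta = np.sqrt(r0 @ z)             # ||r_0||_{W^{-1}}
    V[:, 0], Z[:, 0] = r0 / beta, z / beta
    for j in range(k):
        v = A @ (Pinv @ V[:, j])       # new Krylov vector A P^{-1} v_j
        for i in range(j + 1):         # modified Gram-Schmidt
            H[i, j] = v @ Z[:, i]
            v = v - H[i, j] * V[:, i]
        z = Winv @ v
        H[j + 1, j] = np.sqrt(v @ z)
        V[:, j + 1], Z[:, j + 1] = v / H[j + 1, j], z / H[j + 1, j]
    return V, Z, H

rng = np.random.default_rng(2)
n, k = 30, 5
A = rng.standard_normal((n, n)) + n * np.eye(n)
G = rng.standard_normal((n, n))
W = G @ G.T + n * np.eye(n)            # spd weight, stands in for the Riesz-like W
V, Z, H = arnoldi_weighted(A, np.eye(n), np.linalg.inv(W), rng.standard_normal(n), k)
assert np.allclose(V.T @ Z, np.eye(k + 1))  # W^{-1}-orthonormal basis
assert np.allclose(A @ V[:, :k], V @ H)     # Arnoldi relation A P^{-1} V_k = V_{k+1} H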

31 The Arnoldi Process
In matrix form the Arnoldi process reads, column by column, $h_{k+1,k} v_{k+1} = A P^{-1} v_k - \sum_{j=1}^{k} h_{j,k} v_j$, or
$A P^{-1} V_k = V_{k+1} \underline{H},$
where $\underline{H} \in \mathbb{R}^{(k+1) \times k}$ is the upper Hessenberg matrix with entries $h_{i,j} = (v_i, AP^{-1} v_j)_{W^{-1}}$ for $i \le j + 1$ and $h_{i,j} = 0$ otherwise.

34 Residual Norm Minimization
By design, $x_k \in x_0 + P^{-1} K_k(AP^{-1}; r_0)$ and therefore $r_k \in r_0 + AP^{-1} K_k(AP^{-1}; r_0) = r_0 + \mathrm{range}\, AP^{-1} V_k$. Hence
$\|r_k\|_{M^{-1}}^2 = \min_{y \in \mathbb{R}^k} \|r_0 - AP^{-1} V_k y\|_{M^{-1}}^2$
$= \min_y \|r_0 - V_{k+1} \underline{H} y\|_{M^{-1}}^2$  (due to $AP^{-1} V_k = V_{k+1} \underline{H}$)
$= \min_y \big\| \|r_0\|_{M^{-1}} v_1 - V_{k+1} \underline{H} y \big\|_{M^{-1}}^2$  (since $v_1 = r_0 / \|r_0\|_{M^{-1}}$)
$= \min_y \big\| V_{k+1} \big( \|r_0\|_{M^{-1}} e_1 - \underline{H} y \big) \big\|_{M^{-1}}^2$
$= \min_y \big\| \|r_0\|_{M^{-1}} e_1 - \underline{H} y \big\|_2^2$  by $M^{-1}$-orthonormality.
The Arnoldi inner product $W^{-1}$ is mathematically irrelevant and can be chosen for algorithmic convenience. Here we are led to choose $W := M$, which from now on we assume.
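The reduction to a small least-squares problem takes only a few lines of numpy; here is a sketch in the Euclidean special case $W = M = \mathrm{id}$, $P = \mathrm{id}$, where the $n$-dimensional minimization collapses to $\min_y \|\beta e_1 - \underline{H} y\|_2$:

import numpy as np

rng = np.random.default_rng(3)
n, k = 40, 6
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)
x0 = np.zeros(n)
r0 = b - A @ x0
beta = np.linalg.norm(r0)

# plain Arnoldi (modified Gram-Schmidt), W = M = id, P = id
V = np.zeros((n, k + 1)); H = np.zeros((k + 1, k))
V[:, 0] = r0 / beta
for j in range(k):
    v = A @ V[:, j]
    for i in range(j + 1):
        H[i, j] = v @ V[:, i]
        v = v - H[i, j] * V[:, i]
    H[j + 1, j] = np.linalg.norm(v)
    V[:, j + 1] = v / H[j + 1, j]

# small least-squares problem: min_y || beta e_1 - H y ||_2
e1 = np.zeros(k + 1); e1[0] = beta
y, *_ = np.linalg.lstsq(H, e1, rcond=None)
x = x0 + V[:, :k] @ y                  # the GMRES iterate x_k

print(np.linalg.norm(b - A @ x), np.linalg.norm(e1 - H @ y))  # identical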

36 Short Recurrences
Short recurrences occur when the square part $H_{k,k} \in \mathbb{R}^{k \times k}$ of the Hessenberg matrix, with entries $h_{i,j} = (v_i, AP^{-1} v_j)_{M^{-1}}$, is symmetric (hence tridiagonal) for all $k$, i.e., when
$M^{-1} A P^{-1} = P^{-*} A^* M^{-1}.$  (SRC)
This means that $AP^{-1} \in L(X')$ is $(X', M^{-1})$-Hilbert space self-adjoint.
Should we call the method MINRES then?

39 (SRC) and the Normal Equations
We are going to refer to GMRES as just described by GMRES$(A, P^{-1}, M^{-1}, b, x_0)$. Recall (SRC): $M^{-1} A P^{-1} = P^{-*} A^* M^{-1}$.
Given $A$ and $M$, (SRC) always holds for the choice $P^{-1} := H^{-1} A^* M^{-1}$, where $H = H^*$ is spd.
This is mathematically equivalent to running GMRES on the normal equations (modified problem) $A^* M^{-1} A x = A^* M^{-1} b$ with preconditioner $H$. In other words,
GMRES$(A, H^{-1} A^* M^{-1}, M^{-1}, b, x_0)$ $\equiv$ GMRES$(A^* M^{-1} A, H^{-1}, A^{-1} M A^{-*}, A^* M^{-1} b, x_0)$ $\equiv$ CG$(A^* M^{-1} A, H^{-1}, A^* M^{-1} b, x_0)$.
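A quick numerical sanity check of the first claim (random dense matrices, illustrative only): with $H$ spd, the operator $M^{-1} A P^{-1} = M^{-1} A H^{-1} A^* M^{-1}$ is symmetric by construction, so (SRC) holds for any non-singular $A$.

import numpy as np

rng = np.random.default_rng(4)
n = 6
A = rng.standard_normal((n, n))

def random_spd(n):
    G = rng.standard_normal((n, n))
    return G @ G.T + n * np.eye(n)

M, H = random_spd(n), random_spd(n)
Minv = np.linalg.inv(M)
Pinv = np.linalg.inv(H) @ A.T @ Minv   # the choice P^{-1} = H^{-1} A* M^{-1}

S = Minv @ A @ Pinv                    # = M^{-1} A H^{-1} A^T M^{-1}
assert np.allclose(S, S.T)             # (SRC) holds by construction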

42 Convergence Analysis
In the absence of any particular properties of $(A, P, M)$ we can use the Jordan decomposition (with $U, J \in \mathbb{C}^{n \times n}$)
$M^{-1/2} A P^{-1} M^{1/2} = U J U^{-1}.$
Then for any polynomial $p$,
$p(AP^{-1}) = M^{1/2}\, U\, p(J)\, U^{-1} M^{-1/2}$
holds and thus
$\|p(AP^{-1})\, r_0\|_{M^{-1}}^2 = \|U\, p(J)\, U^{-1} M^{-1/2} r_0\|_2^2 \le \sigma_{\max}^2\big(U\, p(J)\, U^{-1}\big)\, \|r_0\|_{M^{-1}}^2 = \lambda_{\max}\big(p(AP^{-1})^* M^{-1} p(AP^{-1});\ M^{-1}\big)\, \|r_0\|_{M^{-1}}^2.$
Alternatively, using the Schur decomposition $M^{-1/2} A P^{-1} M^{1/2} = U R U^H$ (with $U, R \in \mathbb{C}^{n \times n}$, $U$ unitary) we arrive at
$\|p(AP^{-1})\, r_0\|_{M^{-1}}^2 \le \sigma_{\max}^2\big(p(R)\big)\, \|r_0\|_{M^{-1}}^2 = \lambda_{\max}\big(p(AP^{-1})^* M^{-1} p(AP^{-1});\ M^{-1}\big)\, \|r_0\|_{M^{-1}}^2.$
This convergence estimate depends on the complete triplet $(A, P, M)$.

45 Normality Condition
For $AP^{-1} \in L(X')$, $(AP^{-1})^* \in L(X)$ is its adjoint and $(AP^{-1})^\star = M (AP^{-1})^* M^{-1}$ is its Hilbert space adjoint w.r.t. $(X', M^{-1})$.
We have already encountered the (Hilbert space) self-adjointness condition $AP^{-1} = (AP^{-1})^\star$, responsible for short recurrences:
$M^{-1} A P^{-1} = P^{-*} A^* M^{-1}.$  (SRC)
The normality condition $AP^{-1} (AP^{-1})^\star = (AP^{-1})^\star AP^{-1}$ reads
$AP^{-1}\, M (AP^{-1})^* M^{-1} = M (AP^{-1})^* M^{-1}\, AP^{-1}$  (NC)
or $B B^H = B^H B$ with $B = M^{-1/2} A P^{-1} M^{1/2}$.
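A small numerical illustration (a constructed example, not from the talk): choosing $AP^{-1} = M^{1/2} Q M^{-1/2}$ with $Q$ orthogonal makes $B$ normal, and (NC) can then be verified directly in the stated form.

import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(5)
n = 6
G = rng.standard_normal((n, n))
M = G @ G.T + n * np.eye(n)                       # spd
Mh = np.real(sqrtm(M))                            # M^{1/2}
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # orthogonal, hence normal
T = Mh @ Q @ np.linalg.inv(Mh)                    # plays the role of A P^{-1}

Minv = np.linalg.inv(M)
Tstar = M @ T.T @ Minv                            # Hilbert space adjoint w.r.t. M^{-1}
assert np.allclose(T @ Tstar, Tstar @ T)          # (NC) holds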

47 Convergence Analysis Under Normality
Suppose that (NC) holds. Then with $U, D \in \mathbb{C}^{n \times n}$ unitary/diagonal, $B = M^{-1/2} A P^{-1} M^{1/2} = U D U^H$, and for any polynomial $p$,
$p(AP^{-1}) = M^{1/2}\, U\, p(D)\, U^H M^{-1/2}$
holds and thus
$\|p(AP^{-1})\, r_0\|_{M^{-1}}^2 = \|U\, p(D)\, U^H M^{-1/2} r_0\|_2^2 \le \sigma_{\max}^2\big(U\, p(D)\, U^H\big)\, \|r_0\|_{M^{-1}}^2 = \big[\max_j |p(\lambda_j)|\big]^2\, \|r_0\|_{M^{-1}}^2,$
where $\lambda_j$ are the (complex; real under (SRC)) eigenvalues of $AP^{-1}$.
The resulting convergence estimate depends only on $(A, P)$ and is independent of the norm $\|\cdot\|_{M^{-1}}$ we choose for residual minimization.

49 Intermediate Summary
We have re-derived 'canonical' GMRES from 'first principles' and from a Hilbert space perspective, carefully distinguishing mapping properties and primal and dual quantities. Two choices arise (for the user):
1. preconditioner $P \in L(X, X')$
2. inner product $(\cdot\,,\cdot)_{M^{-1}}$ in $X'$ for residual minimization
3. inner product $(\cdot\,,\cdot)_{W^{-1}}$ in $X'$ for Arnoldi (non-essential)
For the latter we fix w.l.o.g. $W := M$ for algorithmic convenience. One application of $A$, $P^{-1}$ and $M^{-1}$ each is required per iteration.
(SRC) $\Rightarrow$ (NC) $\Rightarrow$ convergence estimate in terms of the spectrum of $AP^{-1}$.
Not discussed: solution of the least-squares problems (by updated QR factorization). Under particular choices of $(A, P, M)$ one may take advantage of additional structure to streamline the implementation. [see talk by D. Szyld for an example]

50 Table of Contents
1. GMRES from First Principles (and a Hilbert Space Perspective)
2. How Does it Compare With...?
3. Conclusions

55 Dual vs. Primal Krylov Subspaces
'Canonical GMRES' works with the (dual, residual-based) Krylov subspaces $K_k(AP^{-1}; r_0) \subset X'$ generated by $AP^{-1} \in L(X')$. It turned out convenient to fix w.l.o.g. $W := M$.
Equivalently, the algorithm can be written using the (primal) Krylov subspaces (see the next two slides)
$P^{-1} K_k(AP^{-1}; r_0) = K_k(P^{-1} A; P^{-1} r_0) \subset X.$
'Equivalently' means: same iterates $x_k$, same residuals $r_k = b - A x_k$, same residual norms $\|r_k\|_{M^{-1}}$.
Caution: do not confuse this with 'right/left preconditioning'. There is no such distinction in 'canonical GMRES'.

56 Using Primal Krylov Subspaces
The Arnoldi process generates a basis $\{u_1, \ldots, u_k\}$ of $K_k(P^{-1} A; P^{-1} r_0) \subset X$ which is orthonormal w.r.t. an inner product $W$ (mathematically irrelevant). In matrix form the Arnoldi process reads
$P^{-1} A U_k = U_{k+1} \underline{H},$
where $\underline{H} \in \mathbb{R}^{(k+1) \times k}$ is the upper Hessenberg matrix with entries $h_{i,j} = (u_i, P^{-1} A u_j)_W$ for $i \le j + 1$.
The short recursion condition becomes $W P^{-1} A = (W P^{-1} A)^*$, which appears to be different from (SRC)...

60 Using Primal Krylov Subspaces
We still want to minimize
$\|r_k\|_{M^{-1}}^2 = \min_{y \in \mathbb{R}^k} \|r_0 - A U_k y\|_{M^{-1}}^2$
$= \min_y \|P^{-1} r_0 - U_{k+1} \underline{H} y\|_{P^* M^{-1} P}^2$  (using $P^{-1} A U_k = U_{k+1} \underline{H}$)
$= \min_y \big\| \|P^{-1} r_0\|_W\, u_1 - U_{k+1} \underline{H} y \big\|_{P^* M^{-1} P}^2$  (since $u_1 = P^{-1} r_0 / \|P^{-1} r_0\|_W$)
$= \min_y \big\| U_{k+1} \big( \|P^{-1} r_0\|_W\, e_1 - \underline{H} y \big) \big\|_{P^* M^{-1} P}^2 = \min_y \big\| \|r_0\|_{M^{-1}} e_1 - \underline{H} y \big\|_2^2$  by $W$-orthonormality.
The Arnoldi inner product $W$ is mathematically irrelevant and can be chosen for algorithmic convenience. Here we are led to choose $W := P^* M^{-1} P \in L(X, X')$.
But then the implementations with primal/dual Krylov subspaces agree: $u_j = P^{-1} v_j$ holds, the Hessenberg matrices are the same, and the short recursion conditions agree. So we continue with the original version, using dual Krylov subspaces.
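The subspace identity used here, $P^{-1} K_k(AP^{-1}; r_0) = K_k(P^{-1} A; P^{-1} r_0)$, follows from $P^{-1} (AP^{-1})^j = (P^{-1} A)^j P^{-1}$ and is easy to confirm numerically (illustrative sketch):

import numpy as np

rng = np.random.default_rng(6)
n, k = 12, 4
A = rng.standard_normal((n, n)) + n * np.eye(n)
P = rng.standard_normal((n, n)) + n * np.eye(n)
Pinv = np.linalg.inv(P)
r0 = rng.standard_normal(n)

# columns of the dual Krylov space, mapped to X by P^{-1} ...
dual_mapped = np.column_stack(
    [Pinv @ (np.linalg.matrix_power(A @ Pinv, j) @ r0) for j in range(k)])
# ... equal the columns of the primal Krylov space
primal = np.column_stack(
    [np.linalg.matrix_power(Pinv @ A, j) @ (Pinv @ r0) for j in range(k)])

assert np.allclose(dual_mapped, primal)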

64 What if $A \in L(X)$?
The residual $r := b - Ax$ belongs to $X$ now.
1. Krylov subspaces: with $A \in L(X)$ we can form $K_k(A; r) \subset X$ directly; we may still want to use a preconditioner, $K_k(AP^{-1}; r) := \mathrm{span}\{r, AP^{-1} r, (AP^{-1})^2 r, \ldots, (AP^{-1})^{k-1} r\} \subset X$, but $P \in L(X)$ is optional now.
2. Inner product $(\cdot\,,\cdot)_W$ in $X$ for Arnoldi: since $K_k(AP^{-1}; r) \subset X$ holds, orthonormality is defined w.r.t. $W$.
3. Inner product $(\cdot\,,\cdot)_M$ in $X$ for residual minimization: since $r \in X$ holds, $\|r\|_M$ will be the quantity minimized.

65 The Arnoldi Process for $A \in L(X, X')$
As on slide 27: the Arnoldi process (with modified GS) generates a $W^{-1}$-orthonormal basis $v_1, v_2, \ldots \in X'$ of $K_k(AP^{-1}; r_0)$, with companion vectors $z_j = W^{-1} v_j \in X$; both need to be stored due to MGS.

66 The Arnoldi Process for $A \in L(X)$
The Arnoldi process (here displayed with modified GS) generates a $W$-orthonormal basis of the Krylov subspace $K_k(AP^{-1}; r_0)$:
Input: $A$, $P^{-1}$, $W$ (or matrix-vector products), initial vector $r_0 \in X$
Output: $W$-ONB $v_1, v_2, \ldots \in X$ with $\mathrm{span}\{v_1, \ldots, v_k\} = K_k(AP^{-1}; r_0)$
1: Set $z_1 := W r_0$ and $v_1 := r_0 / \langle r_0, z_1 \rangle^{1/2}$ and $z_1 := z_1 / \langle r_0, z_1 \rangle^{1/2}$
2: for $k = 1, 2, \ldots$ do
3:   Set $v_{k+1} := A P^{-1} v_k$  // new Krylov vector
4:   for $j = 1, 2, \ldots, k$ do
5:     Set $h_{j,k} := \langle v_{k+1}, z_j \rangle$  // orthogonalization coefficients
6:     Set $v_{k+1} := v_{k+1} - h_{j,k} v_j$
7:   end for
8:   Set $z_{k+1} := W v_{k+1}$ and $h_{k+1,k} := \langle v_{k+1}, z_{k+1} \rangle^{1/2}$  // $= (v_{k+1}, v_{k+1})_W^{1/2}$
9:   Set $v_{k+1} := v_{k+1} / h_{k+1,k}$ and $z_{k+1} := z_{k+1} / h_{k+1,k}$
10: end for
Both $v_j$ and $z_j = W v_j$ need to be stored due to MGS.

67 Residual Minimization for $A \in L(X, X')$
As on slide 34: by design, $x_k \in x_0 + P^{-1} K_k(AP^{-1}; r_0)$, and the minimization of $\|r_k\|_{M^{-1}}$ reduces, by $M^{-1}$-orthonormality, to $\min_y \big\| \|r_0\|_{M^{-1}} e_1 - \underline{H} y \big\|_2$.

68 Residual Minimization for $A \in L(X)$
By design, $x_k \in x_0 + P^{-1} K_k(AP^{-1}; r_0)$ and therefore $r_k \in r_0 + AP^{-1} K_k(AP^{-1}; r_0) = r_0 + \mathrm{range}\, AP^{-1} V_k$. Hence
$\|r_k\|_M^2 = \min_{y \in \mathbb{R}^k} \|r_0 - AP^{-1} V_k y\|_M^2$
$= \min_y \|r_0 - V_{k+1} \underline{H} y\|_M^2$  (due to $AP^{-1} V_k = V_{k+1} \underline{H}$)
$= \min_y \big\| \|r_0\|_M v_1 - V_{k+1} \underline{H} y \big\|_M^2$  (since $v_1 = r_0 / \|r_0\|_M$)
$= \min_y \big\| V_{k+1} \big( \|r_0\|_M e_1 - \underline{H} y \big) \big\|_M^2 = \min_y \big\| \|r_0\|_M e_1 - \underline{H} y \big\|_2^2$  by $M$-orthonormality.
The Arnoldi inner product $W$ is mathematically irrelevant and can be chosen for algorithmic convenience. Here we are led again to choose $W := M$, which from now on we assume.

69 Short Rec./Normality Conditions for $A \in L(X, X')$
Self-adjointness $AP^{-1} = (AP^{-1})^\star$ in $(X', M^{-1})$ is responsible for short recurrences:
$M^{-1} A P^{-1} = P^{-*} A^* M^{-1}.$  (SRC)
The normality condition $AP^{-1} (AP^{-1})^\star = (AP^{-1})^\star AP^{-1}$ reads
$AP^{-1}\, M (AP^{-1})^* M^{-1} = M (AP^{-1})^* M^{-1}\, AP^{-1}.$  (NC)

70 Short Rec./Normality Conditions for $A \in L(X)$
Self-adjointness $AP^{-1} = (AP^{-1})^\star$ in $(X, M)$ is responsible for short recurrences:
$M A P^{-1} = P^{-*} A^* M.$  (SRC)
The normality condition reads
$AP^{-1}\, M^{-1} (AP^{-1})^* M = M^{-1} (AP^{-1})^* M\, AP^{-1}.$  (NC)
The convergence analysis carries over, mutatis mutandis.

71 Overview of Canonical GMRES
$A \in L(X, X')$: preconditioner $P \in L(X, X')$; inner product $M \in L(X, X')$; Krylov subspaces $K_k(AP^{-1}; r_0) \subset X'$; Arnoldi orthonormality w.r.t. $M^{-1}$, apply $M^{-1}$; residual minimization of $\|r\|_{M^{-1}}$; in short, GMRES$(A, P^{-1}, M^{-1}, b, x_0)$.
$A \in L(X)$: preconditioner $P \in L(X)$; inner product $M \in L(X, X')$; Krylov subspaces $K_k(AP^{-1}; r_0) \subset X$; Arnoldi orthonormality w.r.t. $M$, apply $M$; residual minimization of $\|r\|_M$; in short, GMRES$(A, P^{-1}, M, b, x_0)$.
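For reference, a compact (and deliberately naive) numpy sketch of the first column, GMRES$(A, P^{-1}, M^{-1}, b, x_0)$ with $W = M$: dense operators only, no restarts, no Givens-updated QR; the function name is illustrative, not from the talk.

import numpy as np

def canonical_gmres(A, Pinv, Minv, b, x0, k):
    # k steps of GMRES(A, P^{-1}, M^{-1}, b, x0) with W = M
    r0 = b - A @ x0
    z = Minv @ r0
    beta = np.sqrt(r0 @ z)                      # ||r_0||_{M^{-1}}
    n = b.size
    V = np.zeros((n, k + 1)); Z = np.zeros((n, k + 1)); H = np.zeros((k + 1, k))
    V[:, 0], Z[:, 0] = r0 / beta, z / beta
    for j in range(k):                          # M^{-1}-weighted Arnoldi (MGS)
        v = A @ (Pinv @ V[:, j])
        for i in range(j + 1):
            H[i, j] = v @ Z[:, i]
            v = v - H[i, j] * V[:, i]
        z = Minv @ v
        H[j + 1, j] = np.sqrt(v @ z)
        V[:, j + 1], Z[:, j + 1] = v / H[j + 1, j], z / H[j + 1, j]
    e1 = np.zeros(k + 1); e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)  # min ||beta e1 - H y||_2
    x = x0 + Pinv @ (V[:, :k] @ y)              # x_k in x_0 + P^{-1} K_k
    return x, np.linalg.norm(e1 - H @ y)        # iterate and ||r_k||_{M^{-1}}

rng = np.random.default_rng(7)
n = 25
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)
G = rng.standard_normal((n, n)); M = G @ G.T + n * np.eye(n)
x, res = canonical_gmres(A, np.eye(n), np.linalg.inv(M), b, np.zeros(n), 20)
r = b - A @ x
assert np.isclose(res, np.sqrt(r @ np.linalg.solve(M, r)), atol=1e-8)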

74 GMRES in the Literature I
The very frequent 'right-preconditioned' GMRES is motivated by considering the modified problem $AP^{-1} u = b$. The Arnoldi process for $K_k(AP^{-1}; r_0)$ is typically carried out w.r.t. the Euclidean inner product and $\|r\|_2 = \|b - Ax\|_2$ is minimized. This corresponds to canonical GMRES with $A$ non-singular, $P$ non-singular, $W = M = \mathrm{id}$.
Upside: the user does not need to know whether $A \in L(X, X')$ or $A \in L(X)$ since $M = M^{-1} = \mathrm{id}$. (This also saves one operation per iteration.)
Downside: the user cannot control which norm of the residual is being minimized. This appears to be a restriction which cannot be compensated for by the preconditioner. Opportunities to satisfy (SRC) and (NC) are limited.
[see for instance (Saad, 2003, Ch. 9.3), Matlab, PETSc, ...]
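A black-box illustration with SciPy's GMRES (the exact preconditioning side and stopping rule are implementation- and version-specific; the tolerance keyword is rtol in SciPy >= 1.12, tol in older releases): the preconditioner changes the Krylov space, but the only residual the user observes and controls is the Euclidean one, i.e. the fixed choice M = id.

import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

rng = np.random.default_rng(8)
n = 100
A = np.eye(n) + rng.standard_normal((n, n)) / (2 * np.sqrt(n))
b = rng.standard_normal(n)
d = np.diag(A)
jacobi = LinearOperator((n, n), matvec=lambda v: v / d)   # a simple P^{-1}

for prec in (None, jacobi):
    x, info = gmres(A, b, M=prec, rtol=1e-10, restart=30, maxiter=300)
    # info == 0 signals convergence; the reported quantity below is the
    # Euclidean residual norm, with or without preconditioning
    print(info, np.linalg.norm(b - A @ x))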

77 GMRES in the Literature II
A small number of papers consider $Ax = b$ with $A \in L(X, X')$, where the preconditioner $P := M$ (spd) is chosen. The Arnoldi process for $K_k(AP^{-1}; r_0)$ is carried out w.r.t. the inner product $W^{-1} = M^{-1}$. This corresponds to canonical GMRES with $A$ non-singular, $P = M$, $W = M$.
Upside: $P = M$ saves one operation per iteration.
Downside: coupling the choice of residual norm and preconditioner appears to be a restriction compared to full 'canonical GMRES'. Opportunities to satisfy (SRC), which here amounts to $A = A^*$, and (NC) are limited.
[see for instance (Starke, 1997, Sect. 3), (Chan et al., 1998, Alg. 2.3), (Ernst, 2000, Ch. 9.3), ...]

80 GMRES in the Literature III
Bramble & Pasciak derived a method (BPCG), here adjusted to fit our setting, for self-adjoint, indefinite problems with $A \in L(X, X')$ in saddle-point form, with a particular preconditioner $P$ and inner product $M$:
$A = \begin{bmatrix} A_1 & B' \\ B & -A_2 \end{bmatrix}, \quad P = \begin{bmatrix} \hat{A}_1 & B' \\ 0 & -\mathrm{id} \end{bmatrix}, \quad M = \begin{bmatrix} A_1 - \hat{A}_1 & 0 \\ 0 & \mathrm{id} \end{bmatrix},$
with $A_1 \succ 0$ (spd), $A_2 \succeq 0$ (spsd). The (SRC) holds by construction. Moreover, under appropriate assumptions, the eigenvalues of $AP^{-1}$ are not only real but positive.
Upside: one obtains a CG-like convergence result.
Downside: $\|r\|_{M^{-1}} = \|e\|_{A^* M^{-1} A}$ is non-trivial to interpret.
[Bramble and Pasciak (1988), ...]
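A numerical check of this setup (signs reconstructed as above, since minus signs were lost in the transcription; random spd placeholder blocks stand in for the discretized operators):

import numpy as np

rng = np.random.default_rng(9)
nu, nq = 7, 3
G = rng.standard_normal((nu, nu))
A1 = G @ G.T + nu * np.eye(nu)        # spd
Ah1 = 0.5 * A1                        # spd approximation with A1 - Ah1 spd
B = rng.standard_normal((nq, nu))
T = rng.standard_normal((nq, nq))
A2 = T @ T.T                          # spsd

A = np.block([[A1, B.T], [B, -A2]])
P = np.block([[Ah1, B.T], [np.zeros((nq, nu)), -np.eye(nq)]])
M = np.block([[A1 - Ah1, np.zeros((nu, nq))], [np.zeros((nq, nu)), np.eye(nq)]])

S = np.linalg.solve(M, A @ np.linalg.inv(P))   # M^{-1} A P^{-1}
assert np.allclose(S, S.T)                     # (SRC) holds

lam = np.linalg.eigvals(A @ np.linalg.inv(P))  # real and positive here
print(lam.real.min() > 0, np.abs(lam.imag).max() < 1e-8)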

84 GMRES in the Literature IV
Klawonn extended the analysis of Bramble & Pasciak and employed canonical GMRES for self-adjoint, indefinite problems with $A \in L(X, X')$ in saddle-point form, with a particular preconditioner $P$ and inner product $M$:
$A = \begin{bmatrix} A_1 & B' \\ B & -t^2 A_2 \end{bmatrix}, \quad P = \begin{bmatrix} \hat{A}_1 & B' \\ 0 & -\hat{A}_2 \end{bmatrix}, \quad M = \begin{bmatrix} A_1 - \hat{A}_1 & 0 \\ 0 & \hat{A}_2 \end{bmatrix},$
with $A_1, A_2 \succ 0$ (spd). The (SRC) holds (although it does not seem to have been used in the numerical experiments). Moreover, under appropriate assumptions, the eigenvalues of $AP^{-1}$ are not only real but positive.
Upside: one obtains a CG-like convergence result, robustly in $t \in \mathbb{R}$.
Downside: $\|r\|_{M^{-1}} = \|e\|_{A^* M^{-1} A}$ is non-trivial to interpret.
[see for instance Klawonn (1998) and also Simoncini (2004), ...]

85 GMRES in the Literature V
The term 'preconditioned MINRES' seems to be reserved for the setting $A = A^*$, $P$ spd, $M = P$. Certainly the (SRC) holds in this setting:
$M^{-1} A P^{-1} = P^{-*} A^* M^{-1},$  (SRC)
but it is also valid in many other situations, with or without $A = A^*$.
[Elman et al. (2014); Günnel et al. (2014)]

87 GMRES in the Literature VI
The very frequent 'left-preconditioned' GMRES is motivated by considering the modified problem $P_L^{-1} A x = P_L^{-1} b$. The Arnoldi process for $K_k(P_L^{-1} A; P_L^{-1} r_0)$ is typically carried out w.r.t. the Euclidean inner product and $\|P_L^{-1} r\|_2 = \|b - Ax\|_{P_L^{-*} P_L^{-1}}$ is minimized. This cannot be modeled in canonical GMRES except as a modified problem: $P_L^{-1} A$ non-singular, $P = \mathrm{id}$, $W = M = \mathrm{id}$.
Upside: the user does not need to know whether $A$, $P$ belong to $L(X, X')$ or $L(X)$ since the choice $P = P^{-1} = \mathrm{id}$ and $M = M^{-1} = \mathrm{id}$ holds. Moreover, this choice reduces the cost per iteration.
Downside: the user may not be able to factorize the desired residual metric. Using neither $P$ nor $M$ appears to be a restriction which cannot be compensated for by $P_L$. Opportunities for (SRC) and (NC) are limited.
[see for instance (Saad, 2003, Ch. 9.3), Matlab, PETSc, ...]

90 GMRES in the Literature VII
Pestana & Wathen also consider a 'left-preconditioned' GMRES, $P_L^{-1} A x = P_L^{-1} b$, where $A \in L(X)$. Both $A$ and the 'left preconditioner' $P_L \in L(X)$ are assumed to be $(X, H)$-Hilbert space self-adjoint w.r.t. a reference inner product $H \in L(X, X')$: $H A = A^* H$ and $H P_L = P_L^* H$. The Arnoldi process for $K_k(P_L^{-1} A; P_L^{-1} r_0)$ is carried out w.r.t. the inner product $W = H P_L$ and $\|P_L^{-1} r\|_{H P_L} = \|b - Ax\|_{P_L^{-*} H}$ is minimized. This corresponds to canonical GMRES with $P_L^{-1} A$ non-singular, $P = \mathrm{id}$, $W = M = H P_L$.
Upside: the (SRC) holds by construction.
Downside: the requirements appear to constitute a restriction compared to full 'canonical GMRES'.
[(Pestana and Wathen, 2013, preprint version)]
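Again a small check (random matrices; positivity of $M$, needed for a genuine inner product, is not enforced here, only the algebra is verified): $H$-self-adjointness of $A$ and $P_L$ makes $M = H P_L$ symmetric and $M (P_L^{-1} A) = H A$ symmetric, i.e. the (SRC) holds.

import numpy as np

rng = np.random.default_rng(10)
n = 8
G = rng.standard_normal((n, n))
H = G @ G.T + n * np.eye(n)                      # spd reference inner product
Hinv = np.linalg.inv(H)

def h_selfadjoint():
    # generic H-self-adjoint matrix: H^{-1} times a symmetric matrix
    C = rng.standard_normal((n, n))
    return Hinv @ (C + C.T)

A, PL = h_selfadjoint(), h_selfadjoint()
assert np.allclose(H @ A, A.T @ H) and np.allclose(H @ PL, PL.T @ H)

M = H @ PL
assert np.allclose(M, M.T)                       # M is symmetric
Atilde = np.linalg.solve(PL, A)                  # P_L^{-1} A
assert np.allclose(M @ Atilde, Atilde.T @ M)     # (SRC) holds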

91 Table of Contents
1. GMRES from First Principles (and a Hilbert Space Perspective)
2. How Does it Compare With...?
3. Conclusions

93 Concluding Remarks
We have re-derived 'canonical GMRES' from a Hilbert space perspective for $Ax = b$ in $X'$ (and $Ax = b$ in $X$). It appears natural that the user
1. first chooses the inner product $(\cdot\,,\cdot)_{M^{-1}}$ in $X'$ for residual minimization,
2. then the preconditioner $P \in L(X, X')$, so that ideally (SRC) or at least (NC) holds.
If (NC) is achieved, then $P$ is responsible for fast convergence (eigenvalues of $AP^{-1}$), while $M$ is responsible for being able to observe a meaningful norm of the residual.
Don't use Matlab or PETSc implementations of GMRES unless you are determined to measure (and stop on) the residual norm $\|r\|_2$, since they make the choice $M = \mathrm{id}$ for you.
Should we say MINRES rather than GMRES when (SRC) holds?
Thank You

94 References I
Bramble, J. and Pasciak, J. (1988). A preconditioning technique for indefinite systems resulting from mixed approximations of elliptic problems. Mathematics of Computation 50: 1-17.
Calvetti, D., Lewis, B. and Reichel, L. (2002). On the regularizing properties of the GMRES method. Numerische Mathematik 91.
Campbell, S. L., Ipsen, I. C. F., Kelley, C. T. and Meyer, C. D. (1996). GMRES and the minimal polynomial. BIT Numerical Mathematics 36.
Chan, T. F., Chow, E., Saad, Y. and Yeung, M. C. (1998). Preserving symmetry in preconditioned Krylov subspace methods. SIAM Journal on Scientific Computing 20.
Chen, J.-Y., Kincaid, D. R. and Young, D. M. (1999). Generalizations and modifications of the GMRES iterative method. Numerical Algorithms 21. Numerical methods for partial differential equations (Marrakech, 1998).
Eiermann, M. and Ernst, O. G. (2001). Geometric aspects of the theory of Krylov subspace methods. Acta Numerica 10.
Elman, H., Silvester, D. and Wathen, A. (2014). Finite Elements and Fast Iterative Solvers: with Applications in Incompressible Fluid Dynamics. Numerical Mathematics and Scientific Computation. Oxford University Press, 2nd ed.

95 References II
Ernst, O. (2000). Minimal and orthogonal residual methods and their generalizations for solving linear operator equations. Habilitation thesis, Faculty of Mathematics and Computer Sciences, TU Bergakademie Freiberg.
Gasparo, M. G., Papini, A. and Pasquali, A. (2008). Some properties of GMRES in Hilbert spaces. Numerical Functional Analysis and Optimization 29.
Günnel, A., Herzog, R. and Sachs, E. (2014). A note on preconditioners and scalar products in Krylov subspace methods for self-adjoint problems in Hilbert space. Electronic Transactions on Numerical Analysis 41: 13-20.
Klawonn, A. (1998). Block-triangular preconditioners for saddle point problems with a penalty term. SIAM Journal on Scientific Computing 19. Special issue on iterative methods (Copper Mountain, CO, 1996).
Moret, I. (1997). A note on the superlinear convergence of GMRES. SIAM Journal on Numerical Analysis 34.
Pestana, J. and Wathen, A. J. (2013). On the choice of preconditioner for minimum residual methods for non-Hermitian matrices. Journal of Computational and Applied Mathematics 249: 57-68.
Saad, Y. (2003). Iterative Methods for Sparse Linear Systems. Philadelphia: SIAM, 2nd ed.

96 References III
Saad, Y. and Schultz, M. H. (1986). GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM Journal on Scientific and Statistical Computing 7: 856-869.
Sarkis, M. and Szyld, D. B. (2007). Optimal left and right additive Schwarz preconditioning for minimal residual methods with Euclidean and energy norms. Computer Methods in Applied Mechanics and Engineering 196.
Simoncini, V. (2004). Block triangular preconditioners for symmetric saddle-point problems. Applied Numerical Mathematics 49: 63-80.
Starke, G. (1997). Field-of-values analysis of preconditioned iterative methods for nonsymmetric elliptic problems. Numerische Mathematik 78.


More information

9.1 Preconditioned Krylov Subspace Methods

9.1 Preconditioned Krylov Subspace Methods Chapter 9 PRECONDITIONING 9.1 Preconditioned Krylov Subspace Methods 9.2 Preconditioned Conjugate Gradient 9.3 Preconditioned Generalized Minimal Residual 9.4 Relaxation Method Preconditioners 9.5 Incomplete

More information

Preconditioned inverse iteration and shift-invert Arnoldi method

Preconditioned inverse iteration and shift-invert Arnoldi method Preconditioned inverse iteration and shift-invert Arnoldi method Melina Freitag Department of Mathematical Sciences University of Bath CSC Seminar Max-Planck-Institute for Dynamics of Complex Technical

More information

SOLVING HERMITIAN POSITIVE DEFINITE SYSTEMS USING INDEFINITE INCOMPLETE FACTORIZATIONS

SOLVING HERMITIAN POSITIVE DEFINITE SYSTEMS USING INDEFINITE INCOMPLETE FACTORIZATIONS SOLVING HERMITIAN POSITIVE DEFINITE SYSTEMS USING INDEFINITE INCOMPLETE FACTORIZATIONS HAIM AVRON, ANSHUL GUPTA, AND SIVAN TOLEDO Abstract. Incomplete LDL factorizations sometimes produce an indenite preconditioner

More information

STOPPING CRITERIA FOR MIXED FINITE ELEMENT PROBLEMS

STOPPING CRITERIA FOR MIXED FINITE ELEMENT PROBLEMS STOPPING CRITERIA FOR MIXED FINITE ELEMENT PROBLEMS M. ARIOLI 1,3 AND D. LOGHIN 2 Abstract. We study stopping criteria that are suitable in the solution by Krylov space based methods of linear and non

More information

UNIVERSITY OF MARYLAND PRECONDITIONERS FOR THE DISCRETE STEADY-STATE NAVIER-STOKES EQUATIONS CS-TR #4164 / UMIACS TR # JULY 2000

UNIVERSITY OF MARYLAND PRECONDITIONERS FOR THE DISCRETE STEADY-STATE NAVIER-STOKES EQUATIONS CS-TR #4164 / UMIACS TR # JULY 2000 UNIVERSITY OF MARYLAND INSTITUTE FOR ADVANCED COMPUTER STUDIES DEPARTMENT OF COMPUTER SCIENCE PERFORMANCE AND ANALYSIS OF SADDLE POINT PRECONDITIONERS FOR THE DISCRETE STEADY-STATE NAVIER-STOKES EQUATIONS

More information

Key words. conjugate gradients, normwise backward error, incremental norm estimation.

Key words. conjugate gradients, normwise backward error, incremental norm estimation. Proceedings of ALGORITMY 2016 pp. 323 332 ON ERROR ESTIMATION IN THE CONJUGATE GRADIENT METHOD: NORMWISE BACKWARD ERROR PETR TICHÝ Abstract. Using an idea of Duff and Vömel [BIT, 42 (2002), pp. 300 322

More information

Research Article Some Generalizations and Modifications of Iterative Methods for Solving Large Sparse Symmetric Indefinite Linear Systems

Research Article Some Generalizations and Modifications of Iterative Methods for Solving Large Sparse Symmetric Indefinite Linear Systems Abstract and Applied Analysis Article ID 237808 pages http://dxdoiorg/055/204/237808 Research Article Some Generalizations and Modifications of Iterative Methods for Solving Large Sparse Symmetric Indefinite

More information

OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative methods ffl Krylov subspace methods ffl Preconditioning techniques: Iterative methods ILU

OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative methods ffl Krylov subspace methods ffl Preconditioning techniques: Iterative methods ILU Preconditioning Techniques for Solving Large Sparse Linear Systems Arnold Reusken Institut für Geometrie und Praktische Mathematik RWTH-Aachen OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative

More information

ITERATIVE METHODS BASED ON KRYLOV SUBSPACES

ITERATIVE METHODS BASED ON KRYLOV SUBSPACES ITERATIVE METHODS BASED ON KRYLOV SUBSPACES LONG CHEN We shall present iterative methods for solving linear algebraic equation Au = b based on Krylov subspaces We derive conjugate gradient (CG) method

More information

M.A. Botchev. September 5, 2014

M.A. Botchev. September 5, 2014 Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014 A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods. Plan and exercises. M.A. Botchev

More information

Mathematics and Computer Science

Mathematics and Computer Science Technical Report TR-2007-002 Block preconditioning for saddle point systems with indefinite (1,1) block by Michele Benzi, Jia Liu Mathematics and Computer Science EMORY UNIVERSITY International Journal

More information

Finding Rightmost Eigenvalues of Large, Sparse, Nonsymmetric Parameterized Eigenvalue Problems

Finding Rightmost Eigenvalues of Large, Sparse, Nonsymmetric Parameterized Eigenvalue Problems Finding Rightmost Eigenvalues of Large, Sparse, Nonsymmetric Parameterized Eigenvalue Problems AMSC 663-664 Final Report Minghao Wu AMSC Program mwu@math.umd.edu Dr. Howard Elman Department of Computer

More information

1 Extrapolation: A Hint of Things to Come

1 Extrapolation: A Hint of Things to Come Notes for 2017-03-24 1 Extrapolation: A Hint of Things to Come Stationary iterations are simple. Methods like Jacobi or Gauss-Seidel are easy to program, and it s (relatively) easy to analyze their convergence.

More information

Department of Computer Science, University of Illinois at Urbana-Champaign

Department of Computer Science, University of Illinois at Urbana-Champaign Department of Computer Science, University of Illinois at Urbana-Champaign Probing for Schur Complements and Preconditioning Generalized Saddle-Point Problems Eric de Sturler, sturler@cs.uiuc.edu, http://www-faculty.cs.uiuc.edu/~sturler

More information

Chapter 7 Iterative Techniques in Matrix Algebra

Chapter 7 Iterative Techniques in Matrix Algebra Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition

More information

Peter Deuhard. for Symmetric Indenite Linear Systems

Peter Deuhard. for Symmetric Indenite Linear Systems Peter Deuhard A Study of Lanczos{Type Iterations for Symmetric Indenite Linear Systems Preprint SC 93{6 (March 993) Contents 0. Introduction. Basic Recursive Structure 2. Algorithm Design Principles 7

More information

C. Vuik 1 R. Nabben 2 J.M. Tang 1

C. Vuik 1 R. Nabben 2 J.M. Tang 1 Deflation acceleration of block ILU preconditioned Krylov methods C. Vuik 1 R. Nabben 2 J.M. Tang 1 1 Delft University of Technology J.M. Burgerscentrum 2 Technical University Berlin Institut für Mathematik

More information

PROJECTED GMRES AND ITS VARIANTS

PROJECTED GMRES AND ITS VARIANTS PROJECTED GMRES AND ITS VARIANTS Reinaldo Astudillo Brígida Molina rastudillo@kuaimare.ciens.ucv.ve bmolina@kuaimare.ciens.ucv.ve Centro de Cálculo Científico y Tecnológico (CCCT), Facultad de Ciencias,

More information

Linear Solvers. Andrew Hazel

Linear Solvers. Andrew Hazel Linear Solvers Andrew Hazel Introduction Thus far we have talked about the formulation and discretisation of physical problems...... and stopped when we got to a discrete linear system of equations. Introduction

More information

Preface to the Second Edition. Preface to the First Edition

Preface to the Second Edition. Preface to the First Edition n page v Preface to the Second Edition Preface to the First Edition xiii xvii 1 Background in Linear Algebra 1 1.1 Matrices................................. 1 1.2 Square Matrices and Eigenvalues....................

More information

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.4: Krylov Subspace Methods 2 / 51 Krylov Subspace Methods In this chapter we give

More information

A MULTIGRID ALGORITHM FOR. Richard E. Ewing and Jian Shen. Institute for Scientic Computation. Texas A&M University. College Station, Texas SUMMARY

A MULTIGRID ALGORITHM FOR. Richard E. Ewing and Jian Shen. Institute for Scientic Computation. Texas A&M University. College Station, Texas SUMMARY A MULTIGRID ALGORITHM FOR THE CELL-CENTERED FINITE DIFFERENCE SCHEME Richard E. Ewing and Jian Shen Institute for Scientic Computation Texas A&M University College Station, Texas SUMMARY In this article,

More information

Krylov Space Methods. Nonstationary sounds good. Radu Trîmbiţaş ( Babeş-Bolyai University) Krylov Space Methods 1 / 17

Krylov Space Methods. Nonstationary sounds good. Radu Trîmbiţaş ( Babeş-Bolyai University) Krylov Space Methods 1 / 17 Krylov Space Methods Nonstationary sounds good Radu Trîmbiţaş Babeş-Bolyai University Radu Trîmbiţaş ( Babeş-Bolyai University) Krylov Space Methods 1 / 17 Introduction These methods are used both to solve

More information

A short course on: Preconditioned Krylov subspace methods. Yousef Saad University of Minnesota Dept. of Computer Science and Engineering

A short course on: Preconditioned Krylov subspace methods. Yousef Saad University of Minnesota Dept. of Computer Science and Engineering A short course on: Preconditioned Krylov subspace methods Yousef Saad University of Minnesota Dept. of Computer Science and Engineering Universite du Littoral, Jan 19-3, 25 Outline Part 1 Introd., discretization

More information

The quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying

The quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying I.2 Quadratic Eigenvalue Problems 1 Introduction The quadratic eigenvalue problem QEP is to find scalars λ and nonzero vectors u satisfying where Qλx = 0, 1.1 Qλ = λ 2 M + λd + K, M, D and K are given

More information

Arnoldi Methods in SLEPc

Arnoldi Methods in SLEPc Scalable Library for Eigenvalue Problem Computations SLEPc Technical Report STR-4 Available at http://slepc.upv.es Arnoldi Methods in SLEPc V. Hernández J. E. Román A. Tomás V. Vidal Last update: October,

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 3: Iterative Methods PD

More information

Lecture 17 Methods for System of Linear Equations: Part 2. Songting Luo. Department of Mathematics Iowa State University

Lecture 17 Methods for System of Linear Equations: Part 2. Songting Luo. Department of Mathematics Iowa State University Lecture 17 Methods for System of Linear Equations: Part 2 Songting Luo Department of Mathematics Iowa State University MATH 481 Numerical Methods for Differential Equations Songting Luo ( Department of

More information

Spectral Properties of Saddle Point Linear Systems and Relations to Iterative Solvers Part I: Spectral Properties. V. Simoncini

Spectral Properties of Saddle Point Linear Systems and Relations to Iterative Solvers Part I: Spectral Properties. V. Simoncini Spectral Properties of Saddle Point Linear Systems and Relations to Iterative Solvers Part I: Spectral Properties V. Simoncini Dipartimento di Matematica, Università di ologna valeria@dm.unibo.it 1 Outline

More information

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection Eigenvalue Problems Last Time Social Network Graphs Betweenness Girvan-Newman Algorithm Graph Laplacian Spectral Bisection λ 2, w 2 Today Small deviation into eigenvalue problems Formulation Standard eigenvalue

More information

The solution of the discretized incompressible Navier-Stokes equations with iterative methods

The solution of the discretized incompressible Navier-Stokes equations with iterative methods The solution of the discretized incompressible Navier-Stokes equations with iterative methods Report 93-54 C. Vuik Technische Universiteit Delft Delft University of Technology Faculteit der Technische

More information

Preconditioning of Saddle Point Systems by Substructuring and a Penalty Approach

Preconditioning of Saddle Point Systems by Substructuring and a Penalty Approach Preconditioning of Saddle Point Systems by Substructuring and a Penalty Approach Clark R. Dohrmann 1 Sandia National Laboratories, crdohrm@sandia.gov. Sandia is a multiprogram laboratory operated by Sandia

More information

EECS 275 Matrix Computation

EECS 275 Matrix Computation EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA 95344 http://faculty.ucmerced.edu/mhyang Lecture 20 1 / 20 Overview

More information

Low Rank Approximation Lecture 7. Daniel Kressner Chair for Numerical Algorithms and HPC Institute of Mathematics, EPFL

Low Rank Approximation Lecture 7. Daniel Kressner Chair for Numerical Algorithms and HPC Institute of Mathematics, EPFL Low Rank Approximation Lecture 7 Daniel Kressner Chair for Numerical Algorithms and HPC Institute of Mathematics, EPFL daniel.kressner@epfl.ch 1 Alternating least-squares / linear scheme General setting:

More information

Preconditioned Conjugate Gradient-Like Methods for. Nonsymmetric Linear Systems 1. Ulrike Meier Yang 2. July 19, 1994

Preconditioned Conjugate Gradient-Like Methods for. Nonsymmetric Linear Systems 1. Ulrike Meier Yang 2. July 19, 1994 Preconditioned Conjugate Gradient-Like Methods for Nonsymmetric Linear Systems Ulrike Meier Yang 2 July 9, 994 This research was supported by the U.S. Department of Energy under Grant No. DE-FG2-85ER25.

More information

Multigrid absolute value preconditioning

Multigrid absolute value preconditioning Multigrid absolute value preconditioning Eugene Vecharynski 1 Andrew Knyazev 2 (speaker) 1 Department of Computer Science and Engineering University of Minnesota 2 Department of Mathematical and Statistical

More information

WHEN studying distributed simulations of power systems,

WHEN studying distributed simulations of power systems, 1096 IEEE TRANSACTIONS ON POWER SYSTEMS, VOL 21, NO 3, AUGUST 2006 A Jacobian-Free Newton-GMRES(m) Method with Adaptive Preconditioner and Its Application for Power Flow Calculations Ying Chen and Chen

More information

Preconditioners for the incompressible Navier Stokes equations

Preconditioners for the incompressible Navier Stokes equations Preconditioners for the incompressible Navier Stokes equations C. Vuik M. ur Rehman A. Segal Delft Institute of Applied Mathematics, TU Delft, The Netherlands SIAM Conference on Computational Science and

More information

Lecture 3: Inexact inverse iteration with preconditioning

Lecture 3: Inexact inverse iteration with preconditioning Lecture 3: Department of Mathematical Sciences CLAPDE, Durham, July 2008 Joint work with M. Freitag (Bath), and M. Robbé & M. Sadkane (Brest) 1 Introduction 2 Preconditioned GMRES for Inverse Power Method

More information

Numerical behavior of inexact linear solvers

Numerical behavior of inexact linear solvers Numerical behavior of inexact linear solvers Miro Rozložník joint results with Zhong-zhi Bai and Pavel Jiránek Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic The fourth

More information

Structured Preconditioners for Saddle Point Problems

Structured Preconditioners for Saddle Point Problems Structured Preconditioners for Saddle Point Problems V. Simoncini Dipartimento di Matematica Università di Bologna valeria@dm.unibo.it p. 1 Collaborators on this project Mario Arioli, RAL, UK Michele Benzi,

More information

Augmented GMRES-type methods

Augmented GMRES-type methods Augmented GMRES-type methods James Baglama 1 and Lothar Reichel 2, 1 Department of Mathematics, University of Rhode Island, Kingston, RI 02881. E-mail: jbaglama@math.uri.edu. Home page: http://hypatia.math.uri.edu/

More information

Recent advances in approximation using Krylov subspaces. V. Simoncini. Dipartimento di Matematica, Università di Bologna.

Recent advances in approximation using Krylov subspaces. V. Simoncini. Dipartimento di Matematica, Università di Bologna. Recent advances in approximation using Krylov subspaces V. Simoncini Dipartimento di Matematica, Università di Bologna and CIRSA, Ravenna, Italy valeria@dm.unibo.it 1 The framework It is given an operator

More information

Laboratoire d'informatique Fondamentale de Lille

Laboratoire d'informatique Fondamentale de Lille Laboratoire d'informatique Fondamentale de Lille Publication AS-181 Modied Krylov acceleration for parallel environments C. Le Calvez & Y. Saad February 1998 c LIFL USTL UNIVERSITE DES SCIENCES ET TECHNOLOGIES

More information

Reduced Synchronization Overhead on. December 3, Abstract. The standard formulation of the conjugate gradient algorithm involves

Reduced Synchronization Overhead on. December 3, Abstract. The standard formulation of the conjugate gradient algorithm involves Lapack Working Note 56 Conjugate Gradient Algorithms with Reduced Synchronization Overhead on Distributed Memory Multiprocessors E. F. D'Azevedo y, V.L. Eijkhout z, C. H. Romine y December 3, 1999 Abstract

More information

GMRESR: A family of nested GMRES methods

GMRESR: A family of nested GMRES methods GMRESR: A family of nested GMRES methods Report 91-80 H.A. van der Vorst C. Vui Technische Universiteit Delft Delft University of Technology Faculteit der Technische Wisunde en Informatica Faculty of Technical

More information

IDR(s) as a projection method

IDR(s) as a projection method Delft University of Technology Faculty of Electrical Engineering, Mathematics and Computer Science Delft Institute of Applied Mathematics IDR(s) as a projection method A thesis submitted to the Delft Institute

More information

Numerische Mathematik

Numerische Mathematik Numer. Math. (2002) 90: 665 688 Digital Object Identifier (DOI) 10.1007/s002110100300 Numerische Mathematik Performance and analysis of saddle point preconditioners for the discrete steady-state Navier-Stokes

More information

1 Conjugate gradients

1 Conjugate gradients Notes for 2016-11-18 1 Conjugate gradients We now turn to the method of conjugate gradients (CG), perhaps the best known of the Krylov subspace solvers. The CG iteration can be characterized as the iteration

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

ITERATIVE METHODS FOR SPARSE LINEAR SYSTEMS

ITERATIVE METHODS FOR SPARSE LINEAR SYSTEMS ITERATIVE METHODS FOR SPARSE LINEAR SYSTEMS YOUSEF SAAD University of Minnesota PWS PUBLISHING COMPANY I(T)P An International Thomson Publishing Company BOSTON ALBANY BONN CINCINNATI DETROIT LONDON MADRID

More information

Elements of linear algebra

Elements of linear algebra Elements of linear algebra Elements of linear algebra A vector space S is a set (numbers, vectors, functions) which has addition and scalar multiplication defined, so that the linear combination c 1 v

More information

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Eugene Vecharynski 1 Andrew Knyazev 2 1 Department of Computer Science and Engineering University of Minnesota 2 Department

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

The Conjugate Gradient Method

The Conjugate Gradient Method The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large

More information

Inexactness and flexibility in linear Krylov solvers

Inexactness and flexibility in linear Krylov solvers Inexactness and flexibility in linear Krylov solvers Luc Giraud ENSEEIHT (N7) - IRIT, Toulouse Matrix Analysis and Applications CIRM Luminy - October 15-19, 2007 in honor of Gérard Meurant for his 60 th

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT 10-16 An elegant IDR(s) variant that efficiently exploits bi-orthogonality properties Martin B. van Gijzen and Peter Sonneveld ISSN 1389-6520 Reports of the Department

More information

Master Thesis Literature Study Presentation

Master Thesis Literature Study Presentation Master Thesis Literature Study Presentation Delft University of Technology The Faculty of Electrical Engineering, Mathematics and Computer Science January 29, 2010 Plaxis Introduction Plaxis Finite Element

More information

XIAO-CHUAN CAI AND MAKSYMILIAN DRYJA. strongly elliptic equations discretized by the nite element methods.

XIAO-CHUAN CAI AND MAKSYMILIAN DRYJA. strongly elliptic equations discretized by the nite element methods. Contemporary Mathematics Volume 00, 0000 Domain Decomposition Methods for Monotone Nonlinear Elliptic Problems XIAO-CHUAN CAI AND MAKSYMILIAN DRYJA Abstract. In this paper, we study several overlapping

More information

Parallel sparse linear solvers and applications in CFD

Parallel sparse linear solvers and applications in CFD Parallel sparse linear solvers and applications in CFD Jocelyne Erhel Joint work with Désiré Nuentsa Wakam () and Baptiste Poirriez () SAGE team, Inria Rennes, France journée Calcul Intensif Distribué

More information

Lecture 8: Fast Linear Solvers (Part 7)

Lecture 8: Fast Linear Solvers (Part 7) Lecture 8: Fast Linear Solvers (Part 7) 1 Modified Gram-Schmidt Process with Reorthogonalization Test Reorthogonalization If Av k 2 + δ v k+1 2 = Av k 2 to working precision. δ = 10 3 2 Householder Arnoldi

More information

Approximating the matrix exponential of an advection-diffusion operator using the incomplete orthogonalization method

Approximating the matrix exponential of an advection-diffusion operator using the incomplete orthogonalization method Approximating the matrix exponential of an advection-diffusion operator using the incomplete orthogonalization method Antti Koskela KTH Royal Institute of Technology, Lindstedtvägen 25, 10044 Stockholm,

More information

CLASSICAL ITERATIVE METHODS

CLASSICAL ITERATIVE METHODS CLASSICAL ITERATIVE METHODS LONG CHEN In this notes we discuss classic iterative methods on solving the linear operator equation (1) Au = f, posed on a finite dimensional Hilbert space V = R N equipped

More information

AMS Mathematics Subject Classification : 65F10,65F50. Key words and phrases: ILUS factorization, preconditioning, Schur complement, 1.

AMS Mathematics Subject Classification : 65F10,65F50. Key words and phrases: ILUS factorization, preconditioning, Schur complement, 1. J. Appl. Math. & Computing Vol. 15(2004), No. 1, pp. 299-312 BILUS: A BLOCK VERSION OF ILUS FACTORIZATION DAVOD KHOJASTEH SALKUYEH AND FAEZEH TOUTOUNIAN Abstract. ILUS factorization has many desirable

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 19: More on Arnoldi Iteration; Lanczos Iteration Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 17 Outline 1

More information

Key words. preconditioned conjugate gradient method, saddle point problems, optimal control of PDEs, control and state constraints, multigrid method

Key words. preconditioned conjugate gradient method, saddle point problems, optimal control of PDEs, control and state constraints, multigrid method PRECONDITIONED CONJUGATE GRADIENT METHOD FOR OPTIMAL CONTROL PROBLEMS WITH CONTROL AND STATE CONSTRAINTS ROLAND HERZOG AND EKKEHARD SACHS Abstract. Optimality systems and their linearizations arising in

More information

IDR(s) Master s thesis Goushani Kisoensingh. Supervisor: Gerard L.G. Sleijpen Department of Mathematics Universiteit Utrecht

IDR(s) Master s thesis Goushani Kisoensingh. Supervisor: Gerard L.G. Sleijpen Department of Mathematics Universiteit Utrecht IDR(s) Master s thesis Goushani Kisoensingh Supervisor: Gerard L.G. Sleijpen Department of Mathematics Universiteit Utrecht Contents 1 Introduction 2 2 The background of Bi-CGSTAB 3 3 IDR(s) 4 3.1 IDR.............................................

More information