Research Article Some Generalizations and Modifications of Iterative Methods for Solving Large Sparse Symmetric Indefinite Linear Systems


Abstract and Applied Analysis

Research Article
Some Generalizations and Modifications of Iterative Methods for Solving Large Sparse Symmetric Indefinite Linear Systems

Yu-Chien Li,¹ Jen-Yuan Chen,¹ and David R. Kincaid²

¹ Department of Mathematics, National Kaohsiung Normal University, Kaohsiung 824, Taiwan
² Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, TX 78712, USA

Correspondence should be addressed to Jen-Yuan Chen; jchen@nknu.edu.tw

Received 27 November 2013; Revised 10 January 2014; Accepted 14 February 2014; Published 13 April 2014

Academic Editor: Chi-Ming Chen

Copyright © 2014 Yu-Chien Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We discuss a variety of iterative methods that are based on the Arnoldi process for solving large sparse symmetric indefinite linear systems. We describe the SYMMLQ and SYMMQR methods as well as generalizations and modifications of them. Then we cover the Lanczos/MSYMMLQ and Lanczos/MSYMMQR methods, which arise from a double linear system. We present pseudocodes for these algorithms.

The authors dedicate this paper to the memory of Professor David M. Young, Jr., for his pioneering research, inspirational teaching, and exceptional life.

1. Introduction

Frequently, when computing numerical solutions of partial differential equations, one needs to solve systems of very large sparse linear algebraic equations of the form

    Ax = b,      (1)

where A is an n × n matrix, b is an n-vector, and one seeks a numerical solution vector x or a good approximation of it. Particularly for large linear systems arising from partial differential equations in three dimensions, well-known direct methods such as Gaussian elimination may become prohibitively expensive in terms of both computer storage and computer time. On the other hand, a variety of iterative methods may avoid
these difficulties. For linear systems involving symmetric positive definite (SPD) matrices, the conjugate gradient (CG) method (and variations of it) may work well. However, when the coefficient matrix A of the linear system is symmetric indefinite, the choice of a suitable iterative method is not at all clear. In that situation, the SYMMLQ and MINRES methods have been shown to be useful (see Paige and Saunders [1]). For nonsymmetric systems, Saad and Schultz [2] generalized the MINRES method to obtain the GMRES method.

In Section 2, we review the Arnoldi process. In Sections 3 and 4, we describe the SYMMLQ and SYMMQR methods. Then we generalize them in Section 5, and we outline the modified SYMMLQ method in Section 6. Next, in Section 7, we discuss applying the MSYMMLQ and MSYMMQR methods to a double linear system. Finally, we present pseudocodes in Sections 8–11.

2. Arnoldi Process

We begin with a review of the Arnoldi process.

Theorem 1. Suppose that A is an n × n symmetric matrix. One can generate orthonormal vectors w^(0), w^(1), ..., w^(n−2), w^(n−1) using this short-term recurrence:

    w̃^(j+1) = Aw^(j) − α_j w^(j) − β_j w^(j−1)   (0 ≤ j ≤ n − 2),
    w^(j+1) = (1/σ_{j+1}) w̃^(j+1),   σ_{j+1} = ⟨w̃^(j+1), w̃^(j+1)⟩^{1/2},      (2)
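The short-term recurrence in Theorem 1 can be illustrated numerically. The following is a minimal self-contained sketch in plain Python; the 4 × 4 symmetric matrix and starting vector are arbitrary illustrative choices, not data from the paper.

```python
# Symmetric Lanczos short-term recurrence (Theorem 1), plain Python.

def matvec(Mat, v):
    return [sum(Mat[i][j] * v[j] for j in range(len(v))) for i in range(len(Mat))]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def lanczos(A, r0, m):
    """Generate m orthonormal vectors w^(0), ..., w^(m-1)."""
    n = len(A)
    sigma0 = dot(r0, r0) ** 0.5
    W = [[x / sigma0 for x in r0]]        # w^(0) = r^(0)/||r^(0)||
    w_prev, beta = [0.0] * n, 0.0         # w^(-1) = 0
    alphas, sigmas = [], []
    for j in range(m - 1):
        Aw = matvec(A, W[j])
        alpha = dot(Aw, W[j])             # alpha_j = <A w^(j), w^(j)>
        wt = [Aw[i] - alpha * W[j][i] - beta * w_prev[i] for i in range(n)]
        sigma = dot(wt, wt) ** 0.5        # sigma_{j+1} = ||w~^(j+1)||
        alphas.append(alpha)
        sigmas.append(sigma)              # property (4): beta_{j+1} = sigma_{j+1}
        w_prev, beta = W[j], sigma
        W.append([x / sigma for x in wt])
    return W, alphas, sigmas

A = [[4.0, 1.0, 0.0, 2.0],
     [1.0, -3.0, 1.0, 0.0],
     [0.0, 1.0, 2.0, 1.0],
     [2.0, 0.0, 1.0, -1.0]]              # symmetric and indefinite
W, alphas, sigmas = lanczos(A, [1.0, 0.0, 1.0, 0.5], 4)

# The generated vectors are orthonormal: <w^(i), w^(j)> = delta_ij.
for i in range(4):
    for j in range(4):
        assert abs(dot(W[i], W[j]) - (1.0 if i == j else 0.0)) < 1e-10
```

In exact arithmetic the three-term recurrence alone enforces orthogonality against all earlier vectors; for this tiny example the rounding error stays far below the tolerance used in the checks.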

where

    α_j = ⟨Aw^(j), w^(j)⟩,   β_j = ⟨Aw^(j), w^(j−1)⟩.      (3)

Here, one assumes that w^(−1) = 0 and σ_j ≠ 0 for all j. Then the following properties hold for 0 ≤ i, j ≤ n − 1:

    ⟨w^(i), w^(j)⟩ = δ_ij,   β_j = σ_j.      (4)

If we let w^(0) = r^(0) = b − Au^(0), then the subspace

    Span{w^(0), w^(1), ..., w^(n−2), w^(n−1)}      (5)

is equivalent to the Krylov subspace

    K_n(r^(0), A) = Span{r^(0), Ar^(0), ..., A^(n−2) r^(0), A^(n−1) r^(0)}.      (6)

Proof. We obtain

    β_j = ⟨Aw^(j), w^(j−1)⟩                                        (by (3))
        = ⟨w^(j), Aw^(j−1)⟩                                        (Aᵀ = A)
        = ⟨w^(j), σ_j w^(j) + α_{j−1} w^(j−1) + β_{j−1} w^(j−2)⟩   (by (2))
        = σ_j,      (7)

since ⟨w^(i), w^(j)⟩ = δ_ij.

Example 2. We illustrate Theorem 1, in matrix form, for the case n = 3. From (2) and (3), we have

    σ_1 w^(1) = w̃^(1) = Aw^(0) − α_0 w^(0) − β_0 w^(−1),
    σ_2 w^(2) = w̃^(2) = Aw^(1) − α_1 w^(1) − β_1 w^(0),
    σ_3 w^(3) = w̃^(3) = Aw^(2) − α_2 w^(2) − β_2 w^(1).      (8)

Consequently, since w^(−1) = 0, we obtain

    AW_3 = A[w^(0), w^(1), w^(2)] = [Aw^(0), Aw^(1), Aw^(2)]
         = [α_0 w^(0) + σ_1 w^(1),  β_1 w^(0) + α_1 w^(1) + σ_2 w^(2),  β_2 w^(1) + α_2 w^(2) + σ_3 w^(3)]      (9)

         = [w^(0), w^(1), w^(2), w^(3)] [ α_0  β_1   0  ]
                                        [ σ_1  α_1  β_2 ]
                                        [  0   σ_2  α_2 ]
                                        [  0    0   σ_3 ] .      (10)

So we obtain

    AW_3 = W_3 T_3 + σ_3 w^(3) e_3ᵀ = W_4 T̄_4.      (11)

From Theorem 1, in matrix form, it follows in general that

    AW_n = W_n T_n + σ_n w^(n) e_nᵀ = W_{n+1} T̄_{n+1},      (12)

where W_n = [w^(0), w^(1), ..., w^(n−2), w^(n−1)] (n × n),

    T_n = [ α_0  β_1                        ]
          [ σ_1  α_1  β_2                   ]
          [      σ_2  α_2  ⋱                ]
          [           ⋱    ⋱     β_{n−1}    ]
          [            σ_{n−1}   α_{n−1}    ]   (n × n),

and T̄_{n+1} is T_n with the extra row σ_n e_nᵀ appended ((n + 1) × n).

3. SYMMLQ Method

We choose u^(n) such that u^(n) − u^(0) ∈ K_n(r^(0), A). Hence, we have

    u^(n) = u^(0) + W_n y^(n),   r^(n) = r^(0) − AW_n y^(n).      (13)

Imposing the Galerkin condition r^(n) ⊥ K_n(r^(0), A), we obtain

    (r^(n))ᵀ W_n = 0,   W_nᵀ r^(n) = 0,   W_nᵀ r^(0) = W_nᵀ A W_n y^(n)   (by (13)).      (14)

We obtain

    T_n y^(n) = σ_0 e_1,      (15)

because

    W_nᵀ A W_n = T_n,   W_nᵀ r^(0) = σ_0 e_1,   σ_0 = ‖r^(0)‖.      (16)
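The projection identity behind (12) and (16), namely that W_nᵀ A W_n is symmetric tridiagonal, can be checked numerically. A minimal self-contained sketch; the 4 × 4 symmetric matrix and starting vector are arbitrary illustrative choices.

```python
# Check that W^T A W from the Lanczos vectors is symmetric tridiagonal.

def matvec(Mat, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in Mat]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

A = [[2.0, 1.0, 0.0, 1.0],
     [1.0, -1.0, 2.0, 0.0],
     [0.0, 2.0, 3.0, 1.0],
     [1.0, 0.0, 1.0, -2.0]]               # symmetric test matrix
r0 = [1.0, 1.0, 0.0, 1.0]

# three vectors from the recurrence of Theorem 1
norm0 = dot(r0, r0) ** 0.5
W = [[x / norm0 for x in r0]]
w_prev, beta = [0.0] * 4, 0.0
for j in range(2):
    Aw = matvec(A, W[j])
    alpha = dot(Aw, W[j])
    wt = [Aw[i] - alpha * W[j][i] - beta * w_prev[i] for i in range(4)]
    sigma = dot(wt, wt) ** 0.5
    w_prev, beta = W[j], sigma
    W.append([x / sigma for x in wt])

# T = W^T A W: the (0,2) and (2,0) entries vanish, and T is symmetric,
# reflecting beta_j = sigma_j (property (4)).
T = [[dot(W[i], matvec(A, W[j])) for j in range(3)] for i in range(3)]
assert abs(T[0][2]) < 1e-10 and abs(T[2][0]) < 1e-10
assert abs(T[0][1] - T[1][0]) < 1e-10
```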

Instead of solving for y^(n) directly from the tridiagonal linear system (15), Paige and Saunders [1] factor the matrix T_n into a lower triangular matrix with bandwidth three (resulting in the SYMMLQ method). We have

    T_n Q_n = L̄_n = [ γ_0                                  ]
                    [ δ_1   γ_1                            ]
                    [ ε_2   δ_2   γ_2                      ]
                    [       ⋱     ⋱     ⋱                  ]
                    [     ε_{n−2}  δ_{n−2}  γ_{n−2}        ]
                    [          ε_{n−1}  δ_{n−1}  γ̄_{n−1}   ]   (n × n),      (17)

where Q_n = Q_{12} Q_{23} ⋯ Q_{n−1,n} is an orthogonal matrix composed of Givens rotations

    Q_{i,i+1} = [ I            ]
                [   c_i   s_i  ]
                [   s_i  −c_i  ]
                [            I ]   (n × n),   c_i² + s_i² = 1.      (18)

Since

    T_n Q_n Q_nᵀ y^(n) = L̄_n Q_nᵀ y^(n) = σ_0 e_1,      (19)

letting

    ẑ^(n) = Q_nᵀ y^(n),      (20)

we have

    L̄_n ẑ^(n) = σ_0 e_1.      (21)

Defining

    x^(n) = u^(0) + W_n y^(n),      (22)

we have

    x^(n) = u^(0) + W_n Q_n Q_nᵀ y^(n) = u^(0) + W_n Q_n ẑ^(n),   V_n = W_n Q_n,      (23)
    ẑ^(n) = [ζ̂_0, ζ̂_1, ..., ζ̂_{n−2}, ζ̂_{n−1}]ᵀ.      (24)

We let

    x^(n) = x^(0) + V_n ẑ^(n),      (25)

where

    V_n = [v^(0), v^(1), ..., v^(n−2), v^(n−1)]   (n × n),      (26)
    z^(n) = [ζ_0, ζ_1, ..., ζ_{n−2}, ζ_{n−1}]ᵀ,      (27)

and L_n is L̄_n with γ̄_{n−1} replaced by γ_{n−1}, where

    γ_{n−1} = (γ̄_{n−1}² + β_n²)^{1/2},   c_n = γ̄_{n−1}/γ_{n−1},   s_n = β_n/γ_{n−1}.      (28)

From (21) and (28), we have ζ_{n−1} = c_n ζ̂_{n−1}. Next, letting

    V_{n+1} = [v^(0), v^(1), ..., v^(n−1), v̄^(n)]
            = W_{n+1} Q_{n+1}
            = [W_n, w^(n)] Q_{12} Q_{23} ⋯ Q_{n−1,n} Q_{n,n+1}
            = [V_n, w^(n)] Q_{n,n+1}
            = [v^(0), v^(1), ..., v̄^(n−1), w^(n)] Q_{n,n+1},      (29)

we have

    v^(n−1) = c_n v̄^(n−1) + s_n w^(n),
    v̄^(n) = s_n v̄^(n−1) − c_n w^(n).      (30)

If β_n ≠ 0, then L_n is nonsingular. We can find z^(n) by solving L_n z^(n) = σ_0 e_1, and then

    u^(n) = u^(0) + V_n z^(n) = u^(n−1) + ζ_{n−1} v^(n−1).      (31)

4. SYMMQR Method

We choose u^(n) such that u^(n) − u^(0) ∈ K_n(r^(0), A). Hence, we have

    u^(n) = u^(0) + W_n y^(n),   r^(n) = r^(0) − AW_n y^(n).      (32)
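The LQ step in (17)-(18) can be sketched numerically: rotations of the form in (18), applied on the right of a symmetric tridiagonal matrix, annihilate the superdiagonal and leave a lower triangular factor with three bands. The 4 × 4 tridiagonal matrix below is an arbitrary illustrative choice.

```python
# Right-applied Givens-type rotations: T Q = L (lower triangular, bandwidth 3).

T = [[2.0, 1.0, 0.0, 0.0],
     [1.0, -1.0, 3.0, 0.0],
     [0.0, 3.0, 1.0, 2.0],
     [0.0, 0.0, 2.0, -2.0]]               # symmetric tridiagonal
n = 4
L = [row[:] for row in T]

for i in range(n - 1):
    a, b = L[i][i], L[i][i + 1]           # keep a, zero out b
    rho = (a * a + b * b) ** 0.5
    c, s = a / rho, b / rho               # rotation [[c, s], [s, -c]], c^2 + s^2 = 1
    for k in range(n):                    # combine columns i and i+1 in every row
        L[k][i], L[k][i + 1] = (c * L[k][i] + s * L[k][i + 1],
                                s * L[k][i] - c * L[k][i + 1])

# Everything above the diagonal has been zeroed out.
for i in range(n):
    for j in range(i + 1, n):
        assert abs(L[i][j]) < 1e-12
```

Each rotation zeroes one superdiagonal entry while creating at most one fill-in two positions below the diagonal, which is exactly why the factor has bandwidth three.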

Imposing the Galerkin condition r^(n) ⊥ K_n(r^(0), A) as before, we obtain

    (r^(n))ᵀ W_n = 0,   W_nᵀ r^(n) = 0,   W_nᵀ r^(0) = W_nᵀ A W_n y^(n)   (by (32)).      (33)

Since we have

    W_nᵀ A W_n = T_n,   W_nᵀ r^(0) = σ_0 e_1,   σ_0 = ‖r^(0)‖,      (34)

it follows that

    T_n y^(n) = σ_0 e_1.      (35)

Instead of solving for y^(n) directly from the tridiagonal system (35), Paige and Saunders [1] factorized the matrix T_n into a lower triangular matrix with bandwidth three. We can use a different factorization of T_n to obtain a slightly different method, which is called the SYMMQR method.

We multiply the matrix T_n by an orthogonal matrix on the left-hand side instead of the right-hand side. We have

    Q_n T_n ŷ^(n) = Q_n σ_0 e_1,      (36)

where

    Q_n T_n = R = [ γ_0  δ_1  ε_2                        ]
                  [      γ_1  δ_2  ε_3                   ]
                  [           γ_2  δ_3  ε_4              ]
                  [                ⋱    ⋱    ⋱           ]
                  [          γ_{n−3}  δ_{n−2}  ε_{n−1}   ]
                  [                γ_{n−2}  δ_{n−1}      ]
                  [                        γ̄_{n−1}       ]   (n × n).      (37)

We obtain the matrix Q_n = Q_{n−1,n} Q_{n−2,n−1} ⋯ Q_{12}, with

    Q_{i,i+1} = [ I            ]
                [   c_i   s_i  ]
                [   s_i  −c_i  ]
                [            I ]   (n × n),      (38)

with c_i² + s_i² = 1, being the Givens rotation. Letting ŷ^(n) be the solution of

    R ŷ^(n) = Q_n σ_0 e_1,      (39)

then we have

    x^(n) = u^(0) + W_n ŷ^(n) = u^(0) + W_n R⁻¹ Q_n σ_0 e_1,      (40)

which satisfies the Galerkin condition r^(n) ⊥ K_n(r^(0), A), r^(n) = b − Ax^(n). We note that γ̄_{n−1} is not always nonzero, and thus R_n might be singular. We assume that R_n is nonsingular, and then we define

    P_n = W_n R_n⁻¹,      (41)

where

    P_n = [p^(0), p^(1), ..., p^(n−2), p^(n−1)]   (n × n),
    ẑ^(n) = Q_n σ_0 e_1 = [ζ_0, ζ_1, ..., ζ_{n−2}, ζ̂_{n−1}]ᵀ,      (42)
    x^(n) = u^(0) + P_n ẑ^(n).      (43)

For the next iterate x^(n+1), we need to solve

    T_{n+1} ŷ^(n+1) = σ_0 e_1,      (44)

where

    T_{n+1} = [ α_0  β_1                              ]
              [ β_1  α_1  β_2                         ]
              [      β_2  α_2  β_3                    ]
              [           ⋱    ⋱    ⋱                 ]
              [        β_{n−2}  α_{n−2}  β_{n−1}      ]
              [                 β_{n−1}  α_{n−1}  β_n ]
              [                          β_n      α_n ] .      (45)

Applying the Givens rotations Q_n to both sides of (44), we have

    Q_n T_{n+1} = [ γ_0  δ_1  ε_2                      ]
                  [      γ_1  δ_2  ε_3                 ]
                  [           ⋱    ⋱    ⋱              ]
                  [         γ_{n−2}  δ_{n−1}  ε_n      ]
                  [                γ̄_{n−1}   ψ         ]
                  [                    β_n    ᾱ_n      ] ,      (46)

where [0, ..., 0, ε_n, ψ, ᾱ_n]ᵀ = Q_{n−1,n} Q_{n−2,n−1} [0, ..., 0, β_n, α_n]ᵀ. To eliminate β_n, we compute the nth Givens rotation Q_{n,n+1} by

    γ_{n−1} = (γ̄_{n−1}² + β_n²)^{1/2},   s_n = β_n/γ_{n−1},   c_n = γ̄_{n−1}/γ_{n−1}.      (47)

By multiplying Q_{n,n+1} times Q_n T_{n+1} and times Q_n σ_0 e_1, we have

    Q_{n,n+1}(Q_n T_{n+1}) = R̄_{n+1} = [ γ_0  δ_1  ε_2                    ]
                                       [      γ_1  δ_2  ε_3               ]
                                       [           ⋱    ⋱    ⋱            ]
                                       [        γ_{n−2}  δ_{n−1}  ε_n     ]
                                       [              γ_{n−1}   δ_n       ]
                                       [                       γ̄_n        ] ,

    Q_{n,n+1}(Q_n σ_0 e_1) = ẑ^(n+1) = [ζ_0, ζ_1, ..., ζ_{n−1}, ζ̂_n]ᵀ,      (48)

where

    [0, ..., 0, ε_n, δ_n, γ̄_n]ᵀ = Q_{n,n+1} [0, ..., 0, ε_n, ψ, ᾱ_n]ᵀ,
    ζ_{n−1} = c_n ζ̂_{n−1},   ζ̂_n = s_n ζ̂_{n−1}.      (49)

Let

    R_n = [ γ_0  δ_1  ε_2                      ]
          [      γ_1  δ_2  ε_3                 ]
          [           γ_2  δ_3  ε_4            ]
          [                ⋱    ⋱    ⋱         ]
          [       γ_{n−3}  δ_{n−2}  ε_{n−1}    ]
          [             γ_{n−2}  δ_{n−1}       ]
          [                   γ_{n−1}          ]   (n × n),      (50)

and define z^(n) = [ζ_0, ζ_1, ..., ζ_{n−2}, ζ_{n−1}]ᵀ. Since β_n = ‖w̃^(n)‖ ≠ 0, then γ_{n−1} ≠ 0 and R_n is nonsingular. We can solve for y^(n) from

    R_n y^(n) = z^(n).      (51)

We discuss the case β_n = ‖w̃^(n)‖ = 0 later. Consider solving the least squares problem involving y^(n), minimizing ‖T̄_{n+1} y^(n) − σ_0 e_1‖, where

    T̄_{n+1} = [ α_0  β_1                           ]
              [ β_1  α_1  β_2                      ]
              [      β_2  α_2  β_3                 ]
              [           ⋱    ⋱    ⋱              ]
              [       β_{n−2}  α_{n−2}  β_{n−1}    ]
              [                β_{n−1}  α_{n−1}    ]
              [                         β_n        ]   ((n + 1) × n).      (52)

We have

    Q_{n,n+1} Q_n (T̄_{n+1} y^(n) − σ_0 e_1) = [ R_n ] y^(n) − [ζ_0, ζ_1, ..., ζ_{n−1}, ζ̂_n]ᵀ.      (53)
                                              [  0  ]

Hence, the solution y^(n) of R_n y^(n) = z^(n) minimizes ‖T̄_{n+1} y^(n) − σ_0 e_1‖, and |ζ̂_n| = min ‖T̄_{n+1} y^(n) − σ_0 e_1‖. Let

    u^(n) = u^(0) + W_n y^(n) = u^(0) + W_n R_n⁻¹ z^(n),      (54)
    P_n R_n = W_n,   P_n = [p^(0), p^(1), ..., p^(n−2), p^(n−1)].      (55)

Since

    p^(n−1) = (1/γ_{n−1})(w^(n−1) − ε_{n−1} p^(n−3) − δ_{n−1} p^(n−2)),
    u^(n) = u^(0) + P_n z^(n) = u^(n−1) + ζ_{n−1} p^(n−1),      (56)

we obtain

    x^(n+1) = u^(0) + W_{n+1} ŷ^(n+1)      (57)
            = u^(0) + P_{n+1} ẑ^(n+1) = u^(0) + P_n z^(n) + ζ̂_n p^(n) = u^(n) + ζ̂_n p^(n).      (58)

We note that x^(n) is the estimated solution vector satisfying the Galerkin condition, while

    u^(n) = u^(0) + W_n y^(n),      (59)

with y^(n) minimizing ‖T̄_{n+1} y^(n) − σ_0 e_1‖.
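The least-squares step (52)-(53) can be sketched numerically: left-applied Givens rotations reduce the (n + 1) × n matrix T̄ to an upper triangular R, and the last entry of the rotated right-hand side is the minimal residual norm, matching |ζ̂_n|. The sizes and entries below are arbitrary illustrative choices.

```python
# QR of an (n+1) x n tridiagonal-plus-row matrix via left Givens rotations,
# and the resulting minimal residual of ||T_bar y - sigma_0 e_1||.

n = 3
Tb = [[2.0, 1.0, 0.0],
      [1.0, -1.0, 2.0],
      [0.0, 2.0, 1.0],
      [0.0, 0.0, 1.5]]            # tridiagonal part plus the extra row beta_n e_n^T
sigma0 = 2.0
g = [sigma0, 0.0, 0.0, 0.0]       # right-hand side sigma_0 e_1

R = [row[:] for row in Tb]
for i in range(n):                # rotation on rows (i, i+1) zeroes R[i+1][i]
    a, b = R[i][i], R[i + 1][i]
    rho = (a * a + b * b) ** 0.5
    c, s = a / rho, b / rho       # rotation [[c, s], [s, -c]], c^2 + s^2 = 1
    for j in range(n):
        R[i][j], R[i + 1][j] = (c * R[i][j] + s * R[i + 1][j],
                                s * R[i][j] - c * R[i + 1][j])
    g[i], g[i + 1] = c * g[i] + s * g[i + 1], s * g[i] - c * g[i + 1]

# back substitution in the upper triangular n x n part: R y = g[0:n]
y = [0.0] * n
for i in range(n - 1, -1, -1):
    y[i] = (g[i] - sum(R[i][j] * y[j] for j in range(i + 1, n))) / R[i][i]

# the least-squares residual ||T_bar y - sigma_0 e_1|| equals |g[n]|
res = [sum(Tb[i][j] * y[j] for j in range(n)) - (sigma0 if i == 0 else 0.0)
       for i in range(n + 1)]
res_norm = sum(r * r for r in res) ** 0.5
assert abs(res_norm - abs(g[n])) < 1e-10
```

Because the rotations are orthogonal, they preserve the residual norm, so minimizing over the triangular part leaves exactly the last transformed right-hand-side entry as the residual.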

5. Generalized SYMMLQ and SYMMQR Methods

Now we generalize the SYMMLQ and SYMMQR methods.

Theorem 3. Suppose that E is an n × n symmetric positive definite (SPD) matrix and EA is an n × n symmetric matrix. One can generate E-orthonormal vectors w^(0), w^(1), ..., w^(n−2), w^(n−1) using this short-term recurrence:

    w̃^(j+1) = Aw^(j) − α_j w^(j) − β_j w^(j−1)   (0 ≤ j ≤ n − 2),
    w^(j+1) = (1/γ_{j+1}) w̃^(j+1),   γ_{j+1} = ⟨Ew̃^(j+1), w̃^(j+1)⟩^{1/2},      (60)

where

    α_j = ⟨EAw^(j), w^(j)⟩,   β_j = ⟨EAw^(j), w^(j−1)⟩.      (61)

Then the following properties hold for 0 ≤ i, j ≤ n − 1:

    ⟨w^(i), Ew^(j)⟩ = δ_ij,   β_j = γ_j.      (62)

Proof. We obtain

    β_j = ⟨EAw^(j), w^(j−1)⟩                                            (by (61))
        = ⟨w^(j), EAw^(j−1)⟩                                            ((EA)ᵀ = EA)
        = ⟨w^(j), E(γ_j w^(j) + α_{j−1} w^(j−1) + β_{j−1} w^(j−2))⟩     (by (60))
        = ⟨w^(j), γ_j Ew^(j)⟩ = γ_j,      (63)

since ⟨w^(i), Ew^(j)⟩ = δ_ij. As before, we let

    W_n = [w^(1), w^(2), ..., w^(n−1), w^(n)]   (n × n).      (64)

Moreover, we have

    AW_n = W_n T_n + γ_{n+1} w^(n+1) e_nᵀ,      (65)

where

    T_n = [ α_1  β_2                  ]
          [ γ_2  α_2  β_3             ]
          [      γ_3  α_3  ⋱          ]
          [           ⋱    ⋱    β_n   ]
          [              γ_n    α_n   ]   (n × n).      (66)

As before, we let

    x^(n) = x^(0) + W_n y^(n),   r^(n) = r^(0) − AW_n y^(n).      (67)

Imposing the Galerkin condition again, we have

    (Er^(n))ᵀ W_n = 0,   (EW_n)ᵀ r^(n) = 0,      (68)
    (EW_n)ᵀ r^(0) = (EW_n)ᵀ A W_n y^(n)   (by (67)).      (69)

We obtain

    T_n y^(n) = β_1 e_1,   β_1 = ⟨Er^(0), r^(0)⟩^{1/2},      (70)

because

    (EW_n)ᵀ A W_n = T_n,   (EW_n)ᵀ W_n = I,   (EW_n)ᵀ r^(0) = β_1 e_1.

Since T_n is symmetric, we can apply the same techniques as in the SYMMLQ method. Also, if E = I, the method reduces to the SYMMQR method.

6. Modified SYMMLQ Method

Next, we outline the modified SYMMLQ method.

Theorem 4. Suppose that E is an n × n symmetric (not necessarily positive definite) matrix and EA is an n × n symmetric matrix. One can generate vectors w^(0), w^(1), ..., w^(n−2), w^(n−1) using this short-term recurrence:

    w̃^(j+1) = Aw^(j) − α_j w^(j) − β_j w^(j−1)   (0 ≤ j ≤ n − 2),
    w^(j+1) = (1/γ_{j+1}) w̃^(j+1),   γ_{j+1} = |⟨Ew̃^(j+1), w̃^(j+1)⟩|^{1/2},      (71)

where

    α_j = ⟨EAw^(j), w^(j)⟩ / ⟨Ew^(j), w^(j)⟩,   β_j = ⟨EAw^(j), w^(j−1)⟩ / ⟨Ew^(j−1), w^(j−1)⟩.      (72)

Then the following properties hold for 0 ≤ j ≤ n − 2:

    d_{j+1} = ⟨Ew^(j+1), w^(j+1)⟩ = { +1   if ⟨Ew̃^(j+1), w̃^(j+1)⟩ > 0,
                                      −1   if ⟨Ew̃^(j+1), w̃^(j+1)⟩ < 0,      (73)

and for 0 ≤ i, j ≤ n − 1,

    ⟨Ew^(i), w^(j)⟩ = d_j δ_ij.      (74)
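The pseudo-orthogonality claimed in Theorem 4 can be checked numerically. A minimal self-contained sketch, assuming an arbitrary indefinite diagonal E and a matrix A built so that EA is symmetric; none of the numbers come from the paper.

```python
# Modified (indefinite-E) Lanczos recurrence of Theorem 4:
# <E w^(i), w^(j)> = d_j * delta_ij with d_j = +/- 1.

def matvec(Mat, v):
    return [sum(Mat[i][j] * v[j] for j in range(len(v))) for i in range(len(Mat))]

e = [1.0, -1.0, 2.0]                        # E = diag(e), symmetric indefinite
S = [[1.0, 2.0, 0.0],
     [2.0, 0.5, 1.0],
     [0.0, 1.0, -1.0]]                      # symmetric
A = [[S[i][j] / e[i] for j in range(3)] for i in range(3)]   # EA = S is symmetric

def Edot(u, v):                             # <E u, v>
    return sum(e[k] * u[k] * v[k] for k in range(3))

r0 = [1.0, 1.0, 1.0]
q = Edot(r0, r0)
d = [1.0 if q > 0 else -1.0]                # d_1 records the sign
W = [[x / abs(q) ** 0.5 for x in r0]]
for j in range(2):
    Aw = matvec(A, W[j])
    alpha = Edot(Aw, W[j]) / d[j]           # alpha = <EA w, w> / <E w, w>
    wt = [Aw[k] - alpha * W[j][k] for k in range(3)]
    if j > 0:
        beta = Edot(Aw, W[j - 1]) / d[j - 1]
        wt = [wt[k] - beta * W[j - 1][k] for k in range(3)]
    q = Edot(wt, wt)
    d.append(1.0 if q > 0 else -1.0)
    W.append([x / abs(q) ** 0.5 for x in wt])   # gamma = |<E w~, w~>|^(1/2)

# pseudo-orthonormality: <E w^(i), w^(j)> = d_i when i = j, and 0 otherwise
for i in range(3):
    for j in range(3):
        expected = d[i] if i == j else 0.0
        assert abs(Edot(W[i], W[j]) - expected) < 1e-8
```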

From Theorem 4, in matrix form, we obtain

    AW_n = W_n T_n + γ_{n+1} w^(n+1) e_nᵀ,      (75)

where

    W_n = [w^(1), w^(2), ..., w^(n−2), w^(n−1), w^(n)]   (n × n),

    T_n = [ α_1  β_2                  ]
          [ γ_2  α_2  β_3             ]
          [      γ_3  α_3  ⋱          ]
          [           ⋱    ⋱    β_n   ]
          [              γ_n    α_n   ]   (n × n).      (76)

Moreover, from Theorem 4, we obtain

    (EW_n)ᵀ W_n = diag(d_1, d_2, d_3, ..., d_{n−1}, d_n) = D_n   (n × n).      (77)

Then we have

    (EW_n)ᵀ A W_n = (EW_n)ᵀ W_n T_n + (EW_n)ᵀ (γ_{n+1} w^(n+1) e_nᵀ) = D_n T_n + 0.      (78)

Here the second term on the right-hand side is the zero matrix! In addition, we have

    x^(n) = x^(0) + W_n y^(n),   r^(n) = r^(0) − AW_n y^(n).      (79)

Imposing the Galerkin condition r^(n) ⊥ K_n(r^(0), A) as we did before, we obtain

    (Er^(n))ᵀ W_n = 0,   (EW_n)ᵀ r^(n) = 0,
    (EW_n)ᵀ r^(0) = (EW_n)ᵀ A W_n y^(n)   (by (79)),
    (EW_n)ᵀ r^(0) = D_n T_n y^(n)   (by (78)).      (80)

In other words, we use

    (EW_n)ᵀ A W_n = D_n T_n,   (EW_n)ᵀ W_n = D_n.      (81)

We obtain

    (EW_n)ᵀ r^(0) = β_1 e_1,      (82)

and hence

    D_n T_n y^(n) = β_1 e_1,   β_1 = |⟨Er^(0), r^(0)⟩|^{1/2}.      (83)

Here,

    D_n T_n = [ d_1α_1  d_1β_2                           ]
              [ d_2γ_2  d_2α_2  d_2β_3                   ]
              [         d_3γ_3  d_3α_3  ⋱                ]
              [                 ⋱       ⋱    d_{n−1}β_n  ]
              [                      d_nγ_n    d_nα_n    ]   (n × n).      (84)

We note that D_n T_n is symmetric: for 1 ≤ i ≤ n − 1,

    d_i β_{i+1} = d_i ⟨EAw^(i+1), w^(i)⟩ / ⟨Ew^(i), w^(i)⟩                              (by (72))
                = d_i ⟨w^(i+1), EAw^(i)⟩ / ⟨Ew^(i), w^(i)⟩                              ((EA)ᵀ = EA)
                = d_i ⟨w^(i+1), E(γ_{i+1} w^(i+1) + α_i w^(i) + β_i w^(i−1))⟩ / ⟨Ew^(i), w^(i)⟩
                = d_{i+1} γ_{i+1}                                                       (by (71)).      (85)
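The identities (78) and (85) can also be checked numerically: with the modified recurrence, (EW_n)ᵀ A W_n comes out symmetric and tridiagonal, which is the matrix D_n T_n. A self-contained sketch with arbitrary illustrative matrices (an indefinite diagonal E, and A chosen so that EA is symmetric):

```python
# Check that (EW)^T A W = D T is symmetric tridiagonal (indefinite E case).

def matvec(Mat, v):
    return [sum(Mat[i][j] * v[j] for j in range(len(v))) for i in range(len(Mat))]

e = [1.0, -1.0, 2.0]                        # E = diag(e), symmetric indefinite
S = [[1.0, 2.0, 0.0],
     [2.0, 0.5, 1.0],
     [0.0, 1.0, -1.0]]                      # symmetric
A = [[S[i][j] / e[i] for j in range(3)] for i in range(3)]   # EA = S is symmetric

def Edot(u, v):                             # <E u, v>
    return sum(e[k] * u[k] * v[k] for k in range(3))

# three steps of the recurrence from Theorem 4
r0 = [1.0, 1.0, 1.0]
q = Edot(r0, r0)
d = [1.0 if q > 0 else -1.0]
W = [[x / abs(q) ** 0.5 for x in r0]]
for j in range(2):
    Aw = matvec(A, W[j])
    alpha = Edot(Aw, W[j]) / d[j]
    wt = [Aw[k] - alpha * W[j][k] for k in range(3)]
    if j > 0:
        beta = Edot(Aw, W[j - 1]) / d[j - 1]
        wt = [wt[k] - beta * W[j - 1][k] for k in range(3)]
    q = Edot(wt, wt)
    d.append(1.0 if q > 0 else -1.0)
    W.append([x / abs(q) ** 0.5 for x in wt])

# M = (EW)^T A W: symmetric, as shown in (85), and tridiagonal, as in (78)
M = [[Edot(W[i], matvec(A, W[j])) for j in range(3)] for i in range(3)]
assert abs(M[0][1] - M[1][0]) < 1e-8        # symmetry: d_i beta_{i+1} = d_{i+1} gamma_{i+1}
assert abs(M[0][2]) < 1e-8                  # bandwidth: the term with w^(n+1) vanishes
```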

7. Lanczos/MSYMMLQ Method

Next, we consider this 2n × 2n double linear system:

    [ A   0  ] [ x ]   [ b ]
    [ 0   Aᵀ ] [ x̂ ] = [ b̂ ] .      (86)

We obtain the block symmetric matrices 𝒜, E, and E𝒜:

    𝒜 = [ A   0  ] ,   E = [ 0   I ] ,   E𝒜 = [ 0   Aᵀ ] .      (87)
        [ 0   Aᵀ ]         [ I   0 ]          [ A   0  ]

For example, the modified SYMMLQ method and the modified SYMMQR method can be applied to the double linear system (86). This leads us to the LAN/MSYMMLQ method and the LAN/MSYMMQR method. The pseudocodes for these methods are given in the following sections. For additional details, see Li [3]. See the books by Golub and Van Loan [4] and Saad [5], as well as the papers by Lanczos [6] and Kincaid et al. [7], among others.

8. MSYMMLQ Pseudocode

    r^(0) = b − Ax^(0)
    β_1 = |⟨Er^(0), r^(0)⟩|^{1/2};   w^(1) = (1/β_1) r^(0)
    d_1 = +1 if ⟨Er^(0), r^(0)⟩ > 0;   d_1 = −1 if ⟨Er^(0), r^(0)⟩ < 0
    ε_1 = ε_2 = 0;   s_0 = 0;   c_0 = −1
    α_1 = (1/d_1) ⟨EAw^(1), w^(1)⟩
    w̃^(2) = Aw^(1) − α_1 w^(1)
    β_2 = |⟨Ew̃^(2), w̃^(2)⟩|^{1/2};   w^(2) = (1/β_2) w̃^(2)
    d_2 = +1 if ⟨Ew̃^(2), w̃^(2)⟩ > 0;   d_2 = −1 if ⟨Ew̃^(2), w̃^(2)⟩ < 0
    ζ̂_1 = β_1/α_1;   v̄^(1) = w^(1);   x̄^(1) = x^(0) + ζ̂_1 v̄^(1)
    c_1 = d_1 α_1/(α_1² + β_2²)^{1/2};   s_1 = d_1 β_2/(α_1² + β_2²)^{1/2}
    γ_1 = (d_1 α_1) c_1 + (d_1 β_2) s_1;   ζ_1 = β_1/γ_1
    v^(1) = c_1 v̄^(1) + s_1 w^(2)
    x^(1) = x^(0) + ζ_1 v^(1)      (88)

    for i = 2, 3, ..., n
        α_i = (1/d_i) ⟨EAw^(i), w^(i)⟩;   β_i = (1/d_{i−1}) ⟨EAw^(i), w^(i−1)⟩
        w̃^(i+1) = Aw^(i) − α_i w^(i) − β_i w^(i−1)
        β_{i+1} = |⟨Ew̃^(i+1), w̃^(i+1)⟩|^{1/2};   w^(i+1) = (1/β_{i+1}) w̃^(i+1)
        d_{i+1} = +1 if ⟨Ew̃^(i+1), w̃^(i+1)⟩ > 0;   d_{i+1} = −1 if ⟨Ew̃^(i+1), w̃^(i+1)⟩ < 0
        ε_i = (d_{i−1} β_i) s_{i−2};   δ_i = −(d_{i−1} β_i) c_{i−2}
        hh = δ_i;   δ_i = (hh) c_{i−1} + (d_i α_i) s_{i−1};   γ̄_i = (hh) s_{i−1} − (d_i α_i) c_{i−1}
        ζ̂_i = (1/γ̄_i)(−ε_i ζ_{i−2} − δ_i ζ_{i−1})
        v̄^(i) = s_{i−1} v̄^(i−1) − c_{i−1} w^(i)
        x̄^(i) = x^(i−1) + ζ̂_i v̄^(i)
        γ_i = (γ̄_i² + β_{i+1}²)^{1/2}
        c_i = d_{i+1} (γ̄_i/γ_i);   s_i = d_{i+1} (β_{i+1}/γ_i)
        ζ_i = (1/γ_i)(−ε_i ζ_{i−2} − δ_i ζ_{i−1})
        v^(i) = c_i v̄^(i) + s_i w^(i+1)
        x^(i) = x^(i−1) + ζ_i v^(i)
    end for      (89)
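The block structure in (86)-(87) can be verified directly: for a nonsymmetric A, the product E𝒜 of the doubled system is symmetric, which is what lets the modified methods apply. A minimal sketch with an arbitrary 2 × 2 nonsymmetric A:

```python
# Build the block matrices of the doubled system and check that E*A_block
# is symmetric even though A itself is not.

A = [[1.0, 2.0],
     [3.0, 4.0]]                          # nonsymmetric
n = 2

def zeros(r, c):
    return [[0.0] * c for _ in range(r)]

# A_block = [[A, 0], [0, A^T]],  E = [[0, I], [I, 0]]
Ab = zeros(2 * n, 2 * n)
E = zeros(2 * n, 2 * n)
for i in range(n):
    for j in range(n):
        Ab[i][j] = A[i][j]
        Ab[n + i][n + j] = A[j][i]        # A^T in the lower-right block
    E[i][n + i] = 1.0
    E[n + i][i] = 1.0

# E * A_block = [[0, A^T], [A, 0]], which is symmetric
EA = [[sum(E[i][k] * Ab[k][j] for k in range(2 * n)) for j in range(2 * n)]
      for i in range(2 * n)]
for i in range(2 * n):
    for j in range(2 * n):
        assert EA[i][j] == EA[j][i]
```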

9. MSYMMQR Pseudocode

    r^(0) = b − Ax^(0)
    β_1 = |⟨Er^(0), r^(0)⟩|^{1/2};   w^(1) = (1/β_1) r^(0)
    d_1 = +1 if ⟨Er^(0), r^(0)⟩ > 0;   d_1 = −1 if ⟨Er^(0), r^(0)⟩ < 0
    ε_1 = ε_2 = 0;   s_0 = 0;   c_0 = −1
    α_1 = (1/d_1) ⟨EAw^(1), w^(1)⟩
    w̃^(2) = Aw^(1) − α_1 w^(1)
    β_2 = |⟨Ew̃^(2), w̃^(2)⟩|^{1/2};   w^(2) = (1/β_2) w̃^(2)
    d_2 = +1 if ⟨Ew̃^(2), w̃^(2)⟩ > 0;   d_2 = −1 if ⟨Ew̃^(2), w̃^(2)⟩ < 0
    ζ̂_1 = β_1 d_1;   p̄^(1) = (1/(α_1 d_1)) w^(1)
    x̄^(1) = x^(0) + ζ̂_1 p̄^(1)
    c_1 = d_1 α_1/(α_1² + β_2²)^{1/2};   s_1 = d_1 β_2/(α_1² + β_2²)^{1/2}
    γ_1 = (d_1 α_1) c_1 − (d_1 β_2) s_1;   ζ_1 = c_1 ζ̂_1
    p^(1) = (1/γ_1) w^(1)
    x^(1) = x^(0) + ζ_1 p^(1)      (90)

    for i = 2, 3, ..., n
        α_i = (1/d_i) ⟨EAw^(i), w^(i)⟩;   β_i = (1/d_{i−1}) ⟨EAw^(i), w^(i−1)⟩
        w̃^(i+1) = Aw^(i) − α_i w^(i) − β_i w^(i−1)
        β_{i+1} = |⟨Ew̃^(i+1), w̃^(i+1)⟩|^{1/2};   w^(i+1) = (1/β_{i+1}) w̃^(i+1)
        d_{i+1} = +1 if ⟨Ew̃^(i+1), w̃^(i+1)⟩ > 0;   d_{i+1} = −1 if ⟨Ew̃^(i+1), w̃^(i+1)⟩ < 0
        ε_i = −(d_{i−1} β_i) s_{i−2};   δ_i = −(d_{i−1} β_i) c_{i−2}
        hh = δ_i;   δ_i = (hh) c_{i−1} − (d_i α_i) s_{i−1};   γ̄_i = (hh) s_{i−1} + (d_i α_i) c_{i−1}
        ζ̂_i = −s_{i−1} ζ̂_{i−1}
        p̄^(i) = (1/γ̄_i)(w^(i) − ε_i p^(i−2) − δ_i p^(i−1))
        x̄^(i) = x^(i−1) + ζ̂_i p̄^(i)
        γ_i = (γ̄_i² + β_{i+1}²)^{1/2}
        c_i = γ̄_i/γ_i;   s_i = (1/γ_i)(d_{i+1} β_{i+1})
        ζ_i = c_i ζ̂_i
        p^(i) = (1/γ_i)(w^(i) − ε_i p^(i−2) − δ_i p^(i−1))
        x^(i) = x^(i−1) + ζ_i p^(i)
    end for      (91)

10. LAN/MSYMMLQ Pseudocode

    r^(0) = b − Ax^(0);   r̂^(0) = b̂ − Aᵀx̂^(0)
    β_1 = |2⟨r̂^(0), r^(0)⟩|^{1/2}
    w^(1) = (1/β_1) r^(0);   ŵ^(1) = (1/β_1) r̂^(0)
    d_1 = +1 if 2⟨r̂^(0), r^(0)⟩ > 0;   d_1 = −1 if 2⟨r̂^(0), r^(0)⟩ < 0
    ε_1 = ε_2 = 0;   s_0 = 0;   c_0 = −1
    α_1 = (2/d_1) ⟨Aw^(1), ŵ^(1)⟩
    w̃^(2) = Aw^(1) − α_1 w^(1);   ŵ̃^(2) = Aᵀŵ^(1) − α_1 ŵ^(1)
    β_2 = |2⟨ŵ̃^(2), w̃^(2)⟩|^{1/2}

    w^(2) = (1/β_2) w̃^(2);   ŵ^(2) = (1/β_2) ŵ̃^(2)
    d_2 = +1 if 2⟨ŵ̃^(2), w̃^(2)⟩ > 0;   d_2 = −1 if 2⟨ŵ̃^(2), w̃^(2)⟩ < 0
    ζ̂_1 = β_1/α_1;   v̄^(1) = w^(1);   x̄^(1) = x^(0) + ζ̂_1 v̄^(1)
    c_1 = d_1 α_1/(α_1² + β_2²)^{1/2};   s_1 = d_1 β_2/(α_1² + β_2²)^{1/2}
    γ_1 = (d_1 α_1) c_1 + (d_1 β_2) s_1;   ζ_1 = β_1/γ_1
    v^(1) = c_1 v̄^(1) + s_1 w^(2)
    x^(1) = x^(0) + ζ_1 v^(1)      (92)

    for i = 2, 3, ..., n
        α_i = (2/d_i) ⟨Aw^(i), ŵ^(i)⟩
        β_i = (1/d_{i−1}) (⟨Aw^(i), ŵ^(i−1)⟩ + ⟨ŵ^(i), Aw^(i−1)⟩)
        w̃^(i+1) = Aw^(i) − α_i w^(i) − β_i w^(i−1)
        ŵ̃^(i+1) = Aᵀŵ^(i) − α_i ŵ^(i) − β_i ŵ^(i−1)
        β_{i+1} = |2⟨ŵ̃^(i+1), w̃^(i+1)⟩|^{1/2}
        w^(i+1) = (1/β_{i+1}) w̃^(i+1);   ŵ^(i+1) = (1/β_{i+1}) ŵ̃^(i+1)
        d_{i+1} = +1 if 2⟨ŵ̃^(i+1), w̃^(i+1)⟩ > 0;   d_{i+1} = −1 if 2⟨ŵ̃^(i+1), w̃^(i+1)⟩ < 0
        ε_i = (d_{i−1} β_i) s_{i−2};   δ_i = −(d_{i−1} β_i) c_{i−2}
        hh = δ_i;   δ_i = (hh) c_{i−1} + (d_i α_i) s_{i−1};   γ̄_i = (hh) s_{i−1} − (d_i α_i) c_{i−1}
        ζ̂_i = (1/γ̄_i)(−ε_i ζ_{i−2} − δ_i ζ_{i−1})
        v̄^(i) = s_{i−1} v̄^(i−1) − c_{i−1} w^(i)
        x̄^(i) = x^(i−1) + ζ̂_i v̄^(i)
        γ_i = (γ̄_i² + β_{i+1}²)^{1/2}
        c_i = d_{i+1} (γ̄_i/γ_i);   s_i = d_{i+1} (β_{i+1}/γ_i)
        ζ_i = (1/γ_i)(−ε_i ζ_{i−2} − δ_i ζ_{i−1})
        v^(i) = c_i v̄^(i) + s_i w^(i+1)
        x^(i) = x^(i−1) + ζ_i v^(i)
    end for      (93)

11. LAN/MSYMMQR Pseudocode

    r^(0) = b − Ax^(0);   r̂^(0) = b̂ − Aᵀx̂^(0)
    β_1 = |2⟨r̂^(0), r^(0)⟩|^{1/2}
    w^(1) = (1/β_1) r^(0);   ŵ^(1) = (1/β_1) r̂^(0)
    d_1 = +1 if 2⟨r̂^(0), r^(0)⟩ > 0;   d_1 = −1 if 2⟨r̂^(0), r^(0)⟩ < 0
    ε_1 = ε_2 = 0;   s_0 = 0;   c_0 = −1
    α_1 = (2/d_1) ⟨Aw^(1), ŵ^(1)⟩
    w̃^(2) = Aw^(1) − α_1 w^(1);   ŵ̃^(2) = Aᵀŵ^(1) − α_1 ŵ^(1)
    β_2 = |2⟨ŵ̃^(2), w̃^(2)⟩|^{1/2}
    w^(2) = (1/β_2) w̃^(2);   ŵ^(2) = (1/β_2) ŵ̃^(2)
    d_2 = +1 if 2⟨ŵ̃^(2), w̃^(2)⟩ > 0;   d_2 = −1 if 2⟨ŵ̃^(2), w̃^(2)⟩ < 0
    p̄^(1) = (1/(α_1 d_1)) w^(1)

    ζ̂_1 = β_1 d_1;   x̄^(1) = x^(0) + ζ̂_1 p̄^(1)
    c_1 = d_1 α_1/(α_1² + β_2²)^{1/2};   s_1 = d_1 β_2/(α_1² + β_2²)^{1/2}
    γ_1 = (d_1 α_1) c_1 − (d_1 β_2) s_1;   ζ_1 = c_1 ζ̂_1
    p^(1) = (1/γ_1) w^(1)
    x^(1) = x^(0) + ζ_1 p^(1)      (94)

    for i = 2, 3, ..., n
        α_i = (2/d_i) ⟨Aw^(i), ŵ^(i)⟩
        β_i = (1/d_{i−1}) (⟨Aw^(i), ŵ^(i−1)⟩ + ⟨ŵ^(i), Aw^(i−1)⟩)
        w̃^(i+1) = Aw^(i) − α_i w^(i) − β_i w^(i−1)
        ŵ̃^(i+1) = Aᵀŵ^(i) − α_i ŵ^(i) − β_i ŵ^(i−1)
        β_{i+1} = |2⟨ŵ̃^(i+1), w̃^(i+1)⟩|^{1/2}
        w^(i+1) = (1/β_{i+1}) w̃^(i+1);   ŵ^(i+1) = (1/β_{i+1}) ŵ̃^(i+1)
        d_{i+1} = +1 if 2⟨ŵ̃^(i+1), w̃^(i+1)⟩ > 0;   d_{i+1} = −1 if 2⟨ŵ̃^(i+1), w̃^(i+1)⟩ < 0
        ε_i = −(d_{i−1} β_i) s_{i−2};   δ_i = −(d_{i−1} β_i) c_{i−2}
        hh = δ_i;   δ_i = (hh) c_{i−1} − (d_i α_i) s_{i−1};   γ̄_i = (hh) s_{i−1} + (d_i α_i) c_{i−1}
        ζ̂_i = −s_{i−1} ζ̂_{i−1}
        p̄^(i) = (1/γ̄_i)(w^(i) − ε_i p^(i−2) − δ_i p^(i−1))
        x̄^(i) = x^(i−1) + ζ̂_i p̄^(i)
        γ_i = (γ̄_i² + β_{i+1}²)^{1/2}
        c_i = γ̄_i/γ_i;   s_i = (1/γ_i)(d_{i+1} β_{i+1})
        ζ_i = c_i ζ̂_i
        p^(i) = (1/γ_i)(w^(i) − ε_i p^(i−2) − δ_i p^(i−1))
        x^(i) = x^(i−1) + ζ_i p^(i)
    end for      (95)

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] C. C. Paige and M. A. Saunders, "Solution of sparse indefinite systems of linear equations," SIAM Journal on Numerical Analysis, vol. 12, no. 4, pp. 617–629, 1975.
[2] Y. Saad and M. H. Schultz, "GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems," SIAM Journal on Scientific and Statistical Computing, vol. 7, no. 3, pp. 856–869, 1986.
[3] Y. Li, The Modified MINRES Methods for Solving Large Sparse Nonsymmetric Linear Systems, M.S. thesis, Department of Mathematics, National Kaohsiung Normal University, Kaohsiung, Taiwan.
[4] G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, Md, USA, 3rd edition, 1996.
[5] Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, 2nd edition, 2003.
[6] C. Lanczos, "An iteration method for the solution of the eigenvalue problem of linear differential and integral operators," Journal of Research of the National Bureau of Standards, vol. 45, no. 4, pp. 255–282, 1950.
[7] D. R. Kincaid, D. M. Young, and J.-Y. Chen, "An overview of MGMRES and LAN/MGMRES methods for solving nonsymmetric linear systems," Taiwanese Journal of Mathematics, vol. 14, no. 3.


More information

Research Article MINRES Seed Projection Methods for Solving Symmetric Linear Systems with Multiple Right-Hand Sides

Research Article MINRES Seed Projection Methods for Solving Symmetric Linear Systems with Multiple Right-Hand Sides Mathematical Problems in Engineering, Article ID 357874, 6 pages http://dx.doi.org/10.1155/2014/357874 Research Article MINRES Seed Projection Methods for Solving Symmetric Linear Systems with Multiple

More information

Reduced Synchronization Overhead on. December 3, Abstract. The standard formulation of the conjugate gradient algorithm involves

Reduced Synchronization Overhead on. December 3, Abstract. The standard formulation of the conjugate gradient algorithm involves Lapack Working Note 56 Conjugate Gradient Algorithms with Reduced Synchronization Overhead on Distributed Memory Multiprocessors E. F. D'Azevedo y, V.L. Eijkhout z, C. H. Romine y December 3, 1999 Abstract

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf-Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2018/19 Part 4: Iterative Methods PD

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 3: Iterative Methods PD

More information

Lecture 9: Krylov Subspace Methods. 2 Derivation of the Conjugate Gradient Algorithm

Lecture 9: Krylov Subspace Methods. 2 Derivation of the Conjugate Gradient Algorithm CS 622 Data-Sparse Matrix Computations September 19, 217 Lecture 9: Krylov Subspace Methods Lecturer: Anil Damle Scribes: David Eriksson, Marc Aurele Gilles, Ariah Klages-Mundt, Sophia Novitzky 1 Introduction

More information

The Conjugate Gradient Method for Solving Linear Systems of Equations

The Conjugate Gradient Method for Solving Linear Systems of Equations The Conjugate Gradient Method for Solving Linear Systems of Equations Mike Rambo Mentor: Hans de Moor May 2016 Department of Mathematics, Saint Mary s College of California Contents 1 Introduction 2 2

More information

Lab 1: Iterative Methods for Solving Linear Systems

Lab 1: Iterative Methods for Solving Linear Systems Lab 1: Iterative Methods for Solving Linear Systems January 22, 2017 Introduction Many real world applications require the solution to very large and sparse linear systems where direct methods such as

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 1: Course Overview & Matrix-Vector Multiplication Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 20 Outline 1 Course

More information

Introduction to Iterative Solvers of Linear Systems

Introduction to Iterative Solvers of Linear Systems Introduction to Iterative Solvers of Linear Systems SFB Training Event January 2012 Prof. Dr. Andreas Frommer Typeset by Lukas Krämer, Simon-Wolfgang Mages and Rudolf Rödl 1 Classes of Matrices and their

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 21: Sensitivity of Eigenvalues and Eigenvectors; Conjugate Gradient Method Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis

More information

6.4 Krylov Subspaces and Conjugate Gradients

6.4 Krylov Subspaces and Conjugate Gradients 6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P

More information

Research Article A Rapid Numerical Algorithm to Compute Matrix Inversion

Research Article A Rapid Numerical Algorithm to Compute Matrix Inversion International Mathematics and Mathematical Sciences Volume 12, Article ID 134653, 11 pages doi:.1155/12/134653 Research Article A Rapid Numerical Algorithm to Compute Matrix Inversion F. Soleymani Department

More information

1 Conjugate gradients

1 Conjugate gradients Notes for 2016-11-18 1 Conjugate gradients We now turn to the method of conjugate gradients (CG), perhaps the best known of the Krylov subspace solvers. The CG iteration can be characterized as the iteration

More information

Krylov Subspace Methods that Are Based on the Minimization of the Residual

Krylov Subspace Methods that Are Based on the Minimization of the Residual Chapter 5 Krylov Subspace Methods that Are Based on the Minimization of the Residual Remark 51 Goal he goal of these methods consists in determining x k x 0 +K k r 0,A such that the corresponding Euclidean

More information

SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS. Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA

SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS. Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA 1 SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA 2 OUTLINE Sparse matrix storage format Basic factorization

More information

Research Article Nonlinear Conjugate Gradient Methods with Wolfe Type Line Search

Research Article Nonlinear Conjugate Gradient Methods with Wolfe Type Line Search Abstract and Applied Analysis Volume 013, Article ID 74815, 5 pages http://dx.doi.org/10.1155/013/74815 Research Article Nonlinear Conjugate Gradient Methods with Wolfe Type Line Search Yuan-Yuan Chen

More information

A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations

A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations Jin Yun Yuan Plamen Y. Yalamov Abstract A method is presented to make a given matrix strictly diagonally dominant

More information

On the loss of orthogonality in the Gram-Schmidt orthogonalization process

On the loss of orthogonality in the Gram-Schmidt orthogonalization process CERFACS Technical Report No. TR/PA/03/25 Luc Giraud Julien Langou Miroslav Rozložník On the loss of orthogonality in the Gram-Schmidt orthogonalization process Abstract. In this paper we study numerical

More information

Research Article A Generalization of a Class of Matrices: Analytic Inverse and Determinant

Research Article A Generalization of a Class of Matrices: Analytic Inverse and Determinant Advances in Numerical Analysis Volume 2011, Article ID 593548, 6 pages doi:10.1155/2011/593548 Research Article A Generalization of a Class of Matrices: Analytic Inverse and Determinant F. N. Valvi Department

More information

A stable variant of Simpler GMRES and GCR

A stable variant of Simpler GMRES and GCR A stable variant of Simpler GMRES and GCR Miroslav Rozložník joint work with Pavel Jiránek and Martin H. Gutknecht Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic miro@cs.cas.cz,

More information

Chebyshev semi-iteration in Preconditioning

Chebyshev semi-iteration in Preconditioning Report no. 08/14 Chebyshev semi-iteration in Preconditioning Andrew J. Wathen Oxford University Computing Laboratory Tyrone Rees Oxford University Computing Laboratory Dedicated to Victor Pereyra on his

More information

Research Article New Examples of Einstein Metrics in Dimension Four

Research Article New Examples of Einstein Metrics in Dimension Four International Mathematics and Mathematical Sciences Volume 2010, Article ID 716035, 9 pages doi:10.1155/2010/716035 Research Article New Examples of Einstein Metrics in Dimension Four Ryad Ghanam Department

More information

ITERATIVE REGULARIZATION WITH MINIMUM-RESIDUAL METHODS

ITERATIVE REGULARIZATION WITH MINIMUM-RESIDUAL METHODS BIT Numerical Mathematics 6-3835/3/431-1 $16. 23, Vol. 43, No. 1, pp. 1 18 c Kluwer Academic Publishers ITERATIVE REGULARIZATION WITH MINIMUM-RESIDUAL METHODS T. K. JENSEN and P. C. HANSEN Informatics

More information

Parallel Iterative Methods for Sparse Linear Systems. H. Martin Bücker Lehrstuhl für Hochleistungsrechnen

Parallel Iterative Methods for Sparse Linear Systems. H. Martin Bücker Lehrstuhl für Hochleistungsrechnen Parallel Iterative Methods for Sparse Linear Systems Lehrstuhl für Hochleistungsrechnen www.sc.rwth-aachen.de RWTH Aachen Large and Sparse Small and Dense Outline Problem with Direct Methods Iterative

More information

The rate of convergence of the GMRES method

The rate of convergence of the GMRES method The rate of convergence of the GMRES method Report 90-77 C. Vuik Technische Universiteit Delft Delft University of Technology Faculteit der Technische Wiskunde en Informatica Faculty of Technical Mathematics

More information

Lecture 11: CMSC 878R/AMSC698R. Iterative Methods An introduction. Outline. Inverse, LU decomposition, Cholesky, SVD, etc.

Lecture 11: CMSC 878R/AMSC698R. Iterative Methods An introduction. Outline. Inverse, LU decomposition, Cholesky, SVD, etc. Lecture 11: CMSC 878R/AMSC698R Iterative Methods An introduction Outline Direct Solution of Linear Systems Inverse, LU decomposition, Cholesky, SVD, etc. Iterative methods for linear systems Why? Matrix

More information

Lecture 18 Classical Iterative Methods

Lecture 18 Classical Iterative Methods Lecture 18 Classical Iterative Methods MIT 18.335J / 6.337J Introduction to Numerical Methods Per-Olof Persson November 14, 2006 1 Iterative Methods for Linear Systems Direct methods for solving Ax = b,

More information

Chapter 7 Iterative Techniques in Matrix Algebra

Chapter 7 Iterative Techniques in Matrix Algebra Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT 16-02 The Induced Dimension Reduction method applied to convection-diffusion-reaction problems R. Astudillo and M. B. van Gijzen ISSN 1389-6520 Reports of the Delft

More information

Research Article The Laplace Likelihood Ratio Test for Heteroscedasticity

Research Article The Laplace Likelihood Ratio Test for Heteroscedasticity International Mathematics and Mathematical Sciences Volume 2011, Article ID 249564, 7 pages doi:10.1155/2011/249564 Research Article The Laplace Likelihood Ratio Test for Heteroscedasticity J. Martin van

More information

Research Article A Two-Step Matrix-Free Secant Method for Solving Large-Scale Systems of Nonlinear Equations

Research Article A Two-Step Matrix-Free Secant Method for Solving Large-Scale Systems of Nonlinear Equations Applied Mathematics Volume 2012, Article ID 348654, 9 pages doi:10.1155/2012/348654 Research Article A Two-Step Matrix-Free Secant Method for Solving Large-Scale Systems of Nonlinear Equations M. Y. Waziri,

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 19: More on Arnoldi Iteration; Lanczos Iteration Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 17 Outline 1

More information

Research Article Attracting Periodic Cycles for an Optimal Fourth-Order Nonlinear Solver

Research Article Attracting Periodic Cycles for an Optimal Fourth-Order Nonlinear Solver Abstract and Applied Analysis Volume 01, Article ID 63893, 8 pages doi:10.1155/01/63893 Research Article Attracting Periodic Cycles for an Optimal Fourth-Order Nonlinear Solver Mi Young Lee and Changbum

More information

On Solving Large Algebraic. Riccati Matrix Equations

On Solving Large Algebraic. Riccati Matrix Equations International Mathematical Forum, 5, 2010, no. 33, 1637-1644 On Solving Large Algebraic Riccati Matrix Equations Amer Kaabi Department of Basic Science Khoramshahr Marine Science and Technology University

More information

On the Superlinear Convergence of MINRES. Valeria Simoncini and Daniel B. Szyld. Report January 2012

On the Superlinear Convergence of MINRES. Valeria Simoncini and Daniel B. Szyld. Report January 2012 On the Superlinear Convergence of MINRES Valeria Simoncini and Daniel B. Szyld Report 12-01-11 January 2012 This report is available in the World Wide Web at http://www.math.temple.edu/~szyld 0 Chapter

More information

Augmented GMRES-type methods

Augmented GMRES-type methods Augmented GMRES-type methods James Baglama 1 and Lothar Reichel 2, 1 Department of Mathematics, University of Rhode Island, Kingston, RI 02881. E-mail: jbaglama@math.uri.edu. Home page: http://hypatia.math.uri.edu/

More information

Research Article Constrained Solutions of a System of Matrix Equations

Research Article Constrained Solutions of a System of Matrix Equations Journal of Applied Mathematics Volume 2012, Article ID 471573, 19 pages doi:10.1155/2012/471573 Research Article Constrained Solutions of a System of Matrix Equations Qing-Wen Wang 1 and Juan Yu 1, 2 1

More information

Eigenvalue Problems CHAPTER 1 : PRELIMINARIES

Eigenvalue Problems CHAPTER 1 : PRELIMINARIES Eigenvalue Problems CHAPTER 1 : PRELIMINARIES Heinrich Voss voss@tu-harburg.de Hamburg University of Technology Institute of Mathematics TUHH Heinrich Voss Preliminaries Eigenvalue problems 2012 1 / 14

More information

Research Article Diagonally Implicit Block Backward Differentiation Formulas for Solving Ordinary Differential Equations

Research Article Diagonally Implicit Block Backward Differentiation Formulas for Solving Ordinary Differential Equations International Mathematics and Mathematical Sciences Volume 212, Article ID 767328, 8 pages doi:1.1155/212/767328 Research Article Diagonally Implicit Block Backward Differentiation Formulas for Solving

More information

Recent computational developments in Krylov Subspace Methods for linear systems. Valeria Simoncini and Daniel B. Szyld

Recent computational developments in Krylov Subspace Methods for linear systems. Valeria Simoncini and Daniel B. Szyld Recent computational developments in Krylov Subspace Methods for linear systems Valeria Simoncini and Daniel B. Szyld A later version appeared in : Numerical Linear Algebra w/appl., 2007, v. 14(1), pp.1-59.

More information

Research Article Solution of Fuzzy Matrix Equation System

Research Article Solution of Fuzzy Matrix Equation System International Mathematics and Mathematical Sciences Volume 2012 Article ID 713617 8 pages doi:101155/2012/713617 Research Article Solution of Fuzzy Matrix Equation System Mahmood Otadi and Maryam Mosleh

More information

Peter Deuhard. for Symmetric Indenite Linear Systems

Peter Deuhard. for Symmetric Indenite Linear Systems Peter Deuhard A Study of Lanczos{Type Iterations for Symmetric Indenite Linear Systems Preprint SC 93{6 (March 993) Contents 0. Introduction. Basic Recursive Structure 2. Algorithm Design Principles 7

More information

Iterative Methods for Sparse Linear Systems

Iterative Methods for Sparse Linear Systems Iterative Methods for Sparse Linear Systems Luca Bergamaschi e-mail: berga@dmsa.unipd.it - http://www.dmsa.unipd.it/ berga Department of Mathematical Methods and Models for Scientific Applications University

More information

Preconditioned GMRES Revisited

Preconditioned GMRES Revisited Preconditioned GMRES Revisited Roland Herzog Kirk Soodhalter UBC (visiting) RICAM Linz Preconditioning Conference 2017 Vancouver August 01, 2017 Preconditioned GMRES Revisited Vancouver 1 / 32 Table of

More information

Numerical Methods for Solving Large Scale Eigenvalue Problems

Numerical Methods for Solving Large Scale Eigenvalue Problems Peter Arbenz Computer Science Department, ETH Zürich E-mail: arbenz@inf.ethz.ch arge scale eigenvalue problems, Lecture 2, February 28, 2018 1/46 Numerical Methods for Solving Large Scale Eigenvalue Problems

More information

Comparison of Fixed Point Methods and Krylov Subspace Methods Solving Convection-Diffusion Equations

Comparison of Fixed Point Methods and Krylov Subspace Methods Solving Convection-Diffusion Equations American Journal of Computational Mathematics, 5, 5, 3-6 Published Online June 5 in SciRes. http://www.scirp.org/journal/ajcm http://dx.doi.org/.436/ajcm.5.5 Comparison of Fixed Point Methods and Krylov

More information

Research Article A Note on Kantorovich Inequality for Hermite Matrices

Research Article A Note on Kantorovich Inequality for Hermite Matrices Hindawi Publishing Corporation Journal of Inequalities and Applications Volume 0, Article ID 5767, 6 pages doi:0.55/0/5767 Research Article A Note on Kantorovich Inequality for Hermite Matrices Zhibing

More information

Charles University Faculty of Mathematics and Physics DOCTORAL THESIS. Krylov subspace approximations in linear algebraic problems

Charles University Faculty of Mathematics and Physics DOCTORAL THESIS. Krylov subspace approximations in linear algebraic problems Charles University Faculty of Mathematics and Physics DOCTORAL THESIS Iveta Hnětynková Krylov subspace approximations in linear algebraic problems Department of Numerical Mathematics Supervisor: Doc. RNDr.

More information

The solution of the discretized incompressible Navier-Stokes equations with iterative methods

The solution of the discretized incompressible Navier-Stokes equations with iterative methods The solution of the discretized incompressible Navier-Stokes equations with iterative methods Report 93-54 C. Vuik Technische Universiteit Delft Delft University of Technology Faculteit der Technische

More information

Variants of BiCGSafe method using shadow three-term recurrence

Variants of BiCGSafe method using shadow three-term recurrence Variants of BiCGSafe method using shadow three-term recurrence Seiji Fujino, Takashi Sekimoto Research Institute for Information Technology, Kyushu University Ryoyu Systems Co. Ltd. (Kyushu University)

More information

Research Article A Two-Grid Method for Finite Element Solutions of Nonlinear Parabolic Equations

Research Article A Two-Grid Method for Finite Element Solutions of Nonlinear Parabolic Equations Abstract and Applied Analysis Volume 212, Article ID 391918, 11 pages doi:1.1155/212/391918 Research Article A Two-Grid Method for Finite Element Solutions of Nonlinear Parabolic Equations Chuanjun Chen

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT 06-05 Solution of the incompressible Navier Stokes equations with preconditioned Krylov subspace methods M. ur Rehman, C. Vuik G. Segal ISSN 1389-6520 Reports of the

More information

KRYLOV SUBSPACE ITERATION

KRYLOV SUBSPACE ITERATION KRYLOV SUBSPACE ITERATION Presented by: Nab Raj Roshyara Master and Ph.D. Student Supervisors: Prof. Dr. Peter Benner, Department of Mathematics, TU Chemnitz and Dipl.-Geophys. Thomas Günther 1. Februar

More information

Main matrix factorizations

Main matrix factorizations Main matrix factorizations A P L U P permutation matrix, L lower triangular, U upper triangular Key use: Solve square linear system Ax b. A Q R Q unitary, R upper triangular Key use: Solve square or overdetrmined

More information

Some minimization problems

Some minimization problems Week 13: Wednesday, Nov 14 Some minimization problems Last time, we sketched the following two-step strategy for approximating the solution to linear systems via Krylov subspaces: 1. Build a sequence of

More information

Since the early 1800s, researchers have

Since the early 1800s, researchers have the Top T HEME ARTICLE KRYLOV SUBSPACE ITERATION This survey article reviews the history and current importance of Krylov subspace iteration algorithms. 1521-9615/00/$10.00 2000 IEEE HENK A. VAN DER VORST

More information

Research Article Strong Convergence of Parallel Iterative Algorithm with Mean Errors for Two Finite Families of Ćirić Quasi-Contractive Operators

Research Article Strong Convergence of Parallel Iterative Algorithm with Mean Errors for Two Finite Families of Ćirić Quasi-Contractive Operators Abstract and Applied Analysis Volume 01, Article ID 66547, 10 pages doi:10.1155/01/66547 Research Article Strong Convergence of Parallel Iterative Algorithm with Mean Errors for Two Finite Families of

More information

Research Article Some New Fixed-Point Theorems for a (ψ, φ)-pair Meir-Keeler-Type Set-Valued Contraction Map in Complete Metric Spaces

Research Article Some New Fixed-Point Theorems for a (ψ, φ)-pair Meir-Keeler-Type Set-Valued Contraction Map in Complete Metric Spaces Applied Mathematics Volume 2011, Article ID 327941, 11 pages doi:10.1155/2011/327941 Research Article Some New Fixed-Point Theorems for a (ψ, φ)-pair Meir-Keeler-Type Set-Valued Contraction Map in Complete

More information

1 Extrapolation: A Hint of Things to Come

1 Extrapolation: A Hint of Things to Come Notes for 2017-03-24 1 Extrapolation: A Hint of Things to Come Stationary iterations are simple. Methods like Jacobi or Gauss-Seidel are easy to program, and it s (relatively) easy to analyze their convergence.

More information

PETROV-GALERKIN METHODS

PETROV-GALERKIN METHODS Chapter 7 PETROV-GALERKIN METHODS 7.1 Energy Norm Minimization 7.2 Residual Norm Minimization 7.3 General Projection Methods 7.1 Energy Norm Minimization Saad, Sections 5.3.1, 5.2.1a. 7.1.1 Methods based

More information