Journal of Computational and Applied Mathematics. Optimization of the parameterized Uzawa preconditioners for saddle point matrices


Journal of Computational and Applied Mathematics 226 (2009) 136–154

Contents lists available at ScienceDirect: Journal of Computational and Applied Mathematics. Journal homepage: www.elsevier.com/locate/cam

Optimization of the parameterized Uzawa preconditioners for saddle point matrices

Zeng-Qi Wang

State Key Laboratory of Scientific/Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, P.O. Box 2719, Beijing, P.R. China

Article history: Received 1 December 2007; received in revised form 6 February 2008.
Keywords: Saddle point problem; Parameterized Uzawa preconditioner; Optimal parameter.

Abstract. The parameterized Uzawa preconditioners for saddle point problems are studied in this paper. The eigenvalues of the preconditioned matrix are located in (0, 2) by choosing suitable parameters. Furthermore, we give two strategies to optimize the rate of convergence by finding suitable values of the parameters. Numerical computations show that the parameterized Uzawa preconditioners can lead to practical and effective preconditioned GMRES methods for solving saddle point problems. © 2008 Elsevier B.V. All rights reserved.

1. Introduction

Let A ∈ R^{m×m} be a symmetric positive definite matrix and B ∈ R^{m×n} be a matrix of full column rank, where m ≥ n. Denote by B^T the transpose of the matrix B. Then the saddle point problem is of the form

  𝒜z ≡ ( A  B ; −B^T  0 ) ( x ; y ) = ( b ; −q ) ≡ f,  (1)

where b ∈ R^m and q ∈ R^n are two given vectors. Such systems of linear equations (1) arise in many areas of scientific computing and engineering applications, such as the mixed finite-element approximation of partial differential equations in elasticity and fluid dynamics, interior point and sequential quadratic programming algorithms for optimization, the solution of weighted least-squares problems, and the modeling of statistical processes; see [5,20,21] and the references therein.

It is widely recognized that effective Krylov iterations for saddle point problems depend crucially on good preconditioners (see [23,31]), such as incomplete factorization preconditioners [1,4,6] and matrix splitting preconditioners (see [1,32]). The matrix splitting preconditioners are possibly obtained through the simple iterative methods (e.g., the Jacobi, symmetric Gauss–Seidel (SGS), successive overrelaxation (SOR) and symmetric successive overrelaxation (SSOR) preconditioners [ ]) or the alternating direction iteration methods (e.g., the Hermitian and skew-Hermitian splitting (HSS) preconditioners [ ]), and so on. In this paper we present a new type of preconditioner, which results from the parameterized Uzawa (PU) method studied in [7], as follows:

Method 1.1 ([7], The PU Method for the Saddle Point Problem). Let Q ∈ R^{n×n} be a symmetric positive definite matrix. Given initial vectors x^(0) ∈ R^m and y^(0) ∈ R^n, and two relaxation factors ω and τ with ω, τ ≠ 0. For k = 0, 1, 2, ..., until the iteration sequence {(x^(k)T, y^(k)T)^T} converges to the exact solution of the saddle point problem (1), compute

  x^(k+1) = (1 − ω)x^(k) + ωA^{−1}(b − By^(k)),
  y^(k+1) = y^(k) + τQ^{−1}(B^T x^(k+1) − q).

Here Q is assumed to be an approximate (or preconditioning) matrix of the Schur complement matrix B^T A^{−1} B.

Corresponding address: Department of Mathematics, Shanghai Jiaotong University, Shanghai 200240, P.R. China. E-mail address: wangzengqi@sjtu.edu.cn.
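Method 1.1 translates directly into code. Below is a minimal sketch in Python/NumPy (not from the paper): dense solves are used for clarity, and the stopping test is an assumption modeled on the relative residual norm used later in Section 4.

```python
import numpy as np
from numpy.linalg import solve, norm

def pu_method(A, B, Q, b, q, omega, tau, tol=1e-7, max_it=500):
    """Sketch of Method 1.1 (the PU method):
         x^{k+1} = (1 - omega) x^k + omega A^{-1} (b - B y^k)
         y^{k+1} = y^k + tau Q^{-1} (B^T x^{k+1} - q)
    with A (m x m) SPD, B (m x n) of full column rank, Q (n x n) SPD."""
    m, n = B.shape
    x, y = np.zeros(m), np.zeros(n)
    rhs_norm = np.sqrt(norm(b) ** 2 + norm(q) ** 2)
    for k in range(max_it):
        x = (1.0 - omega) * x + omega * solve(A, b - B @ y)
        y = y + tau * solve(Q, B.T @ x - q)
        # residual of the saddle point system (1): A x + B y = b and B^T x = q
        res = np.sqrt(norm(b - A @ x - B @ y) ** 2 + norm(B.T @ x - q) ** 2)
        if res <= tol * rhs_norm:
            break
    return x, y
```

In practice one would factor A and Q once and reuse the factorizations across iterations.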

The PU method is a stationary iterative method based on the matrix splitting 𝒜 = M(ω,τ) − N(ω,τ), where

  M(ω,τ) = ( (1/ω)A  0 ; −B^T  (1/τ)Q ).  (2)

The corresponding iteration matrix is given by

  H(ω,τ) = I − M(ω,τ)^{−1}𝒜,  (3)

where I is the identity matrix of suitable size. When the relaxation factors ω and τ satisfy

  0 < ω < 2,  0 < τ < 2(2 − ω)/(ω μ_max),  (4)

the spectral radius of H(ω,τ) is less than 1, i.e., the PU method is convergent. Here μ_max is the maximum eigenvalue of the matrix Q^{−1}B^T A^{−1}B; see [7].

In this paper we use the matrix M(ω,τ) in (2) as a preconditioner for the system of linear equations (1), and call it a parameterized Uzawa preconditioner, or PU preconditioner in short. Theoretical analyses show that the spectral distribution of the coefficient matrix in (1) is improved well by the PU preconditioner. All the eigenvalues of the preconditioned matrix are located in the interval (0, 2) when the parameters ω and τ satisfy (4). Moreover, there are quite a number of eigenvalues clustered around a point. To further improve the conditioning of the coefficient matrix 𝒜 in (1), we give two strategies for optimizing the preconditioner. On the premise of confining the smallest eigenvalue away from the origin, the optimal parameters are chosen to minimize the measurement of the objective intervals of the spectrum. Although the convergence of nonsymmetric problems has no clear relationship with the eigenvalues when Krylov subspace methods such as GMRES are performed, intuitively a tight distribution of the eigenvalues (away from the origin) often results in rapid convergence [19,23]. We use numerical results to show the effectiveness of the PU preconditioners and the corresponding preconditioned GMRES iteration methods.

The paper is organized as follows. After introducing the PU preconditioner M(ω,τ), we analyze the spectral distribution of the preconditioned matrix M(ω,τ)^{−1}𝒜 in Section 2. Strategies and corresponding parameters for optimizing the preconditioning matrix are studied in Section 3, and numerical results are shown in Section 4. Finally, we end the paper with a brief conclusion.

2. The PU preconditioner

When the matrix M(ω,τ) in (2) is used as a preconditioner for the saddle point problem (1), the spectral distribution of the preconditioned matrix M(ω,τ)^{−1}𝒜 can be analyzed easily by (3) and the following lemma.

Lemma 2.1 ([7]). Let A ∈ R^{m×m} be symmetric positive definite, B ∈ R^{m×n} be of full column rank, and Q ∈ R^{n×n} be nonsingular and symmetric. Denote by μ an eigenvalue of the matrix J = Q^{−1}B^T A^{−1}B. Then the nonzero eigenvalues of the matrix H(ω,τ) are given by λ = 1 − ω and

  λ = (1/2)( 2 − ω − τωμ ± √((2 − ω − τωμ)² − 4(1 − ω)) ).

Furthermore, it can be proved that λ = 1 − ω is an eigenvalue of multiplicity at least m − n, and zero is not an eigenvalue of H(ω,τ) if ω ≠ 1.

Consequently, we get the following theorem.

Theorem 2.1. Let A ∈ R^{m×m} be symmetric positive definite, B ∈ R^{m×n} be of full column rank, and Q ∈ R^{n×n} be nonsingular and symmetric. Denote by μ an eigenvalue of the matrix J = Q^{−1}B^T A^{−1}B. Then the eigenvalues of M(ω,τ)^{−1}𝒜, denoted by λ, are given by

  λ = ω  or  λ = (1/2)( (ω + τωμ) ± √(ω²(1 + τμ)² − 4ωτμ) ).

Moreover, there are at least m − n eigenvalues which are equal to ω.

Proof. From (3), the eigenvalues λ of M(ω,τ)^{−1}𝒜 and the eigenvalues λ̃ of the iteration matrix H(ω,τ) have the relationship

  λ = 1 − λ̃.  (5)

The results of this theorem can be straightforwardly deduced from (5) and Lemma 2.1. □
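The splitting matrix (2) and the eigenvalue formula of Theorem 2.1 are easy to check numerically on a small random instance. A sketch (dense matrices, illustrative names; the complex square root covers both the real and the complex cases):

```python
import numpy as np

def pu_preconditioner(A, B, Q, omega, tau):
    """Assemble M(omega, tau) = [[A/omega, 0], [-B^T, Q/tau]] as in (2)."""
    m, n = B.shape
    M = np.zeros((m + n, m + n))
    M[:m, :m] = A / omega
    M[m:, :m] = -B.T
    M[m:, m:] = Q / tau
    return M

def theorem21_eigenvalues(A, B, Q, omega, tau):
    """Eigenvalues of M(omega,tau)^{-1} [[A, B], [-B^T, 0]] predicted by
    Theorem 2.1 from the eigenvalues mu of J = Q^{-1} B^T A^{-1} B."""
    J = np.linalg.solve(Q, B.T @ np.linalg.solve(A, B))
    mus = np.linalg.eigvals(J)
    lams = [omega] * (B.shape[0] - B.shape[1])  # lambda = omega, multiplicity >= m - n
    for mu in mus:
        root = np.sqrt((omega * (1 + tau * mu)) ** 2 - 4 * omega * tau * mu + 0j)
        lams += [0.5 * ((omega + tau * omega * mu) + root),
                 0.5 * ((omega + tau * omega * mu) - root)]
    return np.array(lams)
```

Comparing the list above (after sorting) with np.linalg.eigvals applied to M(ω,τ)^{−1}𝒜 on random SPD test data reproduces Theorem 2.1.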

Moreover, all eigenvalues of M(ω,τ)^{−1}𝒜 are located in the disk {z ∈ C : |z − 1| < 1} when the parameters ω and τ satisfy (4). The eigenvalues of the preconditioned matrix fall into two categories: one is ω; the other is conditionally real or complex, which depends on the discriminant

  Δ := Δ(μ) = ω²(1 + τμ)² − 4ωτμ,

where μ ∈ [μ_min, μ_max], and μ_min and μ_max are the minimum and the maximum eigenvalues of the matrix Q^{−1}B^T A^{−1}B, respectively. After straightforward derivation we have the following results:

(F_a) When ω ≥ 1, for any τ and μ, Δ ≥ 0, i.e., λ = (1/2)( (ω + τωμ) ± √(ω²(1 + τμ)² − 4ωτμ) ) is real, and all the eigenvalues of M(ω,τ)^{−1}𝒜 are real;

(F_b) When ω < 1 and

  τ ≥ (2 − ω + 2√(1 − ω))/(ω μ_min)  or  τ ≤ (2 − ω − 2√(1 − ω))/(ω μ_max),

for any μ ∈ [μ_min, μ_max] it holds that Δ ≥ 0, so that λ = (1/2)( (ω + τωμ) ± √(ω²(1 + τμ)² − 4ωτμ) ) is real. Hence all the eigenvalues of M(ω,τ)^{−1}𝒜 are real;

(F_c) When ω < 1, for

  (2 − ω − 2√(1 − ω))/(ω μ) < τ < (2 − ω + 2√(1 − ω))/(ω μ),

we have Δ < 0, and λ = (1/2)( (ω + τωμ) ± √(ω²(1 + τμ)² − 4ωτμ) ) is complex. Hence there are complex eigenvalues of M(ω,τ)^{−1}𝒜.

Define the functions:

(a) f1(μ, ω, τ) = (1/2)( (ω + τωμ) + √(ω²(1 + τμ)² − 4ωτμ) );
(b) f2(μ, ω, τ) = (1/2)( (ω + τωμ) − √(ω²(1 + τμ)² − 4ωτμ) );
(c) f3(μ, ω, τ) = τωμ.

We first analyze the monotonicity of these functions. According to the monotonicity, we then define two intervals I1(ω,τ) and I2(ω,τ). The real spectrum of the preconditioned matrix lies in I1(ω,τ) ∪ I2(ω,τ), except for λ = ω.

Theorem 2.2. Consider the preconditioned matrix M(ω,τ)^{−1}𝒜, in which the parameters ω and τ satisfy (4). Then:

(i) The real eigenvalues of the preconditioned matrix satisfy 0 < λ < 2, and all the eigenvalues are located in the unit disk {λ ∈ C : |λ − 1| < 1};

(ii) When ω ≥ 1, all the eigenvalues of the preconditioned matrix are real. Moreover, these eigenvalues are located in the union of the intervals I1(ω,τ) ∪ I2(ω,τ) ∪ {ω}, where

  I1(ω,τ) = [f1(μ_min, ω, τ), f1(μ_max, ω, τ)],  I2(ω,τ) = [f2(μ_min, ω, τ), f2(μ_max, ω, τ)];

(iii) When ω < 1 and τ ≥ (2 − ω + 2√(1 − ω))/(ω μ_min), the eigenvalues of the preconditioned matrix are all real. Moreover, they are located in the union of the intervals I1(ω,τ) ∪ I2(ω,τ) ∪ {ω}, where

  I1(ω,τ) = [f1(μ_min, ω, τ), f1(μ_max, ω, τ)],  I2(ω,τ) = [f2(μ_max, ω, τ), f2(μ_min, ω, τ)];

(iv) When ω < 1 and τ ≤ (2 − ω − 2√(1 − ω))/(ω μ_max), the eigenvalues of the preconditioned matrix are all real. Moreover, these eigenvalues are located in I1(ω,τ) ∪ I2(ω,τ) ∪ {ω}, where

  I1(ω,τ) = [f1(μ_max, ω, τ), f1(μ_min, ω, τ)],  I2(ω,τ) = [f2(μ_min, ω, τ), f2(μ_max, ω, τ)];

(v) When ω < 1 and

  (2 − ω − 2√(1 − ω))/(ω μ_max) < τ < (2 − ω + 2√(1 − ω))/(ω μ_min),

conjugate complex eigenvalues of the preconditioned matrix exist. These complex eigenvalues satisfy

  ω(1 + τμ_min)/2 ≤ R(λ) ≤ ω(1 + τμ_max)/2  and  √(ωτμ_min) ≤ |λ| ≤ √(ωτμ_max),

where R(·) denotes the real part of the corresponding complex number.

Proof. From Theorem 2.1 we know that the spectral set of the preconditioned matrix M(ω,τ)^{−1}𝒜 consists of the following two types of eigenvalues:

  λ = ω  and  λ = (1/2)( (ω + τωμ) ± √(ω²(1 + τμ)² − 4ωτμ) ).

We can obtain the results in (i) straightforwardly, since ρ(H(ω,τ)) < 1 when ω and τ satisfy (4).

According to (F_a), all the eigenvalues λ are real when ω ≥ 1. In this case, f1(μ,ω,τ) and f2(μ,ω,τ) are both monotonically increasing functions with respect to the variable μ; hence (ii) holds true.

When ω < 1 and τ ≥ (2 − ω + 2√(1 − ω))/(ω μ_min), f1(μ,ω,τ) is an increasing function while f2(μ,ω,τ) is a decreasing function with respect to μ, so (iii) holds true. When ω < 1 and τ ≤ (2 − ω − 2√(1 − ω))/(ω μ_max), f1(μ,ω,τ) is a decreasing function while f2(μ,ω,τ) is an increasing function with respect to μ, so (iv) holds true.

When ω < 1 and (2 − ω − 2√(1 − ω))/(ω μ_max) < τ < (2 − ω + 2√(1 − ω))/(ω μ_min), according to (F_c), the complex eigenvalues

  λ = (1/2)( (ω + τωμ) ± √(ω²(1 + τμ)² − 4ωτμ) ) = (1/2)( (ω + τωμ) ± i √(4ωτμ − ω²(1 + τμ)²) )

exist. It is easy to see that the real part of λ is monotonically increasing with respect to μ and is bounded as

  ω(1 + τμ_min)/2 ≤ R(λ) ≤ ω(1 + τμ_max)/2,

and |λ| is given by

  |λ|² = (1/4)( (ω + τωμ)² + 4ωτμ − ω²(1 + τμ)² ) = ωτμ.

It is bounded as

  √(τωμ_min) ≤ |λ| ≤ √(τωμ_max).

Now the theorem is proved. □

When the spectrum is real, some Krylov subspace methods become more attractive because of the short recurrence; see [1,8,31]. Hence, in the following we only consider the cases in which all the eigenvalues of M(ω,τ)^{−1}𝒜 are real. In those cases, the eigenvalues of the preconditioned matrix are located in (0, 2) with the corresponding parameters ω and τ. In the next section we want to improve the conditioning of the preconditioned matrix by further selecting the parameters.

3. Strategies for optimizing the preconditioner

In this section we present two strategies to optimize the preconditioning matrix and compute the corresponding optimal parameters for the PU preconditioners under these strategies. To avoid confusion, we emphasize that the optimal parameters for the PU preconditioning matrix may be different from the optimal parameters for the PU iteration method; see [7].

We denote the measurements (lengths) of the intervals I1(ω,τ) and I2(ω,τ) by |I1(ω,τ)| and |I2(ω,τ)|, respectively. In the following two strategies, we improve the conditioning of the coefficient matrix in two aspects: (i) compress the distribution of the eigenvalues; (ii) ensure that the eigenvalues are away from the origin.

Strategy A. Compress the eigenvalue distribution by reducing I(ω,τ) = max{ |I1(ω,τ)|, |I2(ω,τ)| }. The parameter pair {ω_opt, τ_opt} is the solution of the minimization problem

  min_{ω,τ} I(ω,τ)  s.t.  min_μ f2(μ, ω, τ) ≥ ε,  (6)

where min_μ f2(μ, ω, τ) is the minimum eigenvalue of M(ω,τ)^{−1}𝒜.

Strategy B. Compress the eigenvalue distribution by reducing the measurement of the interval Ĩ(ω,τ) = [min_μ f2(μ, ω, τ), max_μ f1(μ, ω, τ)]. The parameter pair {ω^(opt), τ^(opt)} is the solution of the minimization problem

  min_{ω,τ} |Ĩ(ω,τ)|  s.t.  min_μ f2(μ, ω, τ) ≥ ε,  (7)

where min_μ f2(μ, ω, τ) is the minimum eigenvalue of M(ω,τ)^{−1}𝒜.

The constraints in (6) and (7) are used to guarantee that the eigenvalues of the preconditioned matrix are away from zero. For a given saddle point problem, ε is in general a constant less than 1.
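Before deriving the closed-form optima (Theorems 3.1–3.3 below), the two constrained minimizations (6) and (7) can be sanity-checked by brute force. The following grid-search sketch is purely illustrative and not part of the paper; it restricts itself to parameter pairs for which the whole spectrum is real, using (F_a) and (F_b), and exploits the endpoint monotonicity of f1 and f2 from Theorem 2.2.

```python
import math

def f1(mu, omega, tau):
    d = math.sqrt(max(0.0, (omega * (1 + tau * mu)) ** 2 - 4 * omega * tau * mu))
    return 0.5 * ((omega + tau * omega * mu) + d)

def f2(mu, omega, tau):
    d = math.sqrt(max(0.0, (omega * (1 + tau * mu)) ** 2 - 4 * omega * tau * mu))
    return 0.5 * ((omega + tau * omega * mu) - d)

def spectrum_is_real(mu_min, mu_max, omega, tau):
    # (F_a) and (F_b): no complex eigenvalues for these (omega, tau)
    if omega >= 1.0:
        return True
    s = math.sqrt(1.0 - omega)
    return (tau <= (2 - omega - 2 * s) / (omega * mu_max) or
            tau >= (2 - omega + 2 * s) / (omega * mu_min))

def grid_search(mu_min, mu_max, eps, strategy="A", n=300):
    """Brute-force minimization of (6) (strategy 'A') or (7) (strategy 'B')."""
    best = (float("inf"), None, None)
    for omega in [2.0 * (i + 1) / (n + 1) for i in range(n)]:
        tau_max = 2 * (2 - omega) / (omega * mu_max)        # condition (4)
        for tau in [tau_max * (j + 1) / (n + 1) for j in range(n)]:
            if not spectrum_is_real(mu_min, mu_max, omega, tau):
                continue
            ends = [(f1(mu, omega, tau), f2(mu, omega, tau))
                    for mu in (mu_min, mu_max)]
            if min(e[1] for e in ends) < eps:               # min_mu f2 >= eps
                continue
            if strategy == "A":
                obj = max(abs(ends[1][0] - ends[0][0]),     # |I1|
                          abs(ends[1][1] - ends[0][1]))     # |I2|
            else:                                           # |I~| = max f1 - min f2
                obj = max(e[0] for e in ends) - min(e[1] for e in ends)
            if obj < best[0]:
                best = (obj, omega, tau)
    return best
```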

Theorem 3.1. In the different cases of ω and τ, the function I(ω,τ) is expressed as follows:

(i) When ω ≥ 1:
(i1) For τ ≥ 2(2 − ω)/((μ_min + μ_max)ω),

  I(ω,τ) = |I1(ω,τ)| = (1/2)( ωτ(μ_max − μ_min) + √(ω²(1 + τμ_max)² − 4τωμ_max) − √(ω²(1 + τμ_min)² − 4τωμ_min) );  (8)

(i2) For τ < 2(2 − ω)/((μ_min + μ_max)ω),

  I(ω,τ) = |I2(ω,τ)| = (1/2)( ωτ(μ_max − μ_min) + √(ω²(1 + τμ_min)² − 4τωμ_min) − √(ω²(1 + τμ_max)² − 4τωμ_max) ).  (9)

(ii) When ω < 1:
(ii1) For τ ≤ (2 − ω − 2√(1 − ω))/(ω μ_max),

  I(ω,τ) = |I2(ω,τ)| = (1/2)( ωτ(μ_max − μ_min) + √(ω²(1 + τμ_min)² − 4τωμ_min) − √(ω²(1 + τμ_max)² − 4τωμ_max) );  (10)

(ii2) For τ ≥ (2 − ω + 2√(1 − ω))/(ω μ_min),

  I(ω,τ) = |I1(ω,τ)| = (1/2)( ωτ(μ_max − μ_min) + √(ω²(1 + τμ_max)² − 4τωμ_max) − √(ω²(1 + τμ_min)² − 4τωμ_min) ).  (11)

Proof. When ω ≥ 1, it holds that

  |I1(ω,τ)| = (1/2)( ωτ(μ_max − μ_min) + √Δ(μ_max) − √Δ(μ_min) ),
  |I2(ω,τ)| = (1/2)( ωτ(μ_max − μ_min) − √Δ(μ_max) + √Δ(μ_min) ).

By straightforward calculations, for τ ≥ 2(2 − ω)/((μ_min + μ_max)ω) we have √Δ(μ_max) ≥ √Δ(μ_min), and for τ < 2(2 − ω)/((μ_min + μ_max)ω) we have √Δ(μ_max) < √Δ(μ_min). The result then follows directly from the above equations.

When ω < 1, all the eigenvalues are real if and only if τ ≤ (2 − ω − 2√(1 − ω))/(ω μ_max) or τ ≥ (2 − ω + 2√(1 − ω))/(ω μ_min). We first discuss the case τ ≤ (2 − ω − 2√(1 − ω))/(ω μ_max). It holds that

  |I1(ω,τ)| = (1/2)( −ωτ(μ_max − μ_min) + √Δ(μ_min) − √Δ(μ_max) ),
  |I2(ω,τ)| = (1/2)( ωτ(μ_max − μ_min) + √Δ(μ_min) − √Δ(μ_max) ).

It is clear that |I2(ω,τ)| ≥ |I1(ω,τ)|. For the case τ ≥ (2 − ω + 2√(1 − ω))/(ω μ_min), it holds that

  |I1(ω,τ)| = (1/2)( ωτ(μ_max − μ_min) + √Δ(μ_max) − √Δ(μ_min) ),
  |I2(ω,τ)| = (1/2)( −ωτ(μ_max − μ_min) + √Δ(μ_max) − √Δ(μ_min) ).

It is clear that |I1(ω,τ)| ≥ |I2(ω,τ)|. □
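Note that all four cases of Theorem 3.1 collapse into one expression, I(ω,τ) = (1/2)( ωτ(μ_max − μ_min) + |√Δ(μ_max) − √Δ(μ_min)| ), which gives a compact sketch (an observation of ours, not stated in the paper; it assumes (ω,τ) lies in one of the all-real-spectrum regions):

```python
import math

def interval_measure(mu_min, mu_max, omega, tau):
    """I(omega, tau) = max(|I1|, |I2|) via the case analysis of Theorem 3.1."""
    def delta(mu):
        return max(0.0, (omega * (1 + tau * mu)) ** 2 - 4 * omega * tau * mu)
    spread = omega * tau * (mu_max - mu_min)
    gap = math.sqrt(delta(mu_max)) - math.sqrt(delta(mu_min))
    # (8)/(11) apply when gap >= 0, (9)/(10) when gap < 0
    return 0.5 * (spread + abs(gap))
```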

Theorem 3.2. Consider the PU preconditioning optimization with Strategy A. Let A ∈ R^{m×m} be symmetric positive definite, B ∈ R^{m×n} be of full column rank, and Q ∈ R^{n×n} be symmetric positive definite. Denote the smallest and the largest eigenvalues of the matrix J = Q^{−1}B^T A^{−1}B by μ_min and μ_max, and the condition number by κ = μ_max/μ_min. Let the constant ε be less than 2/κ. Then, when 2/(κ + 1) < ε < 2/κ, the optimal parameters are

  ω_opt = 1,  τ_opt = ε/μ_min,

and the corresponding minimum measurement of the interval is I(ω_opt, τ_opt) = κε − 1. When ε ≤ 2/(κ + 1), the optimal parameters are

  ω_opt = (4(1 − ε) + ε²(1 + κ))/(2 − ε(1 − κ)),  τ_opt = 2ε(2 − ε)/((ε²(κ + 1) + 4(1 − ε))μ_min),

and the corresponding minimum measurement of the interval is

  I(ω_opt, τ_opt) = ε(2 − ε)(κ − 1)/(2 − ε + εκ).

Proof. In order to demonstrate the results conveniently, we define the following variables:

  τ^(ε) = ε(ω − ε)/((1 − ε)ω μ_min),  ω^(0) = (κε² − 4ε + 4)/(κε − 2ε + 2).

We declare that f2(μ_min, ω, τ) ≥ ε if and only if τ ≥ τ^(ε), and τ^(ε) ≤ 2(2 − ω)/(ω μ_max) if and only if ω ≤ ω^(0). So it is reasonable to restrict our discussion to the scope ω ≤ ω^(0). We are going to fulfill the proof according to the following three cases with respect to ω and τ.

Case (a): ω ≥ 1. For this case, according to (F_a), all the eigenvalues of the preconditioned matrix are real for any τ. It is clear from Theorem 2.2 that the lower bound of the spectrum is f2(μ_min, ω, τ). In order to satisfy the constraint in (6), we request f2(μ_min, ω, τ) ≥ ε; hence τ must satisfy τ ≥ τ^(ε). Furthermore, the condition κε ≤ 2 is necessary: when κε > 2, it holds that ω^(0) < 1, which contradicts ω ≥ 1 and ω ≤ ω^(0).

According to Strategy A, we want to minimize the function I(ω,τ). From Theorem 3.1,

  I(ω,τ) = |I1(ω,τ)| for τ ≥ τ̄,  I(ω,τ) = |I2(ω,τ)| for τ < τ̄,  where τ̄ = 2(2 − ω)/((μ_min + μ_max)ω).

Denote

  ω̂ = (4(1 − ε) + ε²(1 + κ))/(2 − ε(1 − κ)).

Then it holds that

  ω^(0) − ω̂ = 2ε(1 − ε)(2 − ε)/((κε − 2ε + 2)(κε − ε + 2)) > 0,

and

  τ^(ε) ≤ τ̄ for ε ≤ 2/(κ + 1) and 1 ≤ ω ≤ ω̂;
  τ^(ε) > τ̄ for ε ≤ 2/(κ + 1) and ω̂ < ω < ω^(0), or 2/(κ + 1) < ε < 2/κ and 1 ≤ ω < ω^(0).

We now prove Case (a) in the following two cases:

(a1) 2/(κ + 1) < ε < 2/κ and 1 ≤ ω < ω^(0). Since τ^(ε) > τ̄, we consider the case τ^(ε) ≤ τ < 2(2 − ω)/(ω μ_max) only. It holds that |I1(ω,τ)| > |I2(ω,τ)|, and we want to minimize the function I(ω,τ) = |I1(ω,τ)|; see (8). Since √Δ(μ_max) − √Δ(μ_min) is a monotonically increasing function with respect to τ, we declare that I(ω,τ) is a monotonically increasing function with respect to τ too. Hence I(ω,τ) attains its minimum at

  τ^(1) := τ^(ε) = ε(ω − ε)/((1 − ε)ω μ_min).

Substituting τ by τ^(1) in (8), we know that I(ω, τ^(1)) is an increasing function with respect to ω, and it achieves its minimum at ω^(1) = 1. Correspondingly, we have

  τ^(1) = ε/μ_min,  I^(1) := I(ω^(1), τ^(1)) = κε − 1.

(a2) ε ≤ 2/(κ + 1) and 1 ≤ ω < ω^(0).

(i) When ω̂ < ω < ω^(0), we have τ^(ε) > τ̄. We consider the case τ^(ε) ≤ τ < 2(2 − ω)/(ω μ_max). The analysis is similar to (a1). Since |I1(ω,τ)| > |I2(ω,τ)|, we want to minimize the function I(ω,τ) = |I1(ω,τ)| in (8). Since √Δ(μ_max) − √Δ(μ_min) is a monotonically increasing function with respect to τ, I(ω,τ) is a monotonically increasing function with respect to τ too. Hence I(ω,τ) attains its minimum at

  τ^(2) := arg min_τ I(ω,τ) = τ^(ε) = ε(ω − ε)/((1 − ε)ω μ_min).

Substituting τ by τ^(2) in (8), we see that I(ω, τ^(2)) is a monotonically increasing function with respect to ω, and it attains the minimum at ω^(2) = ω̂. Correspondingly, we obtain

  τ^(2) = τ^(ε)(ω^(2)) = 2ε(2 − ε)/((ε²(κ + 1) + 4(1 − ε))μ_min),
  I^(2) := I(ω^(2), τ^(2)) = ε(2 − ε)(κ − 1)/(2 − ε + εκ).

(ii) When 1 ≤ ω ≤ ω̂, we have τ^(ε) ≤ τ̄. Consider the domain τ^(ε) ≤ τ < 2(2 − ω)/(ω μ_max).

(1) In the case τ̄ ≤ τ < 2(2 − ω)/(ω μ_max), we have |I1(ω,τ)| ≥ |I2(ω,τ)|. Now the analysis is similar to (a1). We want to minimize the function I(ω,τ) = |I1(ω,τ)|. Since √Δ(μ_max) − √Δ(μ_min) is a monotonically increasing function with respect to τ, I(ω,τ) is a monotonically increasing function with respect to τ too. Hence I(ω,τ) attains its minimum at τ^(3) := τ̄. Therefore

  I(ω, τ^(3)) = I(ω, τ̄) = (2 − ω)(κ − 1)/(κ + 1).

As I(ω, τ̄) is a decreasing function with respect to ω, its minimum is attained at ω^(3) = ω̂. Correspondingly, we have

  τ^(3) = 2ε(2 − ε)(1 + κ)/((μ_min + μ_max)(4(1 − ε) + ε²(1 + κ))),
  I^(3) = ε(2 − ε)(κ − 1)/(2 − ε + εκ).

(2) In the case τ^(ε) ≤ τ < τ̄, we have |I2(ω,τ)| > |I1(ω,τ)|. We are going to minimize I(ω,τ) = |I2(ω,τ)| in (9). We introduce the auxiliary variable τ̂ = ωτ. Then

  I(ω,τ) = (1/2)( τ̂(μ_max − μ_min) + √((ω + τ̂μ_min)² − 4τ̂μ_min) − √((ω + τ̂μ_max)² − 4τ̂μ_max) ).

It is easy to verify that I(ω,τ) is a decreasing function with respect to the variable ω. Therefore it achieves the minimum at ω^(4) = ω̂. When ω = ω̂, it holds that τ^(ε) = τ̄ and ω̂τ^(ε) = ω̂τ̄. Hence I(ω^(4), τ) = I(ω^(4), τ̂) achieves the minimum

  I^(4) = ε(2 − ε)(κ − 1)/(2 − ε + εκ)

at τ̂ = ω̂τ^(ε) = ω̂τ̄, i.e.,

  τ^(4) = τ^(ε) = τ̄ = 2ε(2 − ε)(1 + κ)/((μ_min + μ_max)(4(1 − ε) + ε²(1 + κ))).

Case (b): ω < 1 and τ ≥ (2 − ω + 2√(1 − ω))/(ω μ_min). We declare that Case (b) is meaningful only when κ < 2. If κ ≥ 2, then

  (2 − ω + 2√(1 − ω))/(ω μ_min) > 2(2 − ω)/(ω μ_max)

holds true, and there is no τ satisfying τ ∈ (0, 2(2 − ω)/(ω μ_max)). When κ < 2, f2(μ, ω, τ) > ε holds for any eigenvalue μ, since

  τ ≥ (2 − ω + 2√(1 − ω))/(ω μ_min) > ε(ω − ε)/((1 − ε)ω μ_min).

In this case, |I1(ω,τ)| ≥ |I2(ω,τ)|, and we are going to minimize I(ω,τ) = |I1(ω,τ)| in (11). Since

  ∂( f1(μ_max, ω, τ) − f1(μ_min, ω, τ) )/∂τ ≥ 0,

I(ω,τ) attains its minimum at τ^(5) = (2 − ω + 2√(1 − ω))/(ω μ_min). Moreover, since f1(μ_max, ω, τ^(5)) − f1(μ_min, ω, τ^(5)) is a monotonically decreasing function with respect to ω, we get the optimal parameter ω^(5) = 1. Correspondingly, we have

  τ^(5) = 1/μ_min,  I^(5) := I(ω^(5), τ^(5)) = κ − 1.

Case (c): ω < 1 and τ ≤ (2 − ω − 2√(1 − ω))/(ω μ_max).

For this case, |I1(ω,τ)| ≤ |I2(ω,τ)|. We minimize the function I(ω,τ) = |I2(ω,τ)|; see (10). We introduce the auxiliary parameter τ̂ = ωτ. Then τ̂ ≤ (2 − ω − 2√(1 − ω))/μ_max and

  f2(ω, τ̂, μ) := f2(μ, ω, τ) = (1/2)( ω + τ̂μ − √((ω + τ̂μ)² − 4τ̂μ) ).

By straightforward calculation we have

  ∂²f2(ω, τ̂, μ)/∂μ∂ω < 0.

Hence

  ∂( f2(ω, τ̂, μ_max) − f2(ω, τ̂, μ_min) )/∂ω < 0,

or equivalently, ∂|I2(ω,τ)|/∂ω < 0. We get the optimal parameter of ω in this case as ω^(6) = 1. Substituting ω^(6) into (10), we get

  I(ω,τ) = τ̂(μ_max − μ_min) = τ(μ_max − μ_min).

Clearly, I(1,τ) is a monotonically increasing function with respect to τ. We now discuss the cases κε ≤ 1 and κε > 1. For the former, the feasible domain ε/μ_min ≤ τ ≤ 1/μ_max is nonempty; the corresponding optimal parameter is τ^(6) = ε/μ_min, and the corresponding minimum of the function is I^(6) = ε(κ − 1). When κε > 1, τ^(ε) > (2 − ω − 2√(1 − ω))/(ω μ_max), so there is no suitable τ satisfying (4).

Now we summarize Cases (a)–(c). The optimal parameters ω and τ depend strongly on κ and ε. When 2/(κ + 1) < ε < 2/κ, we choose the optimal pair of parameters from (ω^(1), τ^(1)) and (ω^(5), τ^(5)). From a straightforward calculation we get I^(5) > I^(1). Therefore, for this case,

  ω_opt = ω^(1) = 1,  τ_opt = τ^(1) = ε/μ_min,

and the minimum of I(ω,τ) is I_opt = I^(1) = κε − 1. When ε ≤ 2/(κ + 1), we choose the optimal pair of parameters from (ω^(2), τ^(2)), (ω^(3), τ^(3)), (ω^(4), τ^(4)), (ω^(5), τ^(5)) and (ω^(6), τ^(6)). The corresponding values of the function I(ω,τ) are I^(2), I^(3), I^(4), I^(5) and I^(6). Obviously, I^(2) is the minimum. Therefore, for this case,

  ω_opt = ω^(2) = (4(1 − ε) + ε²(1 + κ))/(2 − ε(1 − κ)),  τ_opt = τ^(2) = 2ε(2 − ε)/((ε²(κ + 1) + 4(1 − ε))μ_min),

and the minimum of I(ω,τ) is

  I_opt = I^(2) = ε(2 − ε)(κ − 1)/(2 − ε + εκ). □
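In code, the optimal Strategy A parameters of Theorem 3.2 (as reconstructed above) read as follows; a minimal sketch, assuming 0 < ε < 2/κ, with illustrative names:

```python
def strategy_a_parameters(mu_min, mu_max, eps):
    """Optimal (omega, tau) of Theorem 3.2 and the minimal measure I_opt."""
    kappa = mu_max / mu_min
    if not 0.0 < eps < 2.0 / kappa:
        raise ValueError("Theorem 3.2 assumes 0 < eps < 2/kappa")
    if eps > 2.0 / (kappa + 1.0):
        omega, tau = 1.0, eps / mu_min
        i_opt = kappa * eps - 1.0
    else:
        omega = (4.0 * (1 - eps) + eps**2 * (1 + kappa)) / (2.0 - eps * (1 - kappa))
        tau = 2.0 * eps * (2 - eps) / ((eps**2 * (kappa + 1) + 4.0 * (1 - eps)) * mu_min)
        i_opt = eps * (2 - eps) * (kappa - 1) / (2.0 - eps + eps * kappa)
    return omega, tau, i_opt
```

At ε = 2/(κ + 1) the two branches agree (both give ω = 1 and I_opt = (κ − 1)/(κ + 1)), which is a useful consistency check against the grid search sketched in Section 3.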

Theorem 3.3. Consider the PU preconditioning optimization with Strategy B. Let A ∈ R^{m×m} be symmetric positive definite, B ∈ R^{m×n} be of full column rank, and Q ∈ R^{n×n} be symmetric positive definite. Denote the smallest and the largest eigenvalues and the condition number of the matrix J = Q^{−1}B^T A^{−1}B by μ_min, μ_max and κ = μ_max/μ_min. Let the constant ε in (7) be less than 2/κ. Then, when κε ≤ 1, the optimal parameters are

  ω^(opt) = 1,  τ^(opt) = 1/μ_max,  min |Ĩ(ω,τ)| = 1 − 1/κ.

When 1 < κε < 2, the optimal parameters are

  ω^(opt) = 1,  τ^(opt) = ε/μ_min,  min |Ĩ(ω,τ)| = ε(κ − 1).

Proof. When κε < 2, it always holds that τ^(ε) < 2(2 − ω)/(ω μ_max). We are going to fulfill the proof according to the following three cases with respect to the parameters ω and τ.

Case (a): 1 ≤ ω < 2 and τ^(ε) ≤ τ < 2(2 − ω)/(ω μ_max). For this case, f1(μ, ω, τ) and f2(μ, ω, τ) are both monotonically increasing functions with respect to the variable μ. According to Strategy B, we are going to minimize the measurement of the interval Ĩ(ω,τ) = [f2(μ_min, ω, τ), f1(μ_max, ω, τ)]. We replace ωτ by the auxiliary variable τ̂. Then the measurement of the interval is a function of the variables ω and τ̂, i.e.,

  |Ĩ(ω,τ)| = (1/2)( τ̂(μ_max − μ_min) + √((ω + τ̂μ_max)² − 4τ̂μ_max) + √((ω + τ̂μ_min)² − 4τ̂μ_min) ).  (12)

It is obvious that |Ĩ(ω,τ)| is a monotonically increasing function with respect to the variable ω. We fix the optimal parameter ω at ω^(a) = 1. Then (12) simplifies to

  |Ĩ(1,τ)| = (1/2)( τ̂(μ_max − μ_min) + |τ̂μ_max − 1| + |τ̂μ_min − 1| ).

It is easy to verify that:

(a1) when τ̂ ≤ 1/μ_max, |Ĩ(1,τ)| = 1 − τ̂μ_min ≥ 1 − 1/κ, and the equality holds when τ = 1/μ_max;
(a2) when 1/μ_max ≤ τ̂ < 1/μ_min, |Ĩ(1,τ)| = τ̂(μ_max − μ_min) ≥ 1 − 1/κ, and the equality holds when τ = 1/μ_max;
(a3) when τ̂ ≥ 1/μ_min, |Ĩ(1,τ)| = τ̂μ_max − 1 ≥ κ − 1, and the equality holds when τ = 1/μ_min.

We carry on our discussion under the constraint (7), i.e., τ ≥ τ^(ε). When κε > 1, it holds that τ̂^(ε) = τ^(ε) ≥ 1/μ_max. Hence we omit case (a1) and only consider Cases (a2) and (a3). It is clear that |Ĩ(1,τ)| achieves its minimum

  min_τ |Ĩ(1,τ)| = ε(κ − 1)  at  τ^(a) = τ^(ε) = ε/μ_min.

When κε ≤ 1, τ̂^(ε) = τ^(ε) ≤ 1/μ_max, all three cases exist, and |Ĩ(1,τ)| achieves its minimum

  min_τ |Ĩ(1,τ)| = 1 − 1/κ  at  τ^(a) = 1/μ_max.

Case (b): ω < 1 and τ ≥ (2 − ω + 2√(1 − ω))/(ω μ_min). We declare that this case exists only when κ < 2. Otherwise, it holds that

  τ ≥ (2 − ω + 2√(1 − ω))/(ω μ_min) > 2(2 − ω)/(ω μ_max),

which is incompatible with 0 < τ < 2(2 − ω)/(ω μ_max). In Case (b), f1(μ, ω, τ) is a monotonically increasing function with respect to the variable μ, while f2(μ, ω, τ) is a monotonically decreasing function with respect to the variable μ. According to Strategy B, we are going to minimize the function

  |Ĩ(ω,τ)| = f1(μ_max, ω, τ) − f2(μ_max, ω, τ).

We replace ωτ by the auxiliary variable τ̂. Then the measurement of the interval is a function of the variables ω and τ̂, and it satisfies

  |Ĩ(ω,τ̂)| = √((ω + τ̂μ_max)² − 4τ̂μ_max).

For this case, |Ĩ(ω,τ̂)| is a monotonically increasing function with respect to the variable τ̂. We fix the parameter τ̂ at τ̂^(b) = (2 − ω + 2√(1 − ω))/μ_min and substitute τ̂^(b) into the expression of |Ĩ(ω,τ̂)|. It is obvious that |Ĩ(ω,τ̂^(b))| is a monotonically decreasing function with respect to the variable ω. So the optimal parameters for this case are ω^(b) = 1 and τ^(b) = 1/μ_min, and the corresponding measurement of the interval is

  min_{ω,τ} |Ĩ(ω,τ)| = κ − 1.

Case (c): ω < 1 and τ ≤ (2 − ω − 2√(1 − ω))/(ω μ_max). For this case, f1(μ, ω, τ) is a monotonically decreasing function while f2(μ, ω, τ) is a monotonically increasing function with respect to the variable μ. According to Strategy B, we are going to minimize the measurement of the interval Ĩ(ω,τ) = [f2(μ_min, ω, τ), f1(μ_min, ω, τ)]. We replace ωτ by the auxiliary variable τ̂. Then the measurement of the interval is a function of the variables ω and τ̂, and it satisfies

  |Ĩ(ω,τ̂)| = √((ω + τ̂μ_min)² − 4τ̂μ_min).

For this case, |Ĩ(ω,τ̂)| is a monotonically decreasing function with respect to the variable τ̂. We fix the parameter τ̂ at

  τ̂^(c) = (2 − ω − 2√(1 − ω))/μ_max

and substitute τ̂^(c) into the expression of |Ĩ(ω,τ̂)|. It can be verified that |Ĩ(ω,τ̂^(c))| is increasing with respect to ω when

  0 < ω ≤ (κ² − 10κ + (κ + 2)√(κ(κ + 8)))/(2(κ − 1)²),

and decreasing with respect to ω when

  (κ² − 10κ + (κ + 2)√(κ(κ + 8)))/(2(κ − 1)²) < ω ≤ 1.

So ω = 1 is a local minimum point. We abandon the other local minimum point, namely zero, since the preconditioned matrix will be nearly singular when ω → 0. When ω = 1, τ^(ε) ≤ 1/μ_max if and only if κε ≤ 1. We consider this case under the assumption κε ≤ 1. The optimal parameters for this case are ω^(c) = 1 and τ^(c) = 1/μ_max, and the corresponding interval measurement is

  min_{ω,τ} |Ĩ(ω,τ)| = 1 − 1/κ.

By summarizing the aforementioned cases, we draw the following conclusion. When κε ≤ 1,

  ω^(opt) = 1,  τ^(opt) = 1/μ_max,  min_{ω,τ} |Ĩ(ω,τ)| = 1 − 1/κ;

when 1 < κε < 2,

  ω^(opt) = 1,  τ^(opt) = ε/μ_min,  min_{ω,τ} |Ĩ(ω,τ)| = ε(κ − 1). □

Remark 1. The efficiency of Strategies A and B strongly depends on the condition number of the preconditioned Schur complement matrix J. In other words, Q should be a good approximation to B^T A^{−1}B. Several approximations were suggested in [ ]; especially for the Stokes problem, the pressure mass matrix will be a reliable candidate, see [22]. As revealed in the last two theorems, the condition number κ of the matrix J is closely related to the spectral distribution of the preconditioned matrix, the constant ε, and the optimal parameters.

Remark 2. The optimal relaxation factors of the PU iteration method in [7] are

  ω* = 4√κ/(√κ + 1)²,  τ* = 1/√(μ_min μ_max).  (13)

They are different from the parameters chosen by either Strategy A or Strategy B. With the parameters ω* and τ*, the eigenvalues of the preconditioned matrix are

  λ = ω* = 4√κ/(√κ + 1)²  and  λ = (1/2)( (ω* + ω*τ*μ) ± i √(4ω*τ*μ − (ω* + ω*τ*μ)²) ).

The real parts R(λ) of these eigenvalues are in the range of

  [2/(1 + √κ), 2√κ/(1 + √κ)],

and the moduli of the eigenvalues |λ| are in the range of

  [2/(1 + √κ), 2√κ/(1 + √κ)].

We refer to [3] for some practical techniques that can be used to iteratively compute the optimal parameters of the relaxed splitting methods such as the SOR.
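Similarly, Theorem 3.3 and the PU iteration optimum (13) give, in a small sketch (illustrative names; the Theorem 3.3 branch assumes 0 < ε < 2/κ as reconstructed above):

```python
import math

def strategy_b_parameters(mu_min, mu_max, eps):
    """Optimal (omega, tau) of Theorem 3.3 and the minimal measure |I~|."""
    kappa = mu_max / mu_min
    if kappa * eps <= 1.0:
        return 1.0, 1.0 / mu_max, 1.0 - 1.0 / kappa
    return 1.0, eps / mu_min, eps * (kappa - 1.0)   # case 1 < kappa * eps < 2

def pu_iteration_parameters(mu_min, mu_max):
    """Optimal relaxation factors (13) of the PU *iteration* method from [7]:
    omega* = 4 sqrt(kappa) / (sqrt(kappa) + 1)^2, tau* = 1 / sqrt(mu_min mu_max)."""
    rk = math.sqrt(mu_max / mu_min)
    return 4.0 * rk / (rk + 1.0) ** 2, 1.0 / math.sqrt(mu_min * mu_max)
```

Both strategies place the real spectrum in (0, 2) bounded away from the origin, whereas (13) trades this for complex eigenvalues of clustered modulus, which is optimal for the stationary iteration but not necessarily for preconditioning.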

4. Numerical results

In this section we use examples to further examine the effectiveness of the parameterized Uzawa preconditioners for solving the saddle point problems (1), from the aspects of the number of iteration steps (denoted by "IT"), the elapsed CPU time in seconds (denoted by "CPU"), and the norm of the relative residual vectors (denoted by "RES"). Here RES is defined by

  RES := √( ‖b − Ax^(k) − By^(k)‖² + ‖−q + B^T x^(k)‖² ) / √( ‖b‖² + ‖q‖² ),

with (x^(k)T, y^(k)T)^T being the current approximate solution. In our computations, all runs of the Krylov subspace methods are started from the initial vector (x^(0)T, y^(0)T)^T = 0, and terminated if the current iterations satisfy RES ≤ 10^{−7} or if the prescribed number of iterations k_max = 500 is exceeded.

To investigate the influence of ε in (6) and (7) on Strategy A and Strategy B, we select the constant ε in different intervals, namely ε1 ∈ (0, 1/κ), ε2 ∈ [1/κ, 2/(κ+1)], ε3 ∈ [2/(κ+1), 2/κ], where κ is the condition number of the matrix Q^{−1}B^T A^{−1}B. The optimal parameters ω and τ are acquired according to Theorems 3.2 and 3.3 subsequently. We denote the PU preconditioned GMRES methods as PGMRES-A, PGMRES-B and PGMRES-C, since the ω and τ in them are advised by Strategy A, Strategy B and Remark 2, respectively. In the PGMRES-tri method, the tri-diagonal preconditioner

  M = ( A  0 ; −B^T  Q )  (14)

is used. It is a special PU preconditioner with ω = 1 and τ = 1; see [14,24,28]. We compare these methods with GMRES without preconditioning for each example. The first example is generated by running the Incompressible Flow Iterative Solution Software (IFISS) introduced in [22].

Example 4.1. Consider the Stokes equation

  −∇²u + ∇p = 0,  ∇·u = 0

in the square domain Ω = (−1, 1) × (−1, 1), with the natural outflow boundary condition ∂u/∂n − np = s on ∂Ω. We discretize the Stokes equation by the Q2–Q1 approximation and obtain the linear system (1). The approximate matrix Q is the positive definite pressure mass matrix generated by the mixed-element discretization.

Table 1. The corresponding parameters for Example 4.1: for each problem size N and each choice ε = ε1, ε2, ε3, the pairs (ω, τ) of PGMRES-A and PGMRES-B, together with the pair (ω*, τ*) of PGMRES-C. (The numerical entries of this table are not recoverable from the source.)

In Table 1 we list the optimal parameters in Theorems 3.2 and 3.3 and Remark 2 for the different choices of ε. For the different problem scales N and ε_i (i = 1, 2, 3) (the midpoints of the corresponding intervals), the ω and τ of the strategies are quite stable, due to the advisable choice of Q.

In Table 2 we list the numerical results in terms of IT, CPU and RES of the testing methods for Example 4.1 with different sizes of problems. From this table we see that all the PU preconditioned GMRES methods are faster than the GMRES method without preconditioning. In most of the cases, PGMRES-A, PGMRES-B and PGMRES-tri all outperform PGMRES-C. The performance of PGMRES-A and PGMRES-B is comparable with PGMRES-tri when ε = ε3; compared to PGMRES-tri, they have no distinct advantage for this example. The reason is that we choose a very effective Q to approximate B^T A^{−1}B, so that all the eigenvalues of Q^{−1}B^T A^{−1}B are located in [0.2, 2], and the behavior of the preconditioner is not very sensitive to τ.
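Applying M(ω,τ) within GMRES only requires one block forward substitution per iteration. One possible SciPy realization is sketched below; it is an illustrative assumption, not the setup actually used for the experiments, and the gmres keyword is rtol in recent SciPy versions (tol in older ones):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def pu_gmres(A, B, Q, b, q, omega, tau, tol=1e-7, max_it=500):
    """GMRES on the saddle point system (1), preconditioned by M(omega, tau) of (2)."""
    m, n = B.shape
    K = sp.bmat([[A, B], [-B.T, None]], format='csc')   # coefficient matrix of (1)
    rhs = np.concatenate([b, -q])
    lu_A = spla.splu(sp.csc_matrix(A) * (1.0 / omega))  # factor A/omega once
    lu_Q = spla.splu(sp.csc_matrix(Q) * (1.0 / tau))    # factor Q/tau once
    def m_inv(r):
        # solve M z = r by block forward substitution:
        #   (A/omega) u = r_1,   (Q/tau) v = r_2 + B^T u
        u = lu_A.solve(r[:m])
        v = lu_Q.solve(r[m:] + B.T @ u)
        return np.concatenate([u, v])
    M = spla.LinearOperator((m + n, m + n), matvec=m_inv)
    z, info = spla.gmres(K, rhs, M=M, rtol=tol, maxiter=max_it)
    return z[:m], z[m:], info
```

Because M(ω,τ) is block lower triangular, its application costs one solve with A/ω and one with Q/τ, which is what makes the PU preconditioner practical.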

Table 2. IT, CPU and RES with different ε for Example 4.1, for the methods PGMRES-A, PGMRES-B, PGMRES-C, PGMRES-tri and GMRES at different problem sizes N. (Most numerical entries of this table are not recoverable from the source; the surviving RES values are of the order 10^{−8}–10^{−9} for the preconditioned methods and of the order 10^{−5} for GMRES without preconditioning.)

Fig. 1. The spectrum of the coefficient matrix (left) and the preconditioned matrix with the parameters in (13) (right) for Example 4.1 (N = 187).

We plot the eigenvalues of the coefficient matrix and the preconditioned matrices in Figs. 1 and 2. In terms of the spectral distribution of the PU preconditioned method, Strategy B performs better than Strategy A in the case of ε1 = 0.05, and they both outperform PGMRES-C with the parameters ω* and τ*, since there are a number of complex eigenvalues in the curve PU-C. The distribution of the eigenvalues affects the preconditioning performance. This is coincident with the results in Table 2.

Example 4.2 ([7,15]). Consider the augmented linear system (1) in which

  A = ( I⊗T + T⊗I  0 ; 0  I⊗T + T⊗I ) ∈ R^{2p²×2p²},  B = ( I⊗F ; F⊗I ) ∈ R^{2p²×p²},

where

  T = (1/h²) tridiag(−1, 2, −1) ∈ R^{p×p},  F = (1/h) tridiag(−1, 1, 0) ∈ R^{p×p},

with ⊗ being the Kronecker product symbol and h = 1/(p + 1) the discretization mesh size. For this example we have m = 2p² and n = p². Hence, the total number of variables is m + n = 3p².

Fig. 2. The spectrum of the PU preconditioned matrix (N = 187) for Example 4.1. PU-A: the PU preconditioner of Strategy A; PU-B: the PU preconditioner of Strategy B; PU-tri: the PU preconditioner in (14).

Table 3. Choices of the matrix Q.
  Case no. | Matrix Q      | Description
  I        | B^T Â^{−1} B  | Â = tridiag(A)
  II       | B^T Â^{−1} B  | Â = diag(A)

Table 4. The corresponding parameters of Case I for Example 4.2: for each problem size N and each choice ε = ε1, ε2, ε3, the pairs (ω, τ) of PGMRES-A and PGMRES-B, together with the pair (ω*, τ*) of PGMRES-C. (The numerical entries of this table are not recoverable from the source.)

We choose the matrix Q, the approximation to the matrix B^T A^{−1}B, as in the cases listed in Table 3. In Tables 4 and 5 we list the optimal parameters of Strategy A, Strategy B and the optimal relaxation factors given in [7], for various problem sizes (m, n) and approximate matrices Q for Example 4.2. The corresponding numerical results are listed in Tables 6 and 7. In the sense of iteration steps and CPU time, PGMRES-A is faster than the other preconditioned GMRES methods for each case of Q and ε. In the case of ε = ε3, the performance of PGMRES-B is comparable with and better than PGMRES-tri, while in the other cases PGMRES-tri is faster than PGMRES-C. All of these PU preconditioned methods are more efficient than the GMRES method without preconditioning.
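The matrices of Example 4.2 are convenient to assemble with sparse Kronecker products. A sketch under the definitions above (the function name is illustrative):

```python
import scipy.sparse as sp

def example_42(p):
    """A = blkdiag(I⊗T + T⊗I, I⊗T + T⊗I) (m = 2 p^2) and B = [I⊗F; F⊗I] (n = p^2),
    with T = (1/h^2) tridiag(-1, 2, -1), F = (1/h) tridiag(-1, 1, 0), h = 1/(p+1)."""
    h = 1.0 / (p + 1)
    I = sp.identity(p, format='csr')
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(p, p)) / h**2
    F = sp.diags([-1.0, 1.0], [-1, 0], shape=(p, p)) / h
    L = sp.kron(I, T) + sp.kron(T, I)                # I⊗T + T⊗I, size p^2 x p^2
    A = sp.block_diag([L, L], format='csr')          # m = 2 p^2
    B = sp.vstack([sp.kron(I, F), sp.kron(F, I)], format='csr')  # 2 p^2 x p^2
    return A, B
```

With A and B in hand, the Case I and Case II approximations of Table 3 are Q = B^T Â^{−1} B with Â the tridiagonal part or the diagonal part of A, respectively.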

Table 5. The corresponding parameters of Case II for Example 4.2: for each problem size N and each choice ε = ε1, ε2, ε3, the pairs (ω, τ) of PGMRES-A and PGMRES-B, together with the pair (ω*, τ*) of PGMRES-C. (The numerical entries of this table are not recoverable from the source.)

Table 6. IT, CPU and RES of Case I for Example 4.2, for the methods PGMRES-A, PGMRES-B, PGMRES-C and PGMRES-tri at different problem sizes N. (Most numerical entries of this table are not recoverable from the source; the surviving RES values are of the order 10^{−8}.)

From Figs. 3 and 4 we see that the conditioning of the original problem is much worse than that of the preconditioned system. As far as the spectral distribution is concerned, the strategies for the preconditioning optimization are successful.

From these examples we find that, in the case of ε ∈ [2/(κ+1), 2/κ], Strategy A is much more effective than in the other cases. Correspondingly, ω = 1 is obtained, and the difference between Strategy A and Strategy B depends on the choice of τ. The performances of the two strategies are similar when the PU preconditioner does not depend on τ sensitively.

5. Concluding remarks

In recent years, quite a few structured preconditioners have been studied for saddle point problems, e.g., the Hermitian and skew-Hermitian splitting preconditioners in [ ], the constraint preconditioners in [25], the restrictive preconditioners in [5,18], and so on. Initially, the HSS method was used as a stationary iterative method for non-Hermitian positive definite systems in [9,12,15], and the optimal parameters for the stationary iteration are found to accelerate the iteration [8,9,15]. But the work of finding the parameters for optimizing the preconditioning is more difficult [8,20]. In [30], Simoncini and Benzi presented that the eigenvalues of the preconditioned matrix are clustered when the parameter α → 0+. Unfortunately, the near singularity of the preconditioned matrix accompanies the clustering result, so that the Krylov subspace methods converge slowly.

Table 7. IT, CPU and RES of Case II for Example 4.2, for the methods PGMRES-A, PGMRES-B, PGMRES-C, PGMRES-tri and GMRES at different problem sizes N. (Most numerical entries of this table are not recoverable from the source; the surviving RES values are of the order 10^{−8}, with GMRES without preconditioning reaching only about 10^{−6} in one case.)

Fig. 3. The spectrum of the coefficient matrix (left) and the preconditioned matrix with the parameters in (13) (right) for Example 4.2 (N = 108).

In this paper, the parameters of the preconditioners are chosen so that the eigenvalues of the preconditioned matrix have a good distribution. We consider the eigenvalue clustering by compressing the distribution of the eigenvalues as well as by constraining the lower bound of the eigenvalues. The motivation behind the constraining is to ensure that all the eigenvalues of the preconditioned matrix are away from the origin. The strategy may be extended to choose the iteration parameters involved in the HSS [12], the NSS¹ [13], the PSS² [11] and the BTSS³ [11] iteration methods, etc.

¹ NSS is the abbreviation of the term normal and skew-Hermitian splitting.
² PSS is the abbreviation of the term positive definite and skew-Hermitian splitting.
³ BTSS is the abbreviation of the term block triangular and skew-Hermitian splitting.

Fig. 4. The spectrum of the PU preconditioned matrix (N = 108) for Example 4.2. PU-A: the PU preconditioner of Strategy A; PU-B: the PU preconditioner of Strategy B; PU-tri: the PU preconditioner in (14).

References

[1] O. Axelsson, Iterative Solution Methods, Cambridge University Press, Cambridge, 1994.
[2] Z.-Z. Bai, Structured preconditioners for nonsingular matrices of block two-by-two structures, Math. Comput. 75 (2006).
[3] Z.-Z. Bai, X.-B. Chi, Asymptotically optimal successive overrelaxation methods for systems of linear equations, J. Comput. Math. 21 (2003).
[4] Z.-Z. Bai, I.S. Duff, A.J. Wathen, A class of incomplete orthogonal factorization methods. I: Methods and theories, BIT 41 (2001).
[5] Z.-Z. Bai, G.-Q. Li, Restrictively preconditioned conjugate gradient methods for systems of linear equations, IMA J. Numer. Anal. 23 (2003).
[6] Z.-Z. Bai, G.-Q. Li, L.-Z. Lu, Combinative preconditioners of modified incomplete Cholesky factorization and Sherman–Morrison–Woodbury update for self-adjoint elliptic Dirichlet-periodic boundary value problems, J. Comput. Math. 22 (2004).
[7] Z.-Z. Bai, B.N. Parlett, Z.-Q. Wang, On generalized successive overrelaxation methods for augmented linear systems, Numer. Math. 102 (2005) 1–38.
[8] Z.-Z. Bai, G.H. Golub, Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems, IMA J. Numer. Anal. 27 (2007) 1–23.
[9] Z.-Z. Bai, G.H. Golub, C.-K. Li, Optimal parameter in Hermitian and skew-Hermitian splitting method for certain two-by-two block matrices, SIAM J. Sci. Comput. 28 (2006).
[10] Z.-Z. Bai, G.H. Golub, C.-K. Li, Convergence properties of preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite matrices, Math. Comp. 76 (2007).
[11] Z.-Z. Bai, G.H. Golub, L.-Z. Lu, J.-F. Yin, Block triangular and skew-Hermitian splitting methods for positive definite linear systems, SIAM J. Sci. Comput. 26 (2005).
[12] Z.-Z. Bai, G.H. Golub, M.K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl. 24 (2003).
[13] Z.-Z. Bai, G.H. Golub, M.K. Ng, On successive-overrelaxation acceleration of the Hermitian and skew-Hermitian splitting iterations, Numer. Linear Algebra Appl. 14 (2007).
[14] Z.-Z. Bai, M.K. Ng, On inexact preconditioners for nonsymmetric matrices, SIAM J. Sci. Comput. 26 (2005).
[15] Z.-Z. Bai, G.H. Golub, J.-Y. Pan, Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems, Numer. Math. 98 (2004) 1–32.
[16] Z.-Z. Bai, J.-C. Sun, D.-R. Wang, A unified framework for the construction of various matrix multisplitting iterative methods for large sparse systems of linear equations, Comput. Math. Appl. 32 (1996).
[17] Z.-Z. Bai, C.-L. Wang, On the convergence of nonstationary multisplitting two-stage iteration methods for Hermitian positive definite linear systems, J. Comput. Appl. Math. 138 (2002).
[18] Z.-Z. Bai, Z.-Q. Wang, Restrictive preconditioners for conjugate gradient methods for symmetric positive definite linear systems, J. Comput. Appl. Math. 187 (2006) 202–226.
[19] M. Benzi, Preconditioning techniques for large linear systems: A survey, J. Comput. Phys. 182 (2002).
[20] M. Benzi, G.H. Golub, A preconditioner for generalized saddle point problems, SIAM J. Matrix Anal. Appl. 26 (2004) 20–41.
[21] K. Chen, Matrix Preconditioning Techniques and Applications, Cambridge University Press, Cambridge, 2005.
[22] H.C. Elman, D.J. Silvester, A.J. Wathen, Finite Elements and Fast Iterative Solvers: With Applications in Incompressible Fluid Dynamics, Oxford University Press, Oxford, 2005.
[23] A. Greenbaum, Iterative Methods for Solving Linear Systems, SIAM, Philadelphia, 1997.
[24] I.C.F. Ipsen, A note on preconditioning nonsymmetric matrices, SIAM J. Sci. Comput. 23 (2001).
[25] C. Keller, N.I.M. Gould, A.J. Wathen, Constraint preconditioning for indefinite linear systems, SIAM J. Matrix Anal. Appl. 21 (2000).
[26] D. Loghin, A.J. Wathen, Schur complement preconditioning for elliptic systems of partial differential equations, Numer. Linear Algebra Appl. 10 (2003).
[27] D. Loghin, A.J. Wathen, Analysis of preconditioners for saddle-point problems, SIAM J. Sci. Comput. 25 (2004).
[28] M.F. Murphy, G.H. Golub, A.J. Wathen, A note on preconditioning for indefinite linear systems, SIAM J. Sci. Comput. 21 (2000).
[29] D.J. Silvester, A.J. Wathen, Fast iterative solution of stabilised Stokes systems. Part II: Using general block preconditioners, SIAM J. Numer. Anal. 31 (1994).
[30] V. Simoncini, M. Benzi, Spectral properties of the Hermitian and skew-Hermitian splitting preconditioner for saddle point problems, SIAM J. Matrix Anal. Appl. 26 (2004).
[31] H.A. van der Vorst, Iterative Krylov Methods for Large Linear Systems, Cambridge University Press, Cambridge, 2003.
[32] R.S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1962.


The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A. AMSC/CMSC 661 Scientific Computing II Spring 2005 Solution of Sparse Linear Systems Part 2: Iterative methods Dianne P. O Leary c 2005 Solving Sparse Linear Systems: Iterative methods The plan: Iterative

More information

The upper Jacobi and upper Gauss Seidel type iterative methods for preconditioned linear systems

The upper Jacobi and upper Gauss Seidel type iterative methods for preconditioned linear systems Applied Mathematics Letters 19 (2006) 1029 1036 wwwelseviercom/locate/aml The upper Jacobi upper Gauss Seidel type iterative methods for preconditioned linear systems Zhuan-De Wang, Ting-Zhu Huang School

More information

Available online: 19 Oct To link to this article:

Available online: 19 Oct To link to this article: This article was downloaded by: [Academy of Mathematics and System Sciences] On: 11 April 01, At: 00:11 Publisher: Taylor & Francis Informa Ltd Registered in England and Wales Registered Number: 107954

More information

On the Preconditioning of the Block Tridiagonal Linear System of Equations

On the Preconditioning of the Block Tridiagonal Linear System of Equations On the Preconditioning of the Block Tridiagonal Linear System of Equations Davod Khojasteh Salkuyeh Department of Mathematics, University of Mohaghegh Ardabili, PO Box 179, Ardabil, Iran E-mail: khojaste@umaacir

More information

A generalization of the Gauss-Seidel iteration method for solving absolute value equations

A generalization of the Gauss-Seidel iteration method for solving absolute value equations A generalization of the Gauss-Seidel iteration method for solving absolute value equations Vahid Edalatpour, Davod Hezari and Davod Khojasteh Salkuyeh Faculty of Mathematical Sciences, University of Guilan,

More information

Jae Heon Yun and Yu Du Han

Jae Heon Yun and Yu Du Han Bull. Korean Math. Soc. 39 (2002), No. 3, pp. 495 509 MODIFIED INCOMPLETE CHOLESKY FACTORIZATION PRECONDITIONERS FOR A SYMMETRIC POSITIVE DEFINITE MATRIX Jae Heon Yun and Yu Du Han Abstract. We propose

More information

Numerical Methods I Non-Square and Sparse Linear Systems

Numerical Methods I Non-Square and Sparse Linear Systems Numerical Methods I Non-Square and Sparse Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 25th, 2014 A. Donev (Courant

More information

Indefinite Preconditioners for PDE-constrained optimization problems. V. Simoncini

Indefinite Preconditioners for PDE-constrained optimization problems. V. Simoncini Indefinite Preconditioners for PDE-constrained optimization problems V. Simoncini Dipartimento di Matematica, Università di Bologna, Italy valeria.simoncini@unibo.it Partly joint work with Debora Sesana,

More information

c 2004 Society for Industrial and Applied Mathematics

c 2004 Society for Industrial and Applied Mathematics SIAM J. MATRIX ANAL. APPL. Vol. 6, No., pp. 377 389 c 004 Society for Industrial and Applied Mathematics SPECTRAL PROPERTIES OF THE HERMITIAN AND SKEW-HERMITIAN SPLITTING PRECONDITIONER FOR SADDLE POINT

More information

4.6 Iterative Solvers for Linear Systems

4.6 Iterative Solvers for Linear Systems 4.6 Iterative Solvers for Linear Systems Why use iterative methods? Virtually all direct methods for solving Ax = b require O(n 3 ) floating point operations. In practical applications the matrix A often

More information

In order to solve the linear system KL M N when K is nonsymmetric, we can solve the equivalent system

In order to solve the linear system KL M N when K is nonsymmetric, we can solve the equivalent system !"#$% "&!#' (%)!#" *# %)%(! #! %)!#" +, %"!"#$ %*&%! $#&*! *# %)%! -. -/ 0 -. 12 "**3! * $!#%+,!2!#% 44" #% &#33 # 4"!#" "%! "5"#!!#6 -. - #% " 7% "3#!#3! - + 87&2! * $!#% 44" ) 3( $! # % %#!!#%+ 9332!

More information

OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative methods ffl Krylov subspace methods ffl Preconditioning techniques: Iterative methods ILU

OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative methods ffl Krylov subspace methods ffl Preconditioning techniques: Iterative methods ILU Preconditioning Techniques for Solving Large Sparse Linear Systems Arnold Reusken Institut für Geometrie und Praktische Mathematik RWTH-Aachen OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative

More information

A MULTIGRID ALGORITHM FOR. Richard E. Ewing and Jian Shen. Institute for Scientic Computation. Texas A&M University. College Station, Texas SUMMARY

A MULTIGRID ALGORITHM FOR. Richard E. Ewing and Jian Shen. Institute for Scientic Computation. Texas A&M University. College Station, Texas SUMMARY A MULTIGRID ALGORITHM FOR THE CELL-CENTERED FINITE DIFFERENCE SCHEME Richard E. Ewing and Jian Shen Institute for Scientic Computation Texas A&M University College Station, Texas SUMMARY In this article,

More information

FEM and Sparse Linear System Solving

FEM and Sparse Linear System Solving FEM & sparse system solving, Lecture 7, Nov 3, 2017 1/46 Lecture 7, Nov 3, 2015: Introduction to Iterative Solvers: Stationary Methods http://people.inf.ethz.ch/arbenz/fem16 Peter Arbenz Computer Science

More information

CONVERGENCE BOUNDS FOR PRECONDITIONED GMRES USING ELEMENT-BY-ELEMENT ESTIMATES OF THE FIELD OF VALUES

CONVERGENCE BOUNDS FOR PRECONDITIONED GMRES USING ELEMENT-BY-ELEMENT ESTIMATES OF THE FIELD OF VALUES European Conference on Computational Fluid Dynamics ECCOMAS CFD 2006 P. Wesseling, E. Oñate and J. Périaux (Eds) c TU Delft, The Netherlands, 2006 CONVERGENCE BOUNDS FOR PRECONDITIONED GMRES USING ELEMENT-BY-ELEMENT

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 19: Computing the SVD; Sparse Linear Systems Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

Linear algebra issues in Interior Point methods for bound-constrained least-squares problems

Linear algebra issues in Interior Point methods for bound-constrained least-squares problems Linear algebra issues in Interior Point methods for bound-constrained least-squares problems Stefania Bellavia Dipartimento di Energetica S. Stecco Università degli Studi di Firenze Joint work with Jacek

More information

A MODIFIED HSS ITERATION METHOD FOR SOLVING THE COMPLEX LINEAR MATRIX EQUATION AXB = C *

A MODIFIED HSS ITERATION METHOD FOR SOLVING THE COMPLEX LINEAR MATRIX EQUATION AXB = C * Journal of Computational Mathematics Vol.34, No.4, 2016, 437 450. http://www.global-sci.org/jcm doi:10.4208/jcm.1601-m2015-0416 A MODIFIED HSS ITERATION METHOD FOR SOLVING THE COMPLEX LINEAR MATRIX EQUATION

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT 13-10 Comparison of some preconditioners for the incompressible Navier-Stokes equations X. He and C. Vuik ISSN 1389-6520 Reports of the Delft Institute of Applied

More information

On deflation and singular symmetric positive semi-definite matrices

On deflation and singular symmetric positive semi-definite matrices Journal of Computational and Applied Mathematics 206 (2007) 603 614 www.elsevier.com/locate/cam On deflation and singular symmetric positive semi-definite matrices J.M. Tang, C. Vuik Faculty of Electrical

More information

On the Superlinear Convergence of MINRES. Valeria Simoncini and Daniel B. Szyld. Report January 2012

On the Superlinear Convergence of MINRES. Valeria Simoncini and Daniel B. Szyld. Report January 2012 On the Superlinear Convergence of MINRES Valeria Simoncini and Daniel B. Szyld Report 12-01-11 January 2012 This report is available in the World Wide Web at http://www.math.temple.edu/~szyld 0 Chapter

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 21: Sensitivity of Eigenvalues and Eigenvectors; Conjugate Gradient Method Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis

More information

SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS. Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA

SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS. Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA 1 SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA 2 OUTLINE Sparse matrix storage format Basic factorization

More information

A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation

A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation Tao Zhao 1, Feng-Nan Hwang 2 and Xiao-Chuan Cai 3 Abstract In this paper, we develop an overlapping domain decomposition

More information

Iterative Methods for Smooth Objective Functions

Iterative Methods for Smooth Objective Functions Optimization Iterative Methods for Smooth Objective Functions Quadratic Objective Functions Stationary Iterative Methods (first/second order) Steepest Descent Method Landweber/Projected Landweber Methods

More information

arxiv: v1 [math.na] 1 Sep 2018

arxiv: v1 [math.na] 1 Sep 2018 On the perturbation of an L -orthogonal projection Xuefeng Xu arxiv:18090000v1 [mathna] 1 Sep 018 September 5 018 Abstract The L -orthogonal projection is an important mathematical tool in scientific computing

More information

Key words. inf-sup constant, iterative solvers, preconditioning, saddle point problems

Key words. inf-sup constant, iterative solvers, preconditioning, saddle point problems NATURAL PRECONDITIONING AND ITERATIVE METHODS FOR SADDLE POINT SYSTEMS JENNIFER PESTANA AND ANDREW J. WATHEN Abstract. The solution of quadratic or locally quadratic extremum problems subject to linear(ized)

More information

Generalized AOR Method for Solving System of Linear Equations. Davod Khojasteh Salkuyeh. Department of Mathematics, University of Mohaghegh Ardabili,

Generalized AOR Method for Solving System of Linear Equations. Davod Khojasteh Salkuyeh. Department of Mathematics, University of Mohaghegh Ardabili, Australian Journal of Basic and Applied Sciences, 5(3): 35-358, 20 ISSN 99-878 Generalized AOR Method for Solving Syste of Linear Equations Davod Khojasteh Salkuyeh Departent of Matheatics, University

More information

IN this paper, we investigate spectral properties of block

IN this paper, we investigate spectral properties of block On the Eigenvalues Distribution of Preconditioned Block wo-by-two Matrix Mu-Zheng Zhu and a-e Qi Abstract he spectral properties of a class of block matrix are studied, which arise in the numercial solutions

More information

APPLIED NUMERICAL LINEAR ALGEBRA

APPLIED NUMERICAL LINEAR ALGEBRA APPLIED NUMERICAL LINEAR ALGEBRA James W. Demmel University of California Berkeley, California Society for Industrial and Applied Mathematics Philadelphia Contents Preface 1 Introduction 1 1.1 Basic Notation

More information

Finding Rightmost Eigenvalues of Large, Sparse, Nonsymmetric Parameterized Eigenvalue Problems

Finding Rightmost Eigenvalues of Large, Sparse, Nonsymmetric Parameterized Eigenvalue Problems Finding Rightmost Eigenvalues of Large, Sparse, Nonsymmetric Parameterized Eigenvalue Problems AMSC 663-664 Final Report Minghao Wu AMSC Program mwu@math.umd.edu Dr. Howard Elman Department of Computer

More information

A Note on Inverse Iteration

A Note on Inverse Iteration A Note on Inverse Iteration Klaus Neymeyr Universität Rostock, Fachbereich Mathematik, Universitätsplatz 1, 18051 Rostock, Germany; SUMMARY Inverse iteration, if applied to a symmetric positive definite

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Dense Matrices for Biofluids Applications

Dense Matrices for Biofluids Applications Dense Matrices for Biofluids Applications by Liwei Chen A Project Report Submitted to the Faculty of the WORCESTER POLYTECHNIC INSTITUTE In partial fulfillment of the requirements for the Degree of Master

More information

ON THE ROLE OF COMMUTATOR ARGUMENTS IN THE DEVELOPMENT OF PARAMETER-ROBUST PRECONDITIONERS FOR STOKES CONTROL PROBLEMS

ON THE ROLE OF COMMUTATOR ARGUMENTS IN THE DEVELOPMENT OF PARAMETER-ROBUST PRECONDITIONERS FOR STOKES CONTROL PROBLEMS ON THE ROLE OF COUTATOR ARGUENTS IN THE DEVELOPENT OF PARAETER-ROBUST PRECONDITIONERS FOR STOKES CONTROL PROBLES JOHN W. PEARSON Abstract. The development of preconditioners for PDE-constrained optimization

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 3: Iterative Methods PD

More information

Preface to the Second Edition. Preface to the First Edition

Preface to the Second Edition. Preface to the First Edition n page v Preface to the Second Edition Preface to the First Edition xiii xvii 1 Background in Linear Algebra 1 1.1 Matrices................................. 1 1.2 Square Matrices and Eigenvalues....................

More information

Spectral Properties of Saddle Point Linear Systems and Relations to Iterative Solvers Part I: Spectral Properties. V. Simoncini

Spectral Properties of Saddle Point Linear Systems and Relations to Iterative Solvers Part I: Spectral Properties. V. Simoncini Spectral Properties of Saddle Point Linear Systems and Relations to Iterative Solvers Part I: Spectral Properties V. Simoncini Dipartimento di Matematica, Università di ologna valeria@dm.unibo.it 1 Outline

More information

Improved Newton s method with exact line searches to solve quadratic matrix equation

Improved Newton s method with exact line searches to solve quadratic matrix equation Journal of Computational and Applied Mathematics 222 (2008) 645 654 wwwelseviercom/locate/cam Improved Newton s method with exact line searches to solve quadratic matrix equation Jian-hui Long, Xi-yan

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 24: Preconditioning and Multigrid Solver Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 5 Preconditioning Motivation:

More information

EIGIFP: A MATLAB Program for Solving Large Symmetric Generalized Eigenvalue Problems

EIGIFP: A MATLAB Program for Solving Large Symmetric Generalized Eigenvalue Problems EIGIFP: A MATLAB Program for Solving Large Symmetric Generalized Eigenvalue Problems JAMES H. MONEY and QIANG YE UNIVERSITY OF KENTUCKY eigifp is a MATLAB program for computing a few extreme eigenvalues

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

ANALYSIS OF AUGMENTED LAGRANGIAN-BASED PRECONDITIONERS FOR THE STEADY INCOMPRESSIBLE NAVIER-STOKES EQUATIONS

ANALYSIS OF AUGMENTED LAGRANGIAN-BASED PRECONDITIONERS FOR THE STEADY INCOMPRESSIBLE NAVIER-STOKES EQUATIONS ANALYSIS OF AUGMENTED LAGRANGIAN-BASED PRECONDITIONERS FOR THE STEADY INCOMPRESSIBLE NAVIER-STOKES EQUATIONS MICHELE BENZI AND ZHEN WANG Abstract. We analyze a class of modified augmented Lagrangian-based

More information