Block triangular preconditioner for static Maxwell equations*


Volume 30, N. 3, pp. 589-612, 2011
Copyright © 2011 SBMAC
ISSN 0101-8205
www.scielo.br/cam

Block triangular preconditioner for static Maxwell equations*

SHI-LIANG WU(1), TING-ZHU HUANG(2) and LIANG LI(2)

(1) School of Mathematics and Statistics, Anyang Normal University, Anyang, Henan, 455000, PR China
(2) School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan, 611731, PR China
E-mails: wushiliang1999@126.com / tzhuang@uestc.edu.cn, tingzhuhuang@126.com

Abstract. In this paper, we explore block triangular preconditioning techniques applied to the iterative solution of the saddle point linear systems arising from the discretized Maxwell equations. Theoretical analysis shows that all the eigenvalues of the preconditioned matrix are strongly clustered. Numerical experiments are given to demonstrate the efficiency of the presented preconditioner.

Mathematical subject classification: 65F10.

Key words: Maxwell equations, preconditioner, Krylov subspace method, saddle point system.

1 Introduction

We consider block triangular preconditioners for linear systems arising from the finite element discretization of the following static Maxwell equations: find u and p such that

    ∇×∇×u + ∇p = f   in Ω,
    ∇·u = 0          in Ω,        (1.1)
    u×n = 0          on ∂Ω,
    p = 0            on ∂Ω,

#CAM-45/1. Received: 14/VIII/2010. Accepted: 1/VII/2011.
*This research was supported by the 973 Program (2007CB311002) and the NSFC Tianyuan Mathematics Youth Fund (1164, 11683).

where Ω ⊂ R² is a simply connected domain with connected boundary ∂Ω, and n represents the outward unit normal vector on ∂Ω; u is the vector field, p is the Lagrange multiplier and the datum f is a given generic source.

There are a large variety of schemes for solving the Maxwell equations, such as the edge finite element method [1, 2, 6], the domain decomposition method [5, 9], the algebraic multigrid method [3] and so on. Using a finite element discretization with Nédélec elements of the first kind [4, 11, 7] for the approximation of the vector field and standard nodal elements for the multiplier, we obtain the approximate solution of (1.1) by solving the following saddle point linear system:

    𝒜x ≡ [ A   B^T ] [ u ]   [ g ]
         [ B    0  ] [ p ] = [ 0 ] ≡ b,        (1.2)

where u ∈ R^n and p ∈ R^m are the coefficient vectors of the finite element approximations, and g ∈ R^n is the load vector associated with the datum f. The matrix A ∈ R^{n×n}, corresponding to the discrete curl-curl operator, is symmetric positive semidefinite with nullity m, and B ∈ R^{m×n} is a discrete divergence operator with rank(B) = m. One can see [4, 7, 11] for details.

The form of (1.2) frequently occurs in a large number of applications, such as the (linearized) Navier-Stokes equations [21], the time-harmonic Maxwell equations [7, 8, 10], and the linear programming (LP) and quadratic programming (QP) problems [17, 20]. At present, there exist four main kinds of preconditioners for the saddle point linear system (1.2): block diagonal preconditioners [22, 23, 24, 25], block triangular preconditioners [15, 16, 26, 27, 28, 37], constraint preconditioners [29, 30, 31, 32, 33] and Hermitian and skew-Hermitian splitting (HSS) preconditioners [34]. One can see [12] for a general discussion.

Recently, Rees and Greif [17] presented the following triangular preconditioner:

    R_k = [ A + B^T W^{-1} B   k B^T ]
          [         0            W   ],        (1.3)

where W is a symmetric positive definite matrix and k ≠ 0. It was shown that if A is symmetric positive semidefinite with nullity q (q ≤ m), then the preconditioned matrix R_k^{-1}𝒜 has five distinct eigenvalues: 1 with algebraic multiplicity n − m, (k ± √(k² + 4))/2 each with algebraic multiplicity q, and

    ((kη − 1) ± √((kη − 1)² + 4η(1 + η))) / (2(1 + η))

each with algebraic multiplicity m − q, where η > 0 ranges over the generalized eigenvalues of ηAx = B^T W^{-1} Bx. Obviously, if m = q, the preconditioned matrix R_k^{-1}𝒜 has three distinct eigenvalues: 1 and (k ± √(k² + 4))/2. This is favorable to Krylov subspace methods, which rely on matrix-vector products and whose convergence is governed by the number of distinct eigenvalues of the preconditioned matrix [13, 19]. It is a well-known fact that preconditioning attempts to improve the spectral properties of the coefficient matrix so as to improve the rate of convergence of Krylov subspace methods [14].

In the light of this preconditioning idea, this paper is devoted to new block triangular preconditioners for the linear system (1.2). It is shown that, in contrast to the block triangular preconditioner R_k, all the eigenvalues of the proposed preconditioned matrices are more strongly clustered. Numerical experiments show that the new preconditioners are slightly more efficient than the preconditioner R_k.

The remainder of this paper is organized as follows. In Section 2, the new block triangular preconditioners are presented and their algebraic properties are derived in detail. In Section 3, a single column nonzero (1,2) block preconditioner is presented. In Section 4, numerical experiments are presented. Finally, in Section 5 some conclusions are drawn.

2 Block triangular preconditioner

To study the block triangular preconditioners for solving (1.2) conveniently, we consider the following saddle point linear system:

    𝒜x ≡ [ A   B^T ] [ u ]
         [ B    0  ] [ p ] = b,        (2.1)

where A ∈ R^{n×n} is assumed to be symmetric positive semidefinite with high nullity and B ∈ R^{m×n} (m ≤ n). We assume that the coefficient matrix 𝒜 is nonsingular, from which it follows that

    rank(B) = m   and   null(A) ∩ null(B) = {0}.        (2.2)
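The assumptions above — A symmetric positive semidefinite with nullity m, B of full row rank, and null(A) ∩ null(B) = {0}, which together make the saddle point matrix 𝒜 nonsingular — can be checked on a small random test problem. The following sketch is our own illustration, not the authors' code; the sizes n = 12 and m = 4 are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 12, 4

# A = C^T C with C of size (n-m) x n: symmetric positive semidefinite with nullity m.
C = rng.standard_normal((n - m, n))
A = C.T @ C

# B: a full-row-rank m x n stand-in for the discrete divergence operator.
B = rng.standard_normal((m, n))

# Saddle point matrix [[A, B^T], [B, 0]] of order n + m.
K = np.block([[A, B.T], [B, np.zeros((m, m))]])

print(np.linalg.matrix_rank(A))   # n - m, i.e. nullity m
print(np.linalg.matrix_rank(B))   # m: full row rank
print(np.linalg.matrix_rank(K))   # n + m: the saddle point matrix is nonsingular
```

For generic random data the two null spaces intersect only in {0}, so K has full rank even though A itself is singular.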

Now we are concerned with the following block triangular matrix as a preconditioner:

    H_{U,W} = [ A + B^T U^{-1} B   B^T ]
              [         0           W  ],

where U, W ∈ R^{m×m} are symmetric positive definite matrices.

Proposition 2.1. Let {x_i}_{i=1}^{n−m} be a basis of the null space of B. Then the vectors (x_i, 0) are n − m linearly independent eigenvectors of H_{U,W}^{-1}𝒜 with eigenvalue 1.

Proof. The eigenvalue problem of H_{U,W}^{-1}𝒜 is

    [ A   B^T ] [ x ]       [ A + B^T U^{-1} B   B^T ] [ x ]
    [ B    0  ] [ y ] = λ · [         0           W  ] [ y ].

Then

    Ax + B^T y = λ(A + B^T U^{-1} B)x + λB^T y,
    Bx = λWy.

From the nonsingularity of 𝒜 it follows that λ ≠ 0 and x ≠ 0. Substituting y = λ^{-1} W^{-1} Bx into the first block row, we get

    λAx + (1 − λ)B^T W^{-1} Bx = λ²(A + B^T U^{-1} B)x.        (2.3)

Assume that x = x_i ≠ 0 is a null vector of B. Then (2.3) simplifies into (λ² − λ)Ax_i = 0. Since a nonzero null vector of B cannot be a null vector of A by (2.2) and 𝒜 is nonsingular, the following natural property is derived:

    ⟨Ax, x⟩ > 0   for all 0 ≠ x ∈ ker(B).

It follows that Ax_i ≠ 0 and λ = 1. Since Bx_i = 0, it follows that y = 0, and λ = 1 is an eigenvalue of H_{U,W}^{-1}𝒜 with algebraic multiplicity (at least) n − m, whose associated eigenvectors are (x_i, 0), i = 1, 2, ..., n − m.
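Proposition 2.1 can be verified numerically. The sketch below is an illustration on a toy random problem with arbitrary SPD weights U and W (not from the paper); it counts the eigenvalues of H_{U,W}^{-1}𝒜 that equal 1:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 12, 4
C = rng.standard_normal((n - m, n))
A = C.T @ C                      # symmetric positive semidefinite, nullity m
B = rng.standard_normal((m, n))  # full row rank
K = np.block([[A, B.T], [B, np.zeros((m, m))]])

def rand_spd(k):
    """A well-conditioned random symmetric positive definite k x k matrix."""
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

U, W = rand_spd(m), rand_spd(m)

# Block triangular preconditioner H_{U,W} = [[A + B^T U^{-1} B, B^T], [0, W]].
H = np.block([[A + B.T @ np.linalg.solve(U, B), B.T],
              [np.zeros((m, n)), W]])

eigs = np.linalg.eigvals(np.linalg.solve(H, K))
n_ones = int(np.sum(np.abs(eigs - 1) < 1e-4))
print(n_ones)   # at least n - m unit eigenvalues, as Proposition 2.1 predicts
```

The count is at least n − m = 8 here regardless of how U and W are chosen, which is exactly the statement of the proposition.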

Remark 2.1. From Proposition 2.1, it is easy to see that H_{U,W}^{-1}𝒜 has at least n − m eigenvalues equal to 1, regardless of U and W. A stronger clustering of the eigenvalues can be obtained by choosing the two matrices in a specific way, such as U = W. To this end, we consider the following indefinite block triangular matrix as a preconditioner:

    H_s = [ A + s B^T W^{-1} B   (1 + s)B^T ]
          [         0               −W      ],

where W ∈ R^{m×m} is a symmetric positive definite matrix and s > 0. The next lemma shows that all the eigenvalues of the preconditioned matrix H_s^{-1}𝒜 are strongly clustered; its proof is similar to that of Theorem 2.4 in [36].

Lemma 2.1. Suppose that A is symmetric positive semidefinite with nullity r (r ≤ m), B has full rank and λ is an eigenvalue of H_s^{-1}𝒜 with eigenvector (v, q). Then λ = 1 is an eigenvalue of H_s^{-1}𝒜 with multiplicity n, and λ = 1/s is an eigenvalue with multiplicity r. The remaining m − r eigenvalues are

    λ = μ/(sμ + 1),

where the μ are the nonzero generalized eigenvalues of

    μAv = B^T W^{-1} Bv.        (2.4)

Assume, in addition, that {x_i}_{i=1}^{r} is a basis of the null space of A, {y_i}_{i=1}^{n−m} is a basis of the null space of B, and {z_i}_{i=1}^{m−r} is a set of linearly independent vectors that complete null(A) ⊕ null(B) to a basis of R^n. Then a set of linearly independent eigenvectors corresponding to λ = 1 can be found: the n − m vectors (y_i, 0), the r vectors (x_i, −W^{-1}Bx_i) and the m − r vectors (z_i, −W^{-1}Bz_i). The r vectors (x_i, −sW^{-1}Bx_i) are eigenvectors associated with λ = 1/s.

Proof. Let λ be an eigenvalue of H_s^{-1}𝒜 with eigenvector (v, q). Then

    [ A   B^T ] [ v ]       [ A + s B^T W^{-1} B   (1 + s)B^T ] [ v ]
    [ B    0  ] [ q ] = λ · [         0               −W      ] [ q ],

which can be rewritten as

    Av + B^T q = λ(A + s B^T W^{-1} B)v + (1 + s)λB^T q,        (2.5)
    Bv = −λWq.        (2.6)

Since 𝒜 is nonsingular, it is not difficult to get that λ ≠ 0 and v ≠ 0. By (2.6), we get q = −λ^{-1} W^{-1} Bv. Substituting it into (2.5) yields

    (λ² − λ)Av = (−sλ² + (1 + s)λ − 1) B^T W^{-1} Bv.        (2.7)

If λ = 1, then (2.7) is satisfied for an arbitrary nonzero vector v ∈ R^n, and hence (v, −W^{-1}Bv) is an eigenvector of H_s^{-1}𝒜. If x ∈ null(A), then from (2.7) we obtain

    (λ − 1)(sλ − 1)B^T W^{-1} Bx = 0,

from which it follows that λ = 1 and λ = 1/s are eigenvalues associated with (x, −W^{-1}Bx) and (x, −sW^{-1}Bx), respectively. Assume that λ ≠ 1. Combining (2.4) and (2.7) yields

    λ² − λ = μ(−sλ² + (1 + s)λ − 1).

It is easy to see that the remaining m − r eigenvalues are

    λ = μ/(sμ + 1).        (2.8)

A specific set of linearly independent eigenvectors for λ = 1 and λ = 1/s can be readily found. From (2.2), it is not difficult to see that (y_i, 0), (x_i, −W^{-1}Bx_i) and (z_i, −W^{-1}Bz_i) are eigenvectors associated with λ = 1. The r vectors (x_i, −sW^{-1}Bx_i) are eigenvectors associated with λ = 1/s.

Remark 2.2. Formula (2.8) gives the eigenvalues explicitly in terms of the generalized eigenvalues of (2.4), and they become tightly clustered as μ → ∞. To illustrate this, we examine the case s = 1, i.e., H₁. We have λ = 1 with multiplicity n + r. The remaining m − r eigenvalues are λ = μ/(μ + 1). Since λ is a strictly increasing function of μ on (0, ∞), it is easy to see that the remaining eigenvalues tend to 1 as μ → ∞. In [17], the authors considered k = 1, i.e., R₁, and obtained five distinct eigenvalues: λ = 1 (with multiplicity n − m), λ± = (1 ± √5)/2 (each with multiplicity q), while the remaining eigenvalues are

    λ± = (1 ± √(1 + 4μ/(1 + μ)))/2   (μ > 0),

which lie in the intervals ((1 − √5)/2, 0) and (1, (1 + √5)/2). Obviously, the eigenvalues of our preconditioned matrix are more clustered than those stated in [17]. That is, the preconditioner H₁ is slightly better than R₁ from the viewpoint of eigenvalue clustering. Note, however, that a large μ may lead to ill-conditioning of H₁. Golub et al. [18] considered minimizing the condition number of the (1,1) block of H₁. The simplest choice is W^{-1} = γI (γ > 0), for which all the eigenvalues that are not equal to 1 are

    λ = γδ/(1 + γδ),

where δ is a positive generalized eigenvalue of δAx = B^T Bx. Obviously, the parameter γ should be chosen large enough that the eigenvalues are strongly clustered, but not so large that the (2,2) block of H₁ becomes nearly singular.

From (2.2), it follows that the nullity of A is at most m. Lemma 2.1 shows that the higher the nullity is, the more strongly the eigenvalues are clustered. Combining Lemma 2.1 with (1.2), the following theorem is obtained:

Theorem 2.1. Suppose that A is symmetric positive semidefinite with nullity m. Then the preconditioned matrix H_s^{-1}𝒜 has precisely two eigenvalues: λ = 1, of multiplicity n, and λ = 1/s, of multiplicity m. Moreover, if s = 1, then the preconditioned matrix H₁^{-1}𝒜 has precisely one eigenvalue: λ = 1, with multiplicity n + m.

Remark 2.3. An important consequence of Theorem 2.1 is that the preconditioned matrix H_s^{-1}𝒜 has a minimal polynomial of degree at most 2. Therefore, a Krylov subspace method like GMRES applied to a preconditioned linear system with coefficient matrix H_s^{-1}𝒜 converges in 2 iterations or less, in exact arithmetic [38]. By the above discussion, the optimal choice of the parameter s of the preconditioner H_s is s = 1. For the preconditioner R_k, by contrast, it is very difficult to determine the optimal parameter k.

Next, we consider the following positive definite block triangular preconditioner:

    T_h = [ A + h B^T W^{-1} B   (1 − h)B^T ]
          [         0                W      ],

where W ∈ R^{m×m} is a symmetric positive definite matrix and h > 0. Similarly, we can get the following results.

Lemma 2.2. Suppose that A is symmetric positive semidefinite with nullity r (r ≤ m), B has full rank and λ is an eigenvalue of T_h^{-1}𝒜 with eigenvector (v, q). Then λ = 1 is an eigenvalue of T_h^{-1}𝒜 with multiplicity n, and λ = −1/h is an eigenvalue with multiplicity r. The remaining m − r eigenvalues are

    λ = −μ/(hμ + 1),        (2.9)

where the μ are defined by (2.4). In addition, let {x_i}_{i=1}^{r}, {y_i}_{i=1}^{n−m} and {z_i}_{i=1}^{m−r} be defined as in Lemma 2.1. Then a set of linearly independent eigenvectors corresponding to λ = 1 can be found: the n − m vectors (y_i, 0), the r vectors (x_i, W^{-1}Bx_i) and the m − r vectors (z_i, W^{-1}Bz_i). The r vectors (x_i, −hW^{-1}Bx_i) are eigenvectors associated with λ = −1/h.

Theorem 2.2. Suppose that A is symmetric positive semidefinite with nullity m. Then the preconditioned matrix T_h^{-1}𝒜 has precisely two eigenvalues: λ = 1, of multiplicity n, and λ = −1/h, of multiplicity m. Moreover, if h = 1, the preconditioned matrix T₁^{-1}𝒜 has precisely two eigenvalues: λ = 1, of multiplicity n, and λ = −1, of multiplicity m.
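Theorem 2.1 and Remark 2.3 are easy to check numerically for s = 1. The sketch below uses our own toy random problem with W = I and nullity of A exactly m (not the authors' finite element matrices): all n + m eigenvalues of H₁^{-1}𝒜 cluster at 1, and (H₁^{-1}𝒜 − I)² vanishes, which is the minimal-polynomial property behind the two-iteration GMRES bound:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 12, 4
C = rng.standard_normal((n - m, n))
A = C.T @ C                      # symmetric positive semidefinite, nullity exactly m
B = rng.standard_normal((m, n))  # full row rank
K = np.block([[A, B.T], [B, np.zeros((m, m))]])

W = np.eye(m)                    # any SPD W works; the identity keeps the sketch short
s = 1.0
# H_s = [[A + s B^T W^{-1} B, (1+s) B^T], [0, -W]]; here s = 1.
H1 = np.block([[A + s * (B.T @ np.linalg.solve(W, B)), (1 + s) * B.T],
               [np.zeros((m, n)), -W]])

M = np.linalg.solve(H1, K)       # the preconditioned matrix H_1^{-1} A
eigs = np.linalg.eigvals(M)
print(int(np.sum(np.abs(eigs - 1) < 1e-3)))   # all n + m eigenvalues cluster at 1

# Minimal polynomial of degree <= 2: (H_1^{-1} A - I)^2 = 0 up to roundoff,
# so GMRES converges in at most two iterations in exact arithmetic.
E = M - np.eye(n + m)
print(np.linalg.norm(E @ E) < 1e-6)
```

The loose tolerance on the eigenvalues is deliberate: the single eigenvalue 1 is defective, so the computed eigenvalues of the Jordan blocks split by roughly the square root of the rounding level.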

From Theorem 2.2, it is not difficult to see that the optimal choice of the parameter h (> 0) of the preconditioner T_h is h = 1.

3 A single column nonzero (1,2) block preconditioner

We consider the following single column nonzero (1,2) block preconditioner:

    T̃ = [ A + B^T W̃ B   −b_i e_i^T ]
        [      0             W      ],

where b_i denotes column i of B^T, e_i is the i-th column of the m × m identity matrix, W = γI (γ > 0) and W̃ = (1/γ)I + (1/γ)e_i e_i^T. It is not difficult to see that A + B^T W̃ B is nonsingular, because A is symmetric positive semidefinite and W̃ is symmetric positive definite. The spectral properties of T̃^{-1}𝒜 are presented in the following theorem:

Theorem 3.1. The preconditioned matrix T̃^{-1}𝒜 has the eigenvalue λ = 1 with multiplicity n and the eigenvalue λ = −1 with multiplicity m − 1. Corresponding eigenvectors can be explicitly found in terms of the null space and column space of A.

Proof. Let λ be any eigenvalue of T̃^{-1}𝒜, and z = (x, y) the corresponding eigenvector. Then T̃^{-1}𝒜z = λz, i.e.,

    [ A   B^T ] [ x ]       [ A + B^T W̃ B   −b_i e_i^T ] [ x ]
    [ B    0  ] [ y ] = λ · [      0             W      ] [ y ].        (3.1)

Let B^T = [Y Z][R^T 0^T]^T = YR be an orthogonal factorization of B^T, where R ∈ R^{m×m} is upper triangular, Y ∈ R^{n×m}, and the columns of Z ∈ R^{n×(n−m)} form a basis of the null space of B. Write r_i = Re_i for the i-th column of R, so that b_i = Y r_i. Premultiplying (3.1) by the nonsingular matrix

        [ Z^T   0 ]
    P = [ Y^T   0 ]
        [  0    I ]

and postmultiplying by its transpose gives, with x = Z x_z + Y x_y,

    [ Z^T AZ   Z^T AY   0 ] [ x_z ]       [ Z^T AZ   Z^T AY                              0         ] [ x_z ]
    [ Y^T AZ   Y^T AY   R ] [ x_y ] = λ · [ Y^T AZ   Y^T AY + (1/γ)(RR^T + r_i r_i^T)   −r_i e_i^T ] [ x_y ].
    [   0        R^T    0 ] [  y  ]       [   0         0                               γI         ] [  y  ]

By inspection, we check λ = 1, which reduces the above equation to

    [ 0            0                       0        ] [ x_z ]
    [ 0   −(1/γ)(RR^T + r_i r_i^T)   R + r_i e_i^T  ] [ x_y ] = 0.
    [ 0           R^T                    −γI        ] [  y  ]

Immediately, there exist n − m corresponding eigenvectors of the form (x_z, x_y, y) = (u, 0, 0) for n − m linearly independent vectors u. At the same time, we can find m more linearly independent eigenvectors corresponding to λ = 1, which can be written as (x_z, x_y, y) = (0, x_y, (1/γ)R^T x_y); indeed, with y = (1/γ)R^T x_y the second block row becomes

    −(1/γ)(RR^T + r_i r_i^T)x_y + (1/γ)(R + r_i e_i^T)R^T x_y = (1/γ)r_i(e_i^T R^T − r_i^T)x_y = 0,

since e_i^T R^T = r_i^T. That is, there exist n linearly independent eigenvectors corresponding to λ = 1.

It is not difficult to see that there exist m − 1 eigenvectors corresponding to λ = −1. Indeed, substituting λ = −1 requires finding a solution to

    [ 2Z^T AZ   2Z^T AY                               0         ] [ x_z ]
    [ 2Y^T AZ   2Y^T AY + (1/γ)(RR^T + r_i r_i^T)   R − r_i e_i^T ] [ x_y ] = 0.
    [    0          R^T                              γI         ] [  y  ]

Consider any x = Z x_z + Y x_y in the null space of A. Then AZ x_z + AY x_y = 0, and we are left with finding y such that

    (1/γ)(RR^T + r_i r_i^T)x_y + (R − r_i e_i^T)y = 0,        (3.2)
    R^T x_y + γy = 0        (3.3)

for the fixed x_y. By (3.3), we get y = −(1/γ)R^T x_y. Substituting it into (3.2) requires

    (2/γ) r_i r_i^T x_y = 0.

In general, we can therefore find exactly m − 1 eigenvectors with x_y orthogonal to r_i. That is, there are m − 1 eigenvectors of the form (x_z, x_y, y) = (x_z, x_y, −(1/γ)R^T x_y), where x_y is orthogonal to r_i, corresponding to λ = −1.

Remark 3.1. The following preconditioner was considered in [17]:

    M̂ = [ A + B^T W̄ B   b_i e_i^T ]
        [      0            W      ],

where W = γI (γ > 0) and W̄ = (1/γ)I − (1/γ)e_i e_i^T. In practice, the preconditioner M̂ can be risky. In fact, if A is a symmetric positive semidefinite matrix with high nullity, then A + (1/γ)B^T(I − e_i e_i^T)B may become singular, because I − e_i e_i^T is only symmetric positive semidefinite. In our numerical experiments, we find that the preconditioner M̂ for solving (1.2) leads to a deterioration of performance when i = 1; in this case, the preconditioner M̂ is singular.

4 Numerical experiments

In this section, two examples are given to demonstrate the performance of our preconditioning approach. All the computations are done with MATLAB 7.0 on a PC with an Intel(R) Core(TM) CPU T7200 (2.00 GHz) and 1024M of RAM. The initial guess is taken to be x^(0) = 0 and the stopping criterion is

    ‖b − 𝒜x^(k)‖₂ ≤ 10^{-6} ‖b‖₂.

Example 1. We consider the two-dimensional static Maxwell equations (1.1) in an L-shaped domain ([−1, 1] × [−1, 1] \ [−1, 0] × [0, 1]). For simplicity, we take a finite element subdivision as in Figure 1. Information on the sparsity of the relevant matrices is given in Table 1. The test problem is set up so that the right-hand side function is equal to 1 throughout the domain.

[Figure 1: 8 × 8 mesh dissection of the L-shaped domain.]

    Mesh       n        m       nz(A)   nz(B)   order of 𝒜
    32×32      2240     961     1948    696     3201
    64×64      9088     3969    4493    9198    13057
    128×128    36608    16129   18      1198    52737
    256×256    146944   65025   73676   48539   211969

    Table 1 Values of n and m, nonzeros in A and B, and order of 𝒜.

Here we mainly test four preconditioners: R₁, H₁, T₁ and T̃. Following Remark 2.2, and to ensure, based on the condition number of the matrix, that the norm of the augmenting term is not too small in comparison with A [35], we set W^{-1} = (‖A‖₁/‖B‖₁²)I. One can see [35] for details.

It is well known that the eigenvalue distribution of the preconditioned matrix gives important insight into the convergence behavior of preconditioned Krylov subspace methods. For simplicity, we investigate the eigenvalue distribution of the preconditioned matrices R₁^{-1}𝒜 and H₁^{-1}𝒜. Figure 2 plots the eigenvalues of R₁^{-1}𝒜 and H₁^{-1}𝒜 for the 16 × 16 grid, where the left plot corresponds to R₁^{-1}𝒜 and the right plot to H₁^{-1}𝒜.

[Figure 2: Eigenvalues of R₁^{-1}𝒜 (left) and H₁^{-1}𝒜 (right) with the 16 × 16 grid.]

It is easy to see from Figure 2 that the clustering of the eigenvalues of H₁^{-1}𝒜 is stronger than that of R₁^{-1}𝒜.

To investigate the performance of the above four preconditioners, the Krylov subspace methods BiCGStab and GMRES(l) are adopted in our numerical experiments. As is known, there is no general rule to choose the restart parameter l (l ≤ n + m); this is mostly a matter of experience. To illustrate the efficiency of our methods, we take l = 20. In Tables 2 and 3, we present results illustrating the convergence behavior of BiCGStab and GMRES(20) preconditioned by R₁, H₁, T₁ and T̃, respectively. Here the index i of T̃ is taken to be 1. Figures 3 and 4 correspond to Tables 2 and 3; they show the iteration numbers and relative residuals of preconditioned BiCGStab (top) and GMRES(20) (bottom) applied to the saddle point linear system (1.2). The purpose of these experiments is to investigate the influence of the eigenvalue distribution on the convergence behavior of the BiCGStab and GMRES(20) iterations. IT denotes the number of iterations, and CPU(s) denotes the time (in seconds) required to solve a problem.

    Mesh       R₁            H₁            T₁            T̃
               IT  CPU(s)    IT  CPU(s)    IT  CPU(s)    IT  CPU(s)
    32×32      5   0.1563    3   0.938     3   0.146     5   0.1563
    64×64      5   0.881     3   0.4844    3   0.881     4   0.6563
    128×128    5   4.7188    3   0.7813    3   6.156     4   3.65
    256×256    5   4.896     2   1.65      3   33.4688   4   19.5469

    Table 2 Iteration number and CPU(s) of the BiCGStab method.

From Tables 2 and 3, it is not difficult to see that the exact preconditioners R₁, H₁, T₁ and T̃ are comparable in terms of CPU time, and that their iteration numbers are insensitive to changes in the mesh size when BiCGStab and GMRES(20) are used to solve the saddle point linear system (1.2).
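The qualitative content of Figure 2 — the spectrum of H₁^{-1}𝒜 collapses to the single point 1, while R₁^{-1}𝒜 keeps well-separated clusters — can be reproduced on a small random saddle point system. This is a sketch with W = I on a toy problem, not the paper's 16 × 16 finite element matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 12, 4
C = rng.standard_normal((n - m, n))
A = C.T @ C                      # symmetric positive semidefinite, nullity m
B = rng.standard_normal((m, n))
K = np.block([[A, B.T], [B, np.zeros((m, m))]])

W = np.eye(m)
Aug = A + B.T @ B                # A + B^T W^{-1} B with W = I

R1 = np.block([[Aug, B.T], [np.zeros((m, n)), W]])        # Rees-Greif R_k with k = 1
H1 = np.block([[Aug, 2 * B.T], [np.zeros((m, n)), -W]])   # H_s with s = 1

spectra = {}
for name, P in (("R1", R1), ("H1", H1)):
    eigs = np.linalg.eigvals(np.linalg.solve(P, K))
    spectra[name] = eigs
    print(name, float(eigs.real.min().round(3)), float(eigs.real.max().round(3)))
```

The printed extremes show a spread of width greater than 1 for R₁, while every eigenvalue of the H₁-preconditioned matrix agrees with 1 to plotting accuracy.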
Although the exact preconditioners R₁, H₁, T₁ and T̃ are all quite competitive in terms of convergence rate, robustness and efficiency, the preconditioner H₁ outperforms the preconditioners R₁, T₁ and T̃ in iteration number and CPU time.

    Mesh       R₁            H₁            T₁            T̃
               IT  CPU(s)    IT  CPU(s)    IT  CPU(s)    IT  CPU(s)
    32×32      3   0.1563    2   0.15      2   0.1719    3   0.146
    64×64      3   0.815     2   0.65      2   1.194     3   0.7813
    128×128    3   4.6719    2   3.7656    2   7.731     3   4.4844
    256×256    3   19.6875   2   14.6563   2   33.65     3   19.375

    Table 3 Iteration number and CPU(s) of GMRES(20).

Compared with the preconditioners R₁, T₁ and T̃, the preconditioner H₁ may therefore be the best choice. Comparing the performance of BiCGStab to the performance of GMRES(20) is not within our stated goals, but having results for more than one Krylov solver allows us to confirm the consistency of the convergence behavior for most problems.

Example 2. A matrix from the UF Sparse Matrix Collection [39]. The test matrix is GHSindef/k1san from the UF Sparse Matrix Collection, an ill-conditioned matrix from an augmented system modelling the underground of the Stráž pod Ralskem mine by MFE. The characteristics of the test matrix are listed in Table 4.

    Matrix name       order of 𝒜   n       m       nnz(𝒜)
    GHSindef/k1san    67759        46954   20805   559774

    Table 4 Characteristics of the test matrix from the UF Sparse Matrix Collection.

The numerical results obtained using the BiCGStab and GMRES(20) methods preconditioned by the above four preconditioners to solve the corresponding saddle point linear systems are given in Table 5. Figure 5 is in accord with Table 5, where the top plot corresponds to BiCGStab and the bottom plot to GMRES(20).

                 R₁            H₁           T₁            T̃
                 IT  CPU(s)    IT  CPU(s)   IT  CPU(s)    IT  CPU(s)
    BiCGStab     53  96.3594   17  31.565   19  38.1563   21  5.646
    GMRES(20)    31  61.396    13  6.656    13  7.896     14  39

    Table 5 Iteration number and CPU(s) of BiCGStab and GMRES(20).

[Figure 3: Relative residual versus iteration number for BiCGStab (top) and GMRES(20) (bottom) preconditioned by R₁, H₁, T₁ and T̃, on the 32×32, 64×64 and 128×128 meshes.]

[Figure 4: Relative residual versus iteration number for BiCGStab (top) and GMRES(20) (bottom) preconditioned by R₁, H₁, T₁ and T̃, on the 256×256 mesh.]

[Figure 5: Relative residual versus iteration number for BiCGStab (top) and GMRES(20) (bottom) for GHSindef/k1san.]

From Table 5, it is easy to see that the preconditioners R₁, H₁, T₁ and T̃ are really efficient when the BiCGStab and GMRES(20) methods are used to solve the saddle point systems with coefficient matrix GHSindef/k1san. It is not difficult to see that the preconditioner H₁ is superior to the preconditioners R₁, T₁ and T̃ in iteration number and CPU time under the stated conditions. That is, the preconditioner H₁ is quite competitive in terms of convergence rate, robustness and efficiency.

5 Conclusion

In this paper, we have proposed three types of block triangular preconditioners for iteratively solving linear systems arising from the finite element discretization of the Maxwell equations. The preconditioners have the attractive property of improving the eigenvalue clustering of the preconditioned matrix. Furthermore, numerical experiments confirm the effectiveness of our preconditioners. In fact, the methodology of Section 2 can be extended to the unsymmetric case, that is, to saddle point systems whose (1,2) and (2,1) blocks are not transposes of each other.

Acknowledgments. The authors would like to express their great gratitude to the referees and J.M. Martínez for comments and constructive suggestions; they were very valuable for improving the quality of our manuscript.

REFERENCES

[1] Z. Chen, Q. Du and J. Zou, Finite element methods with matching and nonmatching meshes for Maxwell equations with discontinuous coefficients. SIAM J. Numer. Anal., 37 (1999), 1542-1570.

[2] J.P. Ciarlet and J. Zou, Fully discrete finite element approaches for time-dependent Maxwell's equations. Numer. Math., 82 (1999), 193-219.

[3] J. Gopalakrishnan, J.E. Pasciak and L.F. Demkowicz, Analysis of a multigrid algorithm for time harmonic Maxwell equations. SIAM J. Numer. Anal., 42 (2004), 90-108.

[4] P. Monk, Finite Element Methods for Maxwell's Equations. Oxford University Press, New York (2003).

[5] J. Gopalakrishnan and J. Pasciak, Overlapping Schwarz preconditioners for indefinite time harmonic Maxwell's equations. Math. Comp., 72 (2003), 1-16.

[6] P. Monk, Analysis of a finite element method for Maxwell's equations. SIAM J. Numer. Anal., 29 (1992), 3–56.

[7] C. Greif and D. Schötzau, Preconditioners for the discretized time-harmonic Maxwell equations in mixed form. Numer. Linear Algebra Appl., 14 (2007), 281–297.

[8] C. Greif and D. Schötzau, Preconditioners for saddle point linear systems with highly singular (1,1) blocks. ETNA, 22 (2006), 114–121.

[9] A. Toselli, Overlapping Schwarz methods for Maxwell's equations in three dimensions. Numer. Math., 86 (2000), 733–752.

[10] Q. Hu and J. Zou, Substructuring preconditioners for saddle-point problems arising from Maxwell's equations in three dimensions. Math. Comput., 73 (2004), 35–61.

[11] J.C. Nédélec, Mixed finite elements in R3. Numer. Math., 35 (1980), 315–341.

[12] M. Benzi, G.H. Golub and J. Liesen, Numerical solution of saddle point problems. Acta Numerica, 14 (2005), 1–137.

[13] Y. Saad, Iterative Methods for Sparse Linear Systems. Second edition, SIAM, Philadelphia, PA (2003).

[14] A. Greenbaum, Iterative Methods for Solving Linear Systems. Frontiers in Appl. Math., 17, SIAM, Philadelphia (1997).

[15] M. Benzi and J. Liu, Block preconditioning for saddle point systems with indefinite (1,1) block. Inter. J. Comput. Math., 84 (2007), 1117–1129.

[16] M. Benzi and M.A. Olshanskii, An augmented Lagrangian-based approach to the Oseen problem. SIAM J. Sci. Comput., 28 (2006), 2095–2113.

[17] T. Rees and C. Greif, A preconditioner for linear systems arising from interior point optimization methods. SIAM J. Sci. Comput., 29 (2007), 1992–2007.

[18] G.H. Golub, C. Greif and J.M. Varah, An algebraic analysis of a block diagonal preconditioner for saddle point systems. SIAM J. Matrix Anal. Appl., 27 (2006), 779–792.

[19] J.W. Demmel, Applied Numerical Linear Algebra. SIAM, Philadelphia (1997).

[20] S. Cafieri, M. D'Apuzzo, V. De Simone and D. Di Serafino, On the iterative solution of KKT systems in potential reduction software for large-scale quadratic problems. Comput. Optim. Appl., 38 (2007), 27–45.

[21] H.C. Elman, Preconditioning for the steady-state Navier-Stokes equations with low viscosity. SIAM J. Sci. Comput., 20 (1999), 1299–1316.

[22] T. Rusten and R. Winther, A preconditioned iterative method for saddle point problems. SIAM J. Matrix Anal. Appl., 13 (1992), 887–904.

[23] D. Silvester and A.J. Wathen, Fast iterative solution of stabilized Stokes systems, Part II: Using general block preconditioners. SIAM J. Numer. Anal., 31 (1994), 1352–1367.

[24] A.J. Wathen, B. Fischer and D. Silvester, The convergence rate of the minimal residual method for the Stokes problem. Numer. Math., 71 (1995), 121–134.

[25] A. Klawonn, An optimal preconditioner for a class of saddle point problems with a penalty term. SIAM J. Sci. Comput., 19 (1998), 540–552.

[26] V. Simoncini, Block triangular preconditioners for symmetric saddle-point problems. Appl. Numer. Math., 49 (2004), 63–80.

[27] A. Klawonn, Block-triangular preconditioners for saddle point problems with a penalty term. SIAM J. Sci. Comput., 19 (1998), 172–184.

[28] P. Krzyzanowski, On block preconditioners for nonsymmetric saddle point problems. SIAM J. Sci. Comput., 23 (2001), 157–169.

[29] H.S. Dollar and A.J. Wathen, Approximate factorization constraint preconditioners for saddle-point matrices. SIAM J. Sci. Comput., 27 (2006), 1555–1572.

[30] H.S. Dollar, Constraint-style preconditioners for regularized saddle point problems. SIAM J. Matrix Anal. Appl., 29 (2007), 672–684.

[31] E. De Sturler and J. Liesen, Block-diagonal and constraint preconditioners for nonsymmetric indefinite linear systems, Part I: Theory. SIAM J. Sci. Comput., 26 (2005), 1598–1619.

[32] A. Forsgren, P.E. Gill and J.D. Griffin, Iterative solution of augmented systems arising in interior methods. SIAM J. Optim., 18 (2007), 666–690.

[33] C. Keller, N.I.M. Gould and A.J. Wathen, Constraint preconditioning for indefinite linear systems. SIAM J. Matrix Anal. Appl., 21 (2000), 1300–1317.

[34] V. Simoncini and M. Benzi, Spectral properties of the Hermitian and skew-Hermitian splitting preconditioner for saddle point problems. SIAM J. Matrix Anal. Appl., 26 (2004), 377–389.

[35] G.H. Golub and C. Greif, On solving block-structured indefinite linear systems. SIAM J. Sci. Comput., 24 (2003), 2076–2092.

[36] Z.H. Cao, Augmentation block preconditioners for saddle point-type matrices with singular (1,1) blocks. Numer. Linear Algebra Appl., 15 (2008), 515–533.

[37] S.L. Wu, T.Z. Huang and C.X. Li, Generalized block triangular preconditioner for symmetric saddle point problems. Computing, 84 (2009), 183–208.

[38] I.C.F. Ipsen, A note on preconditioning nonsymmetric matrices. SIAM J. Matrix Anal. Appl., 23 (2001), 1050–1051.

[39] UF Sparse Matrix Collection: Garon Group. http://www.cise.ufl.edu/research/sparse/matrices.