Block preconditioners for saddle point systems arising from liquid crystal directors modeling


Noname manuscript No. (will be inserted by the editor)

Block preconditioners for saddle point systems arising from liquid crystal directors modeling

Fatemeh Panjeh Ali Beik · Michele Benzi

Received: date / Accepted: date

Abstract We analyze two types of block preconditioners for a class of saddle point problems arising from the modeling of liquid crystal directors using finite elements. Spectral properties of the preconditioned matrices are investigated, and numerical experiments are performed to assess the behavior of preconditioned iterations using both exact and inexact versions of the preconditioners.

Keywords liquid crystals · saddle point problems · Krylov methods · preconditioning

Mathematics Subject Classification (2010) 65F10 · 65F08 · 65F50

1 Introduction

In this paper we study the performance of two types of block preconditioners which can be used in conjunction with Krylov subspace methods for solving large and sparse saddle point systems of the form

    𝒜 u = [ A  B^T  C^T ; B  0  0 ; C  0  −D ] [ x ; y ; z ] = [ b_1 ; b_2 ; b_3 ] = b,    (1)

where A ∈ R^{n×n} and D ∈ R^{p×p} are both symmetric positive definite (SPD), B ∈ R^{m×n}, and C ∈ R^{p×n}. Sequences of linear systems of the type (1) appear in liquid crystal modeling; see section 2 and [6]. This paper is a continuation of the recent work [3], where we considered the solution of linear systems of the form (1) under the hypothesis that D is positive semidefinite, including the case D = 0. The remainder of the paper is organized as follows. In section 2 we provide some information on the numerical problem and its origin. Section 3 briefly discusses related work

F. P. A. Beik
Department of Mathematics, Vali-e-Asr University of Rafsanjan, P.O. Box 518, Rafsanjan, Iran
E-mail: f.beik@vru.ac.ir

M. Benzi
Department of Mathematics and Computer Science, Emory University, Atlanta, Georgia 30322, USA. Current address: Scuola Normale Superiore, Piazza dei Cavalieri 7, 56126 Pisa, Italy
E-mail: michele.benzi@sns.it

and contrasts the results of this paper with the ones in [3]. In section 4 we describe the two preconditioners and analyze their theoretical properties. Numerical experiments showing the effectiveness of the preconditioners are discussed in section 5, and brief concluding remarks are given in section 6.

Notation. Throughout the paper, for a given square matrix A with real eigenvalues, the notations λ_min(A) and λ_max(A) denote the minimum and maximum eigenvalues of A, respectively. The set of all eigenvalues (spectrum) of A is represented by σ(A). The symbol A ≻ 0 (A ⪰ 0) means that A is an SPD (symmetric positive semidefinite) matrix. Moreover, for two given matrices A and B, by A ≻ B (A ⪰ B) we mean A − B ≻ 0 (A − B ⪰ 0). The notations ker(B) and range(B) stand for the null space (or kernel) and the column space (or range) of the matrix B, respectively. Given a complex vector x, its conjugate transpose is denoted by x*. Moreover, we write (x; y; z) to denote the vector (x^T, y^T, z^T)^T.

2 The problem

As discussed in [6] (see also [1, 2]), mathematical models for understanding the orientational properties of liquid crystals lead to the minimization of the free energy, which is a functional of the form

    F[u,v,w,U] = ∫_0^1 [ (1/2)(u_z^2 + v_z^2 + w_z^2) − (α/2)(β + w^2) U_z^2 ] dz,    (2)

where u, v, w and U are functions of the space variable z ∈ [0,1] that are required to satisfy suitable end-point conditions, u_z = du/dz (etc.), and α, β are prescribed (positive) parameters. The problem is discretized by means of a uniform piecewise-linear finite element method. With a mesh size h = 1/(k+1) (with k+1 the number of cells), using nodal quadrature and the prescribed boundary conditions leads to replacing the functional F with a function f of 4k variables:

    F[u,v,w,U] ≈ f(u_1,...,u_k, v_1,...,v_k, w_1,...,w_k, U_1,...,U_k),

where

    f = h Σ_{j=0}^{k} [ (1/2)( ((u_{j+1} − u_j)/h)^2 + ((v_{j+1} − v_j)/h)^2 + ((w_{j+1} − w_j)/h)^2 ) − (α/2)( β + (w_j^2 + w_{j+1}^2)/2 ) ((U_{j+1} − U_j)/h)^2 ],

with z_j = jh, j = 0,...,k+1, and u_j ≈ u(z_j), etc.
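As an illustration of this discretization, the discrete energy f can be evaluated directly from nodal values. The following sketch (in Python rather than the MATLAB environment used later in the paper, and with hypothetical parameter values) checks that, for constant U, the electric term drops out and f reduces to the discrete Dirichlet energy of (u, v, w):

```python
import numpy as np

# Hypothetical parameter values for illustration only.
alpha, beta = 1.5 * 2.72, 0.5
k = 7
h = 1.0 / (k + 1)

def f(u, v, w, U):
    """Discretized free energy from nodal values of length k+2
    (entries 0 and k+1 hold the Dirichlet end-point values)."""
    du, dv, dw, dU = (np.diff(x) / h for x in (u, v, w, U))
    wsq_mid = 0.5 * (w[:-1] ** 2 + w[1:] ** 2)   # nodal (trapezoidal) quadrature
    return h * np.sum(0.5 * (du**2 + dv**2 + dw**2)
                      - 0.5 * alpha * (beta + wsq_mid) * dU**2)

z = np.linspace(0.0, 1.0, k + 2)
# For constant U the second term vanishes; with u = sin(pi z) the value
# approximates the continuous Dirichlet energy pi^2/4.
val = f(np.sin(np.pi * z), np.zeros(k + 2), np.zeros(k + 2), np.ones(k + 2))
```

Even on this coarse mesh the value is close to π²/4, consistent with the piecewise-linear discretization being first order in the energy.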
The Dirichlet end-point conditions prescribed for u, v, w and U at z = 0 and z = 1 are used to eliminate u_0, v_0, w_0, U_0 and u_{k+1}, v_{k+1}, w_{k+1}, U_{k+1}. The free energy functional (2) must be minimized subject to the unit vector constraint. For the discrete (finite-dimensional) problem, this constraint is enforced by imposing that the solution components u_j, v_j and w_j are such that

    u_j^2 + v_j^2 + w_j^2 = 1,  j = 1,...,k.
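A small sketch of the discrete constraint (illustrative Python, with hypothetical nodal values rather than an actual solution of the model): the Jacobian of the constraints with respect to the interleaved unknowns (u_j, v_j, w_j) is a k × 3k matrix whose j-th row is (u_j, v_j, w_j) supported on three consecutive entries, so its product with its own transpose is the identity whenever the constraints hold exactly.

```python
import numpy as np

rng = np.random.default_rng(8)
k = 4

# Random director nodal values, normalized so that u_j^2 + v_j^2 + w_j^2 = 1.
uvw = rng.standard_normal((k, 3))
uvw /= np.linalg.norm(uvw, axis=1, keepdims=True)

# Constraint residuals c_j = u_j^2 + v_j^2 + w_j^2 - 1 (all zero here), and the
# constraint Jacobian B: row j is (0,...,0, u_j, v_j, w_j, 0,...,0).
c = (uvw ** 2).sum(axis=1) - 1.0
B = np.zeros((k, 3 * k))
for j in range(k):
    B[j, 3 * j:3 * j + 3] = uvw[j]

BBt = B @ B.T   # equals I_k exactly at constrained points
```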

Introducing Lagrange multipliers λ_1,...,λ_k, the problem reduces to finding the critical points of the Lagrangian function

    L = f + (1/2) Σ_{j=1}^{k} λ_j (u_j^2 + v_j^2 + w_j^2 − 1).

Setting the partial derivatives of L to zero results in the nonlinear system of 5k equations in as many unknowns ∇L(x) = 0, where the unknown vector x ∈ R^{5k} contains the values (u_j, v_j, w_j) with 1 ≤ j ≤ k, λ_j with 1 ≤ j ≤ k, and U_j with 1 ≤ j ≤ k (in this order). This nonlinear system is solved by Newton's method, leading to a linear system of the form

    ∇²L(x^{(l)}) δx^{(l)} = −∇L(x^{(l)})    (3)

at each step l. Here we denote by ∇²L(x^{(l)}) the Hessian of L evaluated at the current approximate solution x^{(l)}. In this paper we investigate efficient preconditioned Krylov subspace methods for the solution of these linear systems, continuing the work in [3]. For issues related to the nonlinear (Newton) iteration, the reader is referred to [6]. As shown in [6], the Hessian has the following block 3 × 3 structure:

    ∇²L = [ A  B^T  C^T ; B  0  0 ; C  0  −D ],

where A is n × n, B is m × n, C is p × n and D is p × p, with n = 3k and m = p = k. Hence, it is necessary to solve a system of the form (1) at each Newton step. All the nonzero blocks of the coefficient matrix 𝒜 in (1) are sparse and structured. A detailed description of the submatrices A, B, C and D is given in [6]. Here we mention that A is SPD, B^T has full column rank and is such that BB^T is diagonal (and indeed BB^T = I_m if the unit vector constraints are satisfied exactly), C is rank deficient, and D is tridiagonal and SPD. Following [6], for most of our experiments we used the following values of the parameters α and β appearing in (2): α = 1.5 α_c and β = 0.5, where α_c ≈ 2.72 is known as the critical switching value. In order to assess the robustness of our preconditioners with respect to α, the main control parameter, we also perform some tests with different values of α; see section 5. The following proposition provides a necessary and sufficient condition for the invertibility of 𝒜 in (1).
Proposition 1 Assume that A and D are both SPD matrices. Then the matrix 𝒜 is invertible if and only if B^T has full column rank.

Proof See [3, Proposition 2.1].

Hence, the linear systems (1) arising from the application of Newton's method to the nonlinear system ∇L(x) = 0 are uniquely solvable. We are assuming here that the Hessian is being evaluated away from bifurcation points and turning points; see again [6].
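Proposition 1 can be checked numerically on a small instance with generic random blocks (a purely illustrative sketch, not the actual liquid crystal matrices): the block matrix of (1) is nonsingular when B^T has full column rank and becomes singular when B is rank deficient.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p = 6, 2, 2
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # SPD
D = rng.standard_normal((p, p)); D = D @ D.T + p * np.eye(p)   # SPD
C = rng.standard_normal((p, n))

def saddle(B):
    """Assemble the 3x3 block coefficient matrix of system (1)."""
    return np.block([[A, B.T, C.T],
                     [B, np.zeros((m, m)), np.zeros((m, p))],
                     [C, np.zeros((p, m)), -D]])

B_full = rng.standard_normal((m, n))              # generically full row rank
B_def = np.vstack([B_full[0], 2.0 * B_full[0]])   # rank one: B^T rank deficient

smin_full = np.linalg.svd(saddle(B_full), compute_uv=False).min()
smin_def = np.linalg.svd(saddle(B_def), compute_uv=False).min()
nonsingular = smin_full > 1e-8   # invertible when B^T has full column rank
singular = smin_def < 1e-8       # singular when it does not
```

In the rank-deficient case a null vector is easy to exhibit: (0; y; 0) with B^T y = 0.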

3 Related work

A few iterative methods for the solution of saddle point problems arising from liquid crystal director modeling have been proposed in the literature. In [1], a coupled multigrid method with Vanka-type relaxation has been developed for solving the 3 × 3 block-structured linear systems arising at each Newton step. Numerical experiments show very good scalability with respect to problem size for a 2D model problem. For the two largest problem sizes tested, the proposed multigrid solver outperforms a state-of-the-art sparse direct solver. In [6], a preconditioned nullspace method has been proposed and tested on the same model problems considered in the present paper. The paper also includes tests with MINRES used with a block diagonal preconditioner. Although these preconditioners display mesh-independent convergence rates, the timings reported in [6] (see Table 6.6 there) are rather high, given the problem sizes considered (for example, 25 seconds for a problem with k = 65,536), even allowing for the difference in computers used for the tests. This may be due to the high set-up costs incurred by the AMG solver used for the inner solves needed to implement the preconditioners. In our recent work [3] we investigated a variety of block diagonal and block triangular preconditioners for the solution of (1), both with D = 0 and D ≠ 0. For the case D ≠ 0, the best results were obtained using inexact variants of the following two block triangular preconditioners:

    P = [ A  B^T  C^T ; 0  −BA^{−1}B^T  0 ; 0  0  −(D + CA^{−1}C^T) ]    (4)

and

    P̂ = [ A  B^T  C^T ; 0  −BA^{−1}B^T  −BA^{−1}C^T ; 0  0  −(D + CA^{−1}C^T) ].    (5)

The theory developed in [3] shows that the spectrum of the exactly preconditioned matrix P̂^{−1}𝒜 lies in the interval (1/2, 1], independent of problem parameters. Numerical experiments demonstrate mesh-independent convergence rates for both preconditioners, and excellent scalability in terms of CPU times when inexact variants are used in combination with FGMRES.
In the following section we propose and analyze new block preconditioners for the solution of linear systems of the form (1) with D ≠ 0. These preconditioners are quite different from those proposed in [3], and overall appear to be superior to the best methods in [3] in terms of CPU times, especially for large problem sizes.

4 Description and analysis of the preconditioners

In this section we consider the following two block preconditioners:

    P_D = [ A  B^T  0 ; γB  0  0 ; 0  0  −D ]  and  P_T = [ A  B^T  C^T ; γB  0  0 ; 0  0  −D ],    (6)

where γ is a given nonzero real parameter. The first preconditioner is block diagonal (with respect to the partitioning that groups the first two block rows and columns together), the second is block triangular. Moreover, P_D is symmetric indefinite for γ = 1. Note that both matrices are nonsingular under the assumptions of Proposition 1.
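For a small random instance (illustrative generic blocks, not the application matrices), the two preconditioners of (6), denoted P_D and P_T here, can be formed explicitly. The sketch below verifies that the block diagonal preconditioner is symmetric (and indefinite) for γ = 1, loses symmetry for other γ, and that the block triangular preconditioner is nonsingular:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, p = 6, 2, 2
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # SPD
D = rng.standard_normal((p, p)); D = D @ D.T + p * np.eye(p)   # SPD
B = rng.standard_normal((m, n))                                # full row rank
C = rng.standard_normal((p, n))

def precs(gamma):
    """Return the block diagonal and block triangular preconditioners of (6)."""
    Znp, Zmm, Zmp, Zpm = (np.zeros(s) for s in [(n, p), (m, m), (m, p), (p, m)])
    P_D = np.block([[A, B.T, Znp], [gamma * B, Zmm, Zmp], [Znp.T, Zpm, -D]])
    P_T = np.block([[A, B.T, C.T], [gamma * B, Zmm, Zmp], [Znp.T, Zpm, -D]])
    return P_D, P_T

P_D1, P_T1 = precs(1.0)
P_D2, _ = precs(2.0)
sym_for_gamma_1 = np.allclose(P_D1, P_D1.T)   # symmetric for gamma = 1
sym_for_gamma_2 = np.allclose(P_D2, P_D2.T)   # no longer symmetric
```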

In order to apply block preconditioners of the form (6) within a Krylov-type method, it is necessary to solve (exactly or inexactly, see below) the following saddle point problem at each step:

    [ A  B^T ; γB  0 ] [ w_1 ; w_2 ] = [ r_1 ; r_2 ].    (7)

This system is uniquely solvable for all γ ≠ 0 under our assumptions on A and B (see, e.g., [4]). To this end, the following two steps are performed:

    Step I. Solve S w_2 = BA^{−1} r_1 − (1/γ) r_2 to find w_2;
    Step II. Solve A w_1 = r_1 − B^T w_2 to find w_1,

where we have set S = BA^{−1}B^T. We defer to section 5 a brief discussion of how the special properties of the matrices A and B arising from the liquid crystal director model may be exploited to perform steps I-II efficiently. The remainder of this section is divided into two main parts. In the first part we present some results on the eigenvalue distribution of the preconditioned matrices corresponding to P_D and P_T. The second part is devoted to discussing sufficient conditions which guarantee mesh-independent convergence rates.

4.1 Spectral analysis of the preconditioned matrices

In the following we determine bounds for the eigenvalues of the preconditioned matrices corresponding to the block preconditioners P_D and P_T. To this end we establish two theorems. The results are then illustrated numerically on a linear system of moderate size.

Theorem 1 Suppose that A ≻ 0, D ≻ 0, and B^T has full column rank. Then

    σ(P_D^{−1}𝒜) ⊆ {1, 1/γ} ∪ { 1 ± iµ : 0 < µ ≤ √( λ_max(C^T D^{−1} C) / λ_min(A) ) }.    (8)

Proof Let λ be an arbitrary eigenvalue of P_D^{−1}𝒜 with corresponding eigenvector (x; y; z), i.e.,

    Ax + B^T y + C^T z = λ(Ax + B^T y),    (9)
    Bx = γλ Bx,    (10)
    Cx − Dz = −λ Dz.    (11)

We claim that λ = 1 is an eigenvalue of P_D^{−1}𝒜. Indeed, consider the following two cases:

If γ = 1, then λ = 1 is an eigenvalue of P_D^{−1}𝒜 with corresponding eigenvectors of the form (0; y; 0), where y is an arbitrary nonzero vector. Moreover, in the case that ker(C) ≠ {0}, then (x; y; 0) is an eigenvector for λ = 1, where x ∈ ker(C) and y is an arbitrary vector.
If γ ≠ 1, then the eigenvectors associated with λ = 1 are of the form (0; y; 0), where y is an arbitrary nonzero vector. Moreover, in the case that ker(B) ∩ ker(C) ≠ {0}, then (x; y; 0) is an eigenvector corresponding to λ = 1, where x ∈ ker(B) ∩ ker(C) and y is an arbitrary vector.

Now let us focus on the case λ ≠ 1, 1/γ. The assumption λ ≠ 1/γ together with (10) implies that Bx = 0. Since λ ≠ 1, the vector x must be nonzero; otherwise x = 0 implies that y and z are both zero, which is impossible due to the fact that (x; y; z) is an eigenvector. From (9), we derive

    (λ − 1) x* A x = x* C^T z,    (12)

and from (11) it can be deduced that z = (1 − λ)^{−1} D^{−1} C x. Substituting the last expression into (12), we obtain

    (λ − 1)^2 x* A x = −x* C^T D^{−1} C x,    (13)

showing that λ − 1 must be purely imaginary, i.e., that λ = 1 ± iµ for some real number µ > 0. The bounds on µ follow from the well-known Rayleigh-Ritz theorem; see [5, page 176].

Remark 1 In Theorem 1, assume that γ ≠ 1. From (8) and the proof of the theorem one can see that λ = 1/γ may or may not be an eigenvalue of P_D^{−1}𝒜. It can be seen that λ = 1/γ is an eigenvalue if

    ker(C) ∩ range(A^{−1}B^T) ≠ {0}.    (14)

The above condition is a sufficient condition. More precisely, if 0 ≠ x ∈ ker(C) ∩ range(A^{−1}B^T), then x = A^{−1}B^T y for some nonzero vector y. Therefore, we may conclude that λ = 1/γ ∈ σ(P_D^{−1}𝒜) with the corresponding eigenvector (x; −y; 0). Conversely, assume that 1/γ ∈ σ(P_D^{−1}𝒜). Then the corresponding eigenvectors (x; y; z) must satisfy the following two conditions:

    (1 − 1/γ)(Ax + B^T y) = −C^T z,    z = (γ/(γ − 1)) D^{−1} C x.

The above conditions are necessary and sufficient for λ = 1/γ to be an eigenvalue of P_D^{−1}𝒜 with corresponding eigenvectors of the form (x; y; z).

Next, we consider the eigenvalue distribution for the block triangular preconditioner. To this end, we establish the following theorem.

Theorem 2 Suppose that A ≻ 0, D ≻ 0, and B^T has full column rank. Then all of the eigenvalues of P_T^{−1}𝒜 are real and

    σ(P_T^{−1}𝒜) ⊆ {1, 1/γ} ∪ ( 1, 1 + λ_max(C A^{−1} C^T) / λ_min(D) ].    (15)

Proof Suppose that λ is an arbitrary eigenvalue of P_T^{−1}𝒜 with corresponding eigenvector (x; y; z), i.e.,

    Ax + B^T y + C^T z = λ(Ax + B^T y + C^T z),    (16)
    Bx = γλ Bx,    (17)
    Cx − Dz = −λ Dz.    (18)

It turns out that λ = 1 is an eigenvalue of P_T^{−1}𝒜. To characterize the corresponding eigenvectors we consider the following two cases:

When γ = 1, λ = 1 is an eigenvalue with corresponding eigenvectors (x; y; z) for any x ∈ ker(C), with the vectors y and z being arbitrary (with at least one of y or z nonzero in case x = 0). For the case γ ≠ 1, the eigenvectors associated with λ = 1 are of the form (x; y; z), where x ∈ ker(B) ∩ ker(C) and y and z are arbitrary vectors. Again, if x = 0 (for example if ker(B) ∩ ker(C) = {0}), then either y or z must be nonzero.

In the rest of the proof we assume that λ ≠ 1, 1/γ. Therefore, from (17), we deduce that Bx = 0. Note that z cannot be zero: otherwise, from (16), the positive definiteness of A and the fact that Bx = 0 would give x = 0. But when x and z are both zero vectors, then y = 0 in view of the fact that B^T has full column rank, which is contrary to our assumption that (x; y; z) is an eigenvector. Consequently, we may assume that z is a nonzero vector such that ‖z‖_2 = 1. Using (16), we have y = −S^{−1} B A^{−1} C^T z, where S = BA^{−1}B^T (note that S is SPD, therefore invertible). Multiplying both sides of (16) by CA^{−1} on the left and using the previous relation, we obtain

    Cx = C A^{−1} B^T S^{−1} B A^{−1} C^T z − C A^{−1} C^T z.

Substituting the above relation into (18), we derive

    (1 − λ) Dz = C A^{−1} B^T S^{−1} B A^{−1} C^T z − C A^{−1} C^T z.

As a result,

    λ = 1 + ( z* C A^{−1} C^T z − z* C A^{−1} B^T S^{−1} B A^{−1} C^T z ) / ( z* D z )
      = 1 + ( z* C A^{−1/2} ( I − A^{−1/2} B^T S^{−1} B A^{−1/2} ) A^{−1/2} C^T z ) / ( z* D z ).    (19)

This expression shows that the spectrum is real. Notice that z ∉ ker(C^T), since z ∈ ker(C^T) would imply λ = 1, contrary to our earlier assumption that λ ≠ 1. Therefore, from (19) and using the fact that P = I − A^{−1/2} B^T S^{−1} B A^{−1/2} is an orthogonal projector, and thus 0 ⪯ P ⪯ I, we conclude that

    1 < λ ≤ 1 + ( z* C A^{−1} C^T z ) / ( z* D z ).    (20)

Now the result follows from the Rayleigh-Ritz theorem.

Remark 2 Let us consider the case γ ≠ 1. From (15) it is seen that λ = 1/γ may (or may not) be an eigenvalue of P_T^{−1}𝒜. Similarly to Remark 1, we can see that (14) is a sufficient condition for λ = 1/γ to be an eigenvalue, with the same eigenvector structure as given in the previous remark.
Moreover, it is easy to see that (1/γ, (x; y; z)) is an eigenpair of P_T^{−1}𝒜 if and only if the following two conditions hold:

    0 = Ax + B^T y + C^T z,    z = (γ/(γ − 1)) D^{−1} C x,

for γ ≠ 1 and (x; y; z) ≠ (0; 0; 0).
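Both theorems are easy to verify numerically on a small generic instance (random SPD A and D, random B and C; an illustrative sketch, not the application matrices). Here P_D and P_T denote the block diagonal and block triangular preconditioners of (6); a loose tolerance is used because the eigenvalue 1 may be defective:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, p, gamma = 9, 3, 3, 2.0
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # SPD
D = rng.standard_normal((p, p)); D = D @ D.T + p * np.eye(p)   # SPD
B = rng.standard_normal((m, n))
C = rng.standard_normal((p, n))
Znp, Zmm, Zmp, Zpm = (np.zeros(s) for s in [(n, p), (m, m), (m, p), (p, m)])

Acal = np.block([[A, B.T, C.T], [B, Zmm, Zmp], [C, Zpm, -D]])
P_D  = np.block([[A, B.T, Znp], [gamma * B, Zmm, Zmp], [Znp.T, Zpm, -D]])
P_T  = np.block([[A, B.T, C.T], [gamma * B, Zmm, Zmp], [Znp.T, Zpm, -D]])

eig_D = np.linalg.eigvals(np.linalg.solve(P_D, Acal))
eig_T = np.linalg.eigvals(np.linalg.solve(P_T, Acal))
tol = 1e-3

# Theorem 1: eigenvalues equal 1/gamma, or 1 +/- i*mu with
# 0 <= mu <= sqrt(lambda_max(C^T D^{-1} C) / lambda_min(A)).
mu_max = np.sqrt(np.linalg.eigvalsh(C.T @ np.linalg.solve(D, C)).max()
                 / np.linalg.eigvalsh(A).min())
ok_D = all(abs(lam - 1 / gamma) < tol
           or (abs(lam.real - 1) < tol and abs(lam.imag) <= mu_max + tol)
           for lam in eig_D)

# Theorem 2: real spectrum in {1/gamma} U [1, 1 + lam_max(C A^{-1} C^T)/lam_min(D)].
upper = 1 + (np.linalg.eigvalsh(C @ np.linalg.solve(A, C.T)).max()
             / np.linalg.eigvalsh(D).min())
ok_T = all(abs(lam.imag) < tol and
           (abs(lam - 1 / gamma) < tol or 1 - tol <= lam.real <= upper + tol)
           for lam in eig_T)
```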

Fig. 1 Top: spectrum of P_D^{−1}𝒜 for γ = 1 (left), γ = −1 (center) and γ = 2 (right). Bottom: spectrum of P_T^{−1}𝒜 for γ = 1 (left), γ = −1 (center) and γ = 2 (right).

The spectra of each of the preconditioned matrices P_D^{−1}𝒜 and P_T^{−1}𝒜 for different values of γ are depicted in Figure 1 for a problem of moderate size. These plots show strong clustering of the eigenvalues, and spectral distributions consistent with those predicted by Theorems 1 and 2. We mention that the spectrum of 𝒜 lies in the union of two intervals, one on each side of the origin, so that 𝒜 is symmetric indefinite with a rather large spectral condition number.

We remark that, under a certain additional assumption, we can determine more explicit bounds on the eigenvalues of the preconditioned matrices. To this end, we first recall the following theorem. Here and in the sequel, for a given square matrix A, we denote by ρ(A) the spectral radius of A.

Theorem 3 [5, Theorem 7.7.3] Let A and B be two n × n real symmetric matrices such that A is positive definite and B is positive semidefinite. Then A ⪰ B if and only if ρ(A^{−1}B) ≤ 1, and A ≻ B if and only if ρ(A^{−1}B) < 1.

For the linear systems of the form (1) arising from liquid crystal modeling, using Theorem 3 we could verify numerically that the condition A ≻ C^T D^{−1} C (or, equivalently, D ≻ C A^{−1} C^T) holds true for problems of small or moderate size, and it is likely that the condition holds for larger problems as well. Consequently, in view of (13) and (20), we conjecture that (8) and (15) can respectively be replaced by

    σ(P_D^{−1}𝒜) ⊆ {1, 1/γ} ∪ { 1 ± iµ : 0 < µ ≤ √ρ(A^{−1} C^T D^{−1} C) < 1 }

and

    σ(P_T^{−1}𝒜) ⊆ {1/γ} ∪ [ 1, 1 + ρ(D^{−1} C A^{−1} C^T) ] ⊂ {1/γ} ∪ [1, 2).

Indeed, if A ≻ C^T D^{−1} C (or D ≻ C A^{−1} C^T), then ρ(A^{−1} C^T D^{−1} C) = ρ(D^{−1} C A^{−1} C^T) < 1.

4.2 Further discussion of the properties of the preconditioned matrices

In this section we investigate numerically some additional properties of the preconditioned matrices that can be used to shed light on the behavior of the preconditioned iterations.
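The equivalence in Theorem 3, and the equality ρ(A^{−1}C^T D^{−1}C) = ρ(D^{−1}CA^{−1}C^T) used above, can be illustrated on a random instance in which C is scaled down so that A ≻ C^T D^{−1} C holds by construction (a property of this sketch, not of generic data):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 8, 3
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # SPD
D = rng.standard_normal((p, p)); D = D @ D.T + p * np.eye(p)   # SPD
C = 0.5 * rng.standard_normal((p, n))   # scaled so that A - C^T D^{-1} C is SPD

M1 = np.linalg.solve(A, C.T @ np.linalg.solve(D, C))   # A^{-1} C^T D^{-1} C
M2 = np.linalg.solve(D, C @ np.linalg.solve(A, C.T))   # D^{-1} C A^{-1} C^T
rho1 = max(abs(np.linalg.eigvals(M1)))
rho2 = max(abs(np.linalg.eigvals(M2)))   # same nonzero spectrum as M1

# Theorem 3: A - C^T D^{-1} C positive definite iff rho(A^{-1} C^T D^{-1} C) < 1.
pd = np.linalg.eigvalsh(A - C.T @ np.linalg.solve(D, C)).min() > 0
```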

Table 1 Properties of the preconditioned matrices corresponding to P_T with γ = 1 (left preconditioning): extreme eigenvalues λ_min(H_P) and λ_max(H_P) of the symmetric part and ‖S_P‖_2, for increasing problem sizes.

Table 2 Properties of the preconditioned matrices corresponding to P_T with γ = 2 (right preconditioning): extreme eigenvalues λ_min(H_P) and λ_max(H_P) of the symmetric part and ‖S_P‖_2, for increasing problem sizes.

In the following, we denote the symmetric and skew-symmetric parts of a preconditioned matrix (with either left or right preconditioning) by H_P and S_P, respectively. It is known that a Krylov subspace method like preconditioned GMRES will exhibit mesh-independent convergence rates if the spectra of the matrices H_P are entirely contained in an interval [a, b] with 0 < a < b < ∞ and if ‖S_P‖_2 ≤ c < ∞, with a, b, and c independent of the mesh size; see, for example, [8, page 8] and references therein. We have computed the smallest and largest eigenvalues of the symmetric parts of the preconditioned matrices P_D^{−1}𝒜, 𝒜 P_D^{−1}, P_T^{−1}𝒜 and 𝒜 P_T^{−1} for different values of h = 1/(k+1) and different γ. We have also computed the 2-norm (spectral radius) of the skew-symmetric parts of these same matrices. In the cases corresponding to the block diagonal preconditioner P_D, the symmetric part of the preconditioned matrices was found to be indefinite. Hence, nothing can be concluded a priori on the behavior of preconditioned GMRES with this preconditioner. For the block triangular preconditioner P_T, on the other hand, we found that all the quantities of interest appear to be uniformly bounded in the case of left preconditioning with γ = 1; see the results displayed in Table 1. In the case of right preconditioning with γ = 2 we have observed a similar behavior, as shown in Table 2 (even though in this case there is a very slight dependence on h, which may perhaps disappear for larger problem sizes).
Finally, in Figure 2 we display the spectra of the symmetric parts of the preconditioned matrices for different values of γ using left and right preconditioning with P_T, for problem size 2555. Note that using γ = 1 and right preconditioning leads to an indefinite symmetric part. We emphasize that the above stated conditions for h-independent convergence of preconditioned GMRES are only sufficient, not necessary. As we will see in the next section, fast, mesh-independent convergence can also be observed in cases where these conditions do not hold.
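The quantities a, b and c entering this sufficient condition are straightforward to compute. The sketch below does so for a small random instance with the block triangular preconditioner of (6) (denoted P_T here); since the blocks are generic, the definiteness of H_P observed for the liquid crystal matrices need not hold, so only structural properties are asserted:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, p, gamma = 9, 3, 3, 1.0
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # SPD
D = rng.standard_normal((p, p)); D = D @ D.T + p * np.eye(p)   # SPD
B = rng.standard_normal((m, n))
C = rng.standard_normal((p, n))
Znp, Zmm, Zmp, Zpm = (np.zeros(s) for s in [(n, p), (m, m), (m, p), (p, m)])

Acal = np.block([[A, B.T, C.T], [B, Zmm, Zmp], [C, Zpm, -D]])
P_T  = np.block([[A, B.T, C.T], [gamma * B, Zmm, Zmp], [Znp.T, Zpm, -D]])

M = np.linalg.solve(P_T, Acal)         # left-preconditioned matrix P_T^{-1} Acal
H = 0.5 * (M + M.T)                    # symmetric part H_P
S = 0.5 * (M - M.T)                    # skew-symmetric part S_P
a, b = np.linalg.eigvalsh(H)[[0, -1]]  # extreme eigenvalues of H_P
c = np.linalg.norm(S, 2)               # ||S_P||_2 (= spectral radius of S_P)
```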

Fig. 2 Spectra of H_P corresponding to P_T for γ = 1 (left), γ = −1 (center) and γ = 2 (right). Top: left preconditioning. Bottom: right preconditioning (problem size 2555).

5 Numerical experiments

In this section we present the results of numerical experiments aimed at evaluating the convergence behavior of the preconditioned GMRES (PGMRES) and FGMRES methods [7] using preconditioners of the form (6). The computations were performed on a 64-bit 2.45 GHz Core i7 processor with 8 GB RAM using MATLAB. Right-hand sides corresponding to random solution vectors were used in all of the experiments, performing ten runs and then averaging the CPU times and rounding iteration numbers to the nearest integer. CPU times and iteration counts are reported under CPU and Iter in the tables below. In the case of FGMRES two values are reported under Iters, namely, the number of steps of the FGMRES method and, in parentheses, the total number of inner preconditioned conjugate gradient (PCG) iterations performed over all FGMRES steps. No restarting is used. In all the experiments, the initial guess was taken to be the zero vector and the (outer) iterations were stopped once

    ‖b − 𝒜 w^{(k)}‖_2 < 10^{−12} ‖b‖_2,

where ‖·‖_2 stands for the Euclidean norm and w^{(k)} = (x^{(k)}; y^{(k)}; z^{(k)}) denotes the current iterate. With this stopping criterion we observe normwise relative errors in the computed solutions of order 10^{−5} or smaller. Notice that both the exact and the inexact implementations of the preconditioners require the solution of SPD linear systems as subtasks. These linear systems involve the coefficient matrices A, S = BA^{−1}B^T, and D. As already mentioned, D is tridiagonal in our application of interest; therefore the solution of linear systems involving D is not an issue. Linear systems with A are also easy to solve, since it turns out that the Cholesky factorization of A (with the original ordering) does not incur any fill-in.
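The exact application of the block triangular preconditioner of (6) (denoted P_T here) combines these subtasks: a D-solve for the third block, followed by steps I-II with the factorization of A computed once. A sketch for a small dense instance, in which dense solves stand in for the sparse triangular and tridiagonal solves of the actual implementation:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, p, gamma = 9, 3, 3, 1.0
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # SPD
D = rng.standard_normal((p, p)); D = D @ D.T + p * np.eye(p)   # SPD
B = rng.standard_normal((m, n))
C = rng.standard_normal((p, n))

L = np.linalg.cholesky(A)                       # A = L L^T, computed once
solve_A = lambda r: np.linalg.solve(L.T, np.linalg.solve(L, r))
S = B @ solve_A(B.T)                            # Schur complement S = B A^{-1} B^T

def apply_P_T_inv(r):
    """Apply P_T^{-1} to r = (r1; r2; r3) via block back-substitution."""
    r1, r2, r3 = r[:n], r[n:n + m], r[n + m:]
    w3 = np.linalg.solve(-D, r3)                # third block row: -D w3 = r3
    t = r1 - C.T @ w3                           # eliminate w3 from the first row
    w2 = np.linalg.solve(S, B @ solve_A(t) - r2 / gamma)   # Step I
    w1 = solve_A(t - B.T @ w2)                  # Step II
    return np.concatenate([w1, w2, w3])

P_T = np.block([[A, B.T, C.T],
                [gamma * B, np.zeros((m, m)), np.zeros((m, p))],
                [np.zeros((p, n)), np.zeros((p, m)), -D]])
r = rng.standard_normal(n + m + p)
err = np.linalg.norm(apply_P_T_inv(r) - np.linalg.solve(P_T, r))
```

Dropping the C^T correction (i.e., taking t = r1) gives the application of the block diagonal preconditioner instead.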
Hence, we compute the Cholesky factorization A = LL^T at the outset, and then perform forward and back substitutions each time the action of A^{−1} on a vector is required. The only potentially problematic step in applying the preconditioners is the solution of linear systems with the Schur complement S = BA^{−1}B^T as the coefficient matrix. This matrix is full and cannot be formed explicitly. Hence, we use the PCG method, where matrix-vector products with S are performed by multiplications with the sparse matrices B and B^T and by solving two bidiagonal linear systems involving L and L^T. As concerns the choice of preconditioner used for solving linear systems involving the Schur complement S = BA^{−1}B^T, we exploit the fact that in the application considered in this

paper, the matrix B has (nearly) orthogonal rows. This suggests that the matrix BAB^T might be a good approximate inverse of BA^{−1}B^T. The results in Table 3 confirm that the eigenvalues of the matrix (BAB^T)(BA^{−1}B^T) are bounded and clustered nicely around one, independent of mesh size. Furthermore, it turns out that BAB^T is actually a tridiagonal matrix, hence the cost of forming, storing, and applying the preconditioner is almost negligible. In Figure 3 we present mesh plots showing the structure of the matrices BAB^T and (BA^{−1}B^T)^{−1} (top) and (BAB^T)^{−1} and BA^{−1}B^T (bottom). Here the problem size is 2555 and the Schur complement BA^{−1}B^T is 511 × 511. It is clear from these plots that BAB^T is indeed very close to (BA^{−1}B^T)^{−1}; while the latter matrix is full, its off-diagonal entries decay so fast as to make it close to tridiagonal. It is also clear that (BAB^T)^{−1} is quite close to BA^{−1}B^T.

Fig. 3 Mesh plots of different matrices for the case m = p = k = 511, α = 1.5 α_c (where α_c ≈ 2.72) and β = 0.5. Top: BAB^T (left), (BA^{−1}B^T)^{−1} (right). Bottom: (BAB^T)^{−1} (left), BA^{−1}B^T (right).

When the block preconditioners are used with GMRES, they must be applied "exactly"; i.e., all the linear systems arising as subtasks must be solved to high accuracy. For the systems involving S, where PCG is used, the inner iterations are terminated after the relative residual 2-norm has been reduced below 10^{−12}. In contrast, when they are used with FGMRES, the block preconditioners can be applied inexactly. In this case we use a stopping tolerance of 10^{−3} for the inner PCG iterations applied to systems with matrix S. Finally, we present iteration counts and CPU times for the various preconditioner and solver options. The results for GMRES preconditioned (on the right) with P_D are reported
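The effect can be reproduced in a toy setting that mimics only the near-orthogonality of B: take B with unit rows of disjoint support (so BB^T = I) and a well-conditioned block diagonal A. In this toy both BAB^T and S = BA^{−1}B^T come out diagonal rather than tridiagonal, but the eigenvalues of (BAB^T)(BA^{−1}B^T) still cluster around one:

```python
import numpy as np

rng = np.random.default_rng(7)
k = 5                                   # toy sizes: n = 3k, m = k
A = np.zeros((3 * k, 3 * k))
B = np.zeros((k, 3 * k))
for j in range(k):
    G = rng.standard_normal((3, 3))
    # Well-conditioned SPD 3x3 diagonal block of A.
    A[3 * j:3 * j + 3, 3 * j:3 * j + 3] = 2.0 * np.eye(3) + 0.05 * (G + G.T)
    b = rng.standard_normal(3)
    B[j, 3 * j:3 * j + 3] = b / np.linalg.norm(b)   # unit row -> B B^T = I

S = B @ np.linalg.solve(A, B.T)          # Schur complement B A^{-1} B^T
Papprox = B @ A @ B.T                    # candidate approximate inverse of S
eigs = np.linalg.eigvals(Papprox @ S).real
```

By the Kantorovich inequality each eigenvalue lies in [1, (κ+1)²/(4κ)] with κ the condition number of the corresponding block, hence the tight clustering for well-conditioned A.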

Table 3 Extreme eigenvalues λ_min and λ_max of the matrix (BAB^T)(BA^{−1}B^T) for problem sizes (m, n) = (31, 93), (63, 189), (127, 381), (255, 765), (511, 1533), (1023, 3069), (2047, 6141).

Table 4 Numerical results (Iter, CPU) for PGMRES with the preconditioner P_D for γ = 1, −1, 2 (right preconditioning).

in Table 4. As can be seen, convergence is fast and the number of iterations even decreases as the mesh size h = 1/(k+1) is reduced. The best results are obtained for γ = 1. Results for FGMRES with the preconditioner P_D applied inexactly are reported in Table 5. Note that for γ = 1 there is no significant deterioration of the rate of convergence when using the inexact preconditioner. The optimality of the PCG iterations used for the inner subtasks is shown by the fact that the total number of PCG steps decreases and appears to be settling around a constant number as h is decreased. As a result, the preconditioned FGMRES solver scales very nicely in terms of CPU time. Also, for large enough problems it is several times faster than the exactly preconditioned GMRES method. Very similar results are obtained with the block triangular preconditioner P_T. In Tables 6 and 7 we report the results for GMRES with the exact P_T preconditioner for right and left preconditioning, respectively. The latter results are included to show that no significant difference is noticeable compared to the right-preconditioned case, and also because of the nice clustering properties shown in Table 1 and Figure 2. The results for FGMRES and inexact application of P_T are shown in Table 8. We observe a reduction in CPU times of nearly an order of magnitude compared to the exact variant, with the best results obtained for γ = 1. These results compare favorably with those recently reported in [3].
We conclude this section with a discussion of the robustness of the proposed preconditioners with respect to the parameter α which, as explained in [6], is the main control parameter. We keep the value of β unchanged (β = 0.5). The values of α used in these tests are given by α(k) = 0.5 k α_c for k = 1,...,8, with α_c ≈ 2.72. These are physically meaningful values of α and correspond to values used for the tests in [6]. The results are reported for the largest problem size only (n = 327,675) in Figure 4; the trend is similar for smaller problem sizes. From these results we can see that the performance of the preconditioners has a weak dependence on α, with a slight increase in the number of iterations for larger values of α. The block triangular preconditioner with γ = 1 seems to be especially robust with respect to α.

Table 5 Numerical results (Iters, CPU) for FGMRES with inexact application of the preconditioner P_D, for γ = 1, −1, 2.

Table 6 Numerical results (Iter, CPU) for PGMRES with the preconditioner P_T for γ = 1, −1, 2 (right preconditioning).

Table 7 Numerical results (Iter, CPU) for PGMRES with the preconditioner P_T with γ = 1 (left preconditioning).

Table 8 Numerical results (Iters, CPU) for FGMRES with inexact application of the preconditioner P_T, for γ = 1, −1, 2.

Fig. 4 Number of iterations for FGMRES (top) and inner PCG (bottom) versus k, for the block triangular and block diagonal preconditioners; left to right: γ = 1, γ = −1 and γ = 2.

6 Conclusions

In this paper we have studied two block preconditioners for a class of linear systems arising from the modeling of liquid crystal directors. Bounds for the eigenvalues of the preconditioned matrices have been established, and numerical experiments with both exact and inexact variants of the preconditioners have been performed. The inexact variants, used together with FGMRES, result in rapid convergence independent of mesh size, leading to virtually optimal solvers for the application of interest.

Acknowledgements We would like to thank Alison Ramage for providing the test problems used in the numerical experiments. The first author is grateful for the hospitality of the Department of Mathematics and Computer Science at Emory University, where this work was completed.

References

1. J. H. Adler, T. J. Atherton, D. R. Benson, D. B. Emerson, and S. P. MacLachlan, Energy minimization for liquid crystal equilibrium with electric and flexoelectric effects, SIAM J. Sci. Comput., 37, S157–S176 (2015)
2. J. H. Adler, T. J. Atherton, D. B. Emerson, and S. P. MacLachlan, An energy-minimization finite-element approach to the Frank–Oseen model of nematic liquid crystals, SIAM J. Numer. Anal., 53 (2015)
3. F. P. A. Beik and M. Benzi, Iterative methods for double saddle point systems, SIAM J. Matrix Anal. Appl., 39, 902–921 (2018)
4. M. Benzi, G. H. Golub, and J. Liesen, Numerical solution of saddle point problems, Acta Numer., 14, 1–137 (2005)
5. R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK (1985)
6. A. Ramage and E. C. Gartland, Jr., A preconditioned nullspace method for liquid crystal director modeling, SIAM J. Sci. Comput., 35, B226–B247 (2013)
7. Y. Saad, Iterative Methods for Sparse Linear Systems,
Second Edition, Society for Industrial and Applied Mathematics, Philadelphia (2003)
8. V. Simoncini and D. B. Szyld, Recent computational developments in Krylov subspace methods for linear systems, Numer. Linear Algebra Appl., 14, 1–59 (2007)


More information

AMS Mathematics Subject Classification : 65F10,65F50. Key words and phrases: ILUS factorization, preconditioning, Schur complement, 1.

AMS Mathematics Subject Classification : 65F10,65F50. Key words and phrases: ILUS factorization, preconditioning, Schur complement, 1. J. Appl. Math. & Computing Vol. 15(2004), No. 1, pp. 299-312 BILUS: A BLOCK VERSION OF ILUS FACTORIZATION DAVOD KHOJASTEH SALKUYEH AND FAEZEH TOUTOUNIAN Abstract. ILUS factorization has many desirable

More information

ANALYSIS OF AUGMENTED LAGRANGIAN-BASED PRECONDITIONERS FOR THE STEADY INCOMPRESSIBLE NAVIER-STOKES EQUATIONS

ANALYSIS OF AUGMENTED LAGRANGIAN-BASED PRECONDITIONERS FOR THE STEADY INCOMPRESSIBLE NAVIER-STOKES EQUATIONS ANALYSIS OF AUGMENTED LAGRANGIAN-BASED PRECONDITIONERS FOR THE STEADY INCOMPRESSIBLE NAVIER-STOKES EQUATIONS MICHELE BENZI AND ZHEN WANG Abstract. We analyze a class of modified augmented Lagrangian-based

More information

Efficient Solvers for the Navier Stokes Equations in Rotation Form

Efficient Solvers for the Navier Stokes Equations in Rotation Form Efficient Solvers for the Navier Stokes Equations in Rotation Form Computer Research Institute Seminar Purdue University March 4, 2005 Michele Benzi Emory University Atlanta, GA Thanks to: NSF (MPS/Computational

More information

THE solution of the absolute value equation (AVE) of

THE solution of the absolute value equation (AVE) of The nonlinear HSS-like iterative method for absolute value equations Mu-Zheng Zhu Member, IAENG, and Ya-E Qi arxiv:1403.7013v4 [math.na] 2 Jan 2018 Abstract Salkuyeh proposed the Picard-HSS iteration method

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS

ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS Fatemeh Panjeh Ali Beik and Davod Khojasteh Salkuyeh, Department of Mathematics, Vali-e-Asr University of Rafsanjan, Rafsanjan,

More information

On the Preconditioning of the Block Tridiagonal Linear System of Equations

On the Preconditioning of the Block Tridiagonal Linear System of Equations On the Preconditioning of the Block Tridiagonal Linear System of Equations Davod Khojasteh Salkuyeh Department of Mathematics, University of Mohaghegh Ardabili, PO Box 179, Ardabil, Iran E-mail: khojaste@umaacir

More information

AN ITERATIVE METHOD TO SOLVE SYMMETRIC POSITIVE DEFINITE MATRIX EQUATIONS

AN ITERATIVE METHOD TO SOLVE SYMMETRIC POSITIVE DEFINITE MATRIX EQUATIONS AN ITERATIVE METHOD TO SOLVE SYMMETRIC POSITIVE DEFINITE MATRIX EQUATIONS DAVOD KHOJASTEH SALKUYEH and FATEMEH PANJEH ALI BEIK Communicated by the former editorial board Let A : R m n R m n be a symmetric

More information

The Nullspace free eigenvalue problem and the inexact Shift and invert Lanczos method. V. Simoncini. Dipartimento di Matematica, Università di Bologna

The Nullspace free eigenvalue problem and the inexact Shift and invert Lanczos method. V. Simoncini. Dipartimento di Matematica, Università di Bologna The Nullspace free eigenvalue problem and the inexact Shift and invert Lanczos method V. Simoncini Dipartimento di Matematica, Università di Bologna and CIRSA, Ravenna, Italy valeria@dm.unibo.it 1 The

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT 16-02 The Induced Dimension Reduction method applied to convection-diffusion-reaction problems R. Astudillo and M. B. van Gijzen ISSN 1389-6520 Reports of the Delft

More information

Regularized HSS iteration methods for saddle-point linear systems

Regularized HSS iteration methods for saddle-point linear systems BIT Numer Math DOI 10.1007/s10543-016-0636-7 Regularized HSS iteration methods for saddle-point linear systems Zhong-Zhi Bai 1 Michele Benzi 2 Received: 29 January 2016 / Accepted: 20 October 2016 Springer

More information

The antitriangular factorisation of saddle point matrices

The antitriangular factorisation of saddle point matrices The antitriangular factorisation of saddle point matrices J. Pestana and A. J. Wathen August 29, 2013 Abstract Mastronardi and Van Dooren [this journal, 34 (2013) pp. 173 196] recently introduced the block

More information

Incomplete LU Preconditioning and Error Compensation Strategies for Sparse Matrices

Incomplete LU Preconditioning and Error Compensation Strategies for Sparse Matrices Incomplete LU Preconditioning and Error Compensation Strategies for Sparse Matrices Eun-Joo Lee Department of Computer Science, East Stroudsburg University of Pennsylvania, 327 Science and Technology Center,

More information

Preface to the Second Edition. Preface to the First Edition

Preface to the Second Edition. Preface to the First Edition n page v Preface to the Second Edition Preface to the First Edition xiii xvii 1 Background in Linear Algebra 1 1.1 Matrices................................. 1 1.2 Square Matrices and Eigenvalues....................

More information

9.1 Preconditioned Krylov Subspace Methods

9.1 Preconditioned Krylov Subspace Methods Chapter 9 PRECONDITIONING 9.1 Preconditioned Krylov Subspace Methods 9.2 Preconditioned Conjugate Gradient 9.3 Preconditioned Generalized Minimal Residual 9.4 Relaxation Method Preconditioners 9.5 Incomplete

More information

Numerical behavior of inexact linear solvers

Numerical behavior of inexact linear solvers Numerical behavior of inexact linear solvers Miro Rozložník joint results with Zhong-zhi Bai and Pavel Jiránek Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic The fourth

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

The Conjugate Gradient Method

The Conjugate Gradient Method The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large

More information

Iterative methods for Linear System

Iterative methods for Linear System Iterative methods for Linear System JASS 2009 Student: Rishi Patil Advisor: Prof. Thomas Huckle Outline Basics: Matrices and their properties Eigenvalues, Condition Number Iterative Methods Direct and

More information

On the Superlinear Convergence of MINRES. Valeria Simoncini and Daniel B. Szyld. Report January 2012

On the Superlinear Convergence of MINRES. Valeria Simoncini and Daniel B. Szyld. Report January 2012 On the Superlinear Convergence of MINRES Valeria Simoncini and Daniel B. Szyld Report 12-01-11 January 2012 This report is available in the World Wide Web at http://www.math.temple.edu/~szyld 0 Chapter

More information

Linear algebra issues in Interior Point methods for bound-constrained least-squares problems

Linear algebra issues in Interior Point methods for bound-constrained least-squares problems Linear algebra issues in Interior Point methods for bound-constrained least-squares problems Stefania Bellavia Dipartimento di Energetica S. Stecco Università degli Studi di Firenze Joint work with Jacek

More information

Jae Heon Yun and Yu Du Han

Jae Heon Yun and Yu Du Han Bull. Korean Math. Soc. 39 (2002), No. 3, pp. 495 509 MODIFIED INCOMPLETE CHOLESKY FACTORIZATION PRECONDITIONERS FOR A SYMMETRIC POSITIVE DEFINITE MATRIX Jae Heon Yun and Yu Du Han Abstract. We propose

More information

Key words. conjugate gradients, normwise backward error, incremental norm estimation.

Key words. conjugate gradients, normwise backward error, incremental norm estimation. Proceedings of ALGORITMY 2016 pp. 323 332 ON ERROR ESTIMATION IN THE CONJUGATE GRADIENT METHOD: NORMWISE BACKWARD ERROR PETR TICHÝ Abstract. Using an idea of Duff and Vömel [BIT, 42 (2002), pp. 300 322

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 18 Outline

More information

c 2011 Society for Industrial and Applied Mathematics

c 2011 Society for Industrial and Applied Mathematics SIAM J. SCI. COMPUT. Vol. 33, No. 5, pp. 2761 2784 c 2011 Society for Industrial and Applied Mathematics ANALYSIS OF AUGMENTED LAGRANGIAN-BASED PRECONDITIONERS FOR THE STEADY INCOMPRESSIBLE NAVIER STOKES

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT -09 Computational and Sensitivity Aspects of Eigenvalue-Based Methods for the Large-Scale Trust-Region Subproblem Marielba Rojas, Bjørn H. Fotland, and Trond Steihaug

More information

A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation

A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation Tao Zhao 1, Feng-Nan Hwang 2 and Xiao-Chuan Cai 3 Abstract In this paper, we develop an overlapping domain decomposition

More information

c 2004 Society for Industrial and Applied Mathematics

c 2004 Society for Industrial and Applied Mathematics SIAM J. MATRIX ANAL. APPL. Vol. 6, No., pp. 377 389 c 004 Society for Industrial and Applied Mathematics SPECTRAL PROPERTIES OF THE HERMITIAN AND SKEW-HERMITIAN SPLITTING PRECONDITIONER FOR SADDLE POINT

More information

Linear Solvers. Andrew Hazel

Linear Solvers. Andrew Hazel Linear Solvers Andrew Hazel Introduction Thus far we have talked about the formulation and discretisation of physical problems...... and stopped when we got to a discrete linear system of equations. Introduction

More information

M.A. Botchev. September 5, 2014

M.A. Botchev. September 5, 2014 Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014 A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods. Plan and exercises. M.A. Botchev

More information

Block triangular preconditioner for static Maxwell equations*

Block triangular preconditioner for static Maxwell equations* Volume 3, N. 3, pp. 589 61, 11 Copyright 11 SBMAC ISSN 11-85 www.scielo.br/cam Block triangular preconditioner for static Maxwell equations* SHI-LIANG WU 1, TING-ZHU HUANG and LIANG LI 1 School of Mathematics

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Chebyshev semi-iteration in Preconditioning

Chebyshev semi-iteration in Preconditioning Report no. 08/14 Chebyshev semi-iteration in Preconditioning Andrew J. Wathen Oxford University Computing Laboratory Tyrone Rees Oxford University Computing Laboratory Dedicated to Victor Pereyra on his

More information

Department of Computer Science, University of Illinois at Urbana-Champaign

Department of Computer Science, University of Illinois at Urbana-Champaign Department of Computer Science, University of Illinois at Urbana-Champaign Probing for Schur Complements and Preconditioning Generalized Saddle-Point Problems Eric de Sturler, sturler@cs.uiuc.edu, http://www-faculty.cs.uiuc.edu/~sturler

More information

Iterative methods for Linear System of Equations. Joint Advanced Student School (JASS-2009)

Iterative methods for Linear System of Equations. Joint Advanced Student School (JASS-2009) Iterative methods for Linear System of Equations Joint Advanced Student School (JASS-2009) Course #2: Numerical Simulation - from Models to Software Introduction In numerical simulation, Partial Differential

More information

Jordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS

Jordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS Jordan Journal of Mathematics and Statistics JJMS) 53), 2012, pp.169-184 A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS ADEL H. AL-RABTAH Abstract. The Jacobi and Gauss-Seidel iterative

More information

Topics. The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems

Topics. The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems Topics The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems What about non-spd systems? Methods requiring small history Methods requiring large history Summary of solvers 1 / 52 Conjugate

More information

Conjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294)

Conjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294) Conjugate gradient method Descent method Hestenes, Stiefel 1952 For A N N SPD In exact arithmetic, solves in N steps In real arithmetic No guaranteed stopping Often converges in many fewer than N steps

More information

On Solving Large Algebraic. Riccati Matrix Equations

On Solving Large Algebraic. Riccati Matrix Equations International Mathematical Forum, 5, 2010, no. 33, 1637-1644 On Solving Large Algebraic Riccati Matrix Equations Amer Kaabi Department of Basic Science Khoramshahr Marine Science and Technology University

More information

In order to solve the linear system KL M N when K is nonsymmetric, we can solve the equivalent system

In order to solve the linear system KL M N when K is nonsymmetric, we can solve the equivalent system !"#$% "&!#' (%)!#" *# %)%(! #! %)!#" +, %"!"#$ %*&%! $#&*! *# %)%! -. -/ 0 -. 12 "**3! * $!#%+,!2!#% 44" #% &#33 # 4"!#" "%! "5"#!!#6 -. - #% " 7% "3#!#3! - + 87&2! * $!#% 44" ) 3( $! # % %#!!#%+ 9332!

More information

Recent advances in approximation using Krylov subspaces. V. Simoncini. Dipartimento di Matematica, Università di Bologna.

Recent advances in approximation using Krylov subspaces. V. Simoncini. Dipartimento di Matematica, Università di Bologna. Recent advances in approximation using Krylov subspaces V. Simoncini Dipartimento di Matematica, Università di Bologna and CIRSA, Ravenna, Italy valeria@dm.unibo.it 1 The framework It is given an operator

More information

Introduction. Chapter One

Introduction. Chapter One Chapter One Introduction The aim of this book is to describe and explain the beautiful mathematical relationships between matrices, moments, orthogonal polynomials, quadrature rules and the Lanczos and

More information

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012. Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems

More information

ITERATIVE METHODS BASED ON KRYLOV SUBSPACES

ITERATIVE METHODS BASED ON KRYLOV SUBSPACES ITERATIVE METHODS BASED ON KRYLOV SUBSPACES LONG CHEN We shall present iterative methods for solving linear algebraic equation Au = b based on Krylov subspaces We derive conjugate gradient (CG) method

More information

On solving linear systems arising from Shishkin mesh discretizations

On solving linear systems arising from Shishkin mesh discretizations On solving linear systems arising from Shishkin mesh discretizations Petr Tichý Faculty of Mathematics and Physics, Charles University joint work with Carlos Echeverría, Jörg Liesen, and Daniel Szyld October

More information

1. Introduction. We consider the solution of systems of linear equations with the following block 2 2 structure:

1. Introduction. We consider the solution of systems of linear equations with the following block 2 2 structure: SIAM J. MATRIX ANAL. APPL. Vol. 26, No. 1, pp. 20 41 c 2004 Society for Industrial and Applied Mathematics A PRECONDITIONER FOR GENERALIZED SADDLE POINT PROBLEMS MICHELE BENZI AND GENE H. GOLUB Abstract.

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Lab 1: Iterative Methods for Solving Linear Systems

Lab 1: Iterative Methods for Solving Linear Systems Lab 1: Iterative Methods for Solving Linear Systems January 22, 2017 Introduction Many real world applications require the solution to very large and sparse linear systems where direct methods such as

More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

Preconditioned inverse iteration and shift-invert Arnoldi method

Preconditioned inverse iteration and shift-invert Arnoldi method Preconditioned inverse iteration and shift-invert Arnoldi method Melina Freitag Department of Mathematical Sciences University of Bath CSC Seminar Max-Planck-Institute for Dynamics of Complex Technical

More information

Arnoldi Methods in SLEPc

Arnoldi Methods in SLEPc Scalable Library for Eigenvalue Problem Computations SLEPc Technical Report STR-4 Available at http://slepc.upv.es Arnoldi Methods in SLEPc V. Hernández J. E. Román A. Tomás V. Vidal Last update: October,

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

EIGIFP: A MATLAB Program for Solving Large Symmetric Generalized Eigenvalue Problems

EIGIFP: A MATLAB Program for Solving Large Symmetric Generalized Eigenvalue Problems EIGIFP: A MATLAB Program for Solving Large Symmetric Generalized Eigenvalue Problems JAMES H. MONEY and QIANG YE UNIVERSITY OF KENTUCKY eigifp is a MATLAB program for computing a few extreme eigenvalues

More information

INCOMPLETE FACTORIZATION CONSTRAINT PRECONDITIONERS FOR SADDLE-POINT MATRICES

INCOMPLETE FACTORIZATION CONSTRAINT PRECONDITIONERS FOR SADDLE-POINT MATRICES INCOMPLEE FACORIZAION CONSRAIN PRECONDIIONERS FOR SADDLE-POIN MARICES H. S. DOLLAR AND A. J. WAHEN Abstract. We consider the application of the conjugate gradient method to the solution of large symmetric,

More information

A Review of Preconditioning Techniques for Steady Incompressible Flow

A Review of Preconditioning Techniques for Steady Incompressible Flow Zeist 2009 p. 1/43 A Review of Preconditioning Techniques for Steady Incompressible Flow David Silvester School of Mathematics University of Manchester Zeist 2009 p. 2/43 PDEs Review : 1984 2005 Update

More information

On the accuracy of saddle point solvers

On the accuracy of saddle point solvers On the accuracy of saddle point solvers Miro Rozložník joint results with Valeria Simoncini and Pavel Jiránek Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic Seminar at

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

Efficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization

Efficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization Efficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization Timo Heister, Texas A&M University 2013-02-28 SIAM CSE 2 Setting Stationary, incompressible flow problems

More information

Incomplete Cholesky preconditioners that exploit the low-rank property

Incomplete Cholesky preconditioners that exploit the low-rank property anapov@ulb.ac.be ; http://homepages.ulb.ac.be/ anapov/ 1 / 35 Incomplete Cholesky preconditioners that exploit the low-rank property (theory and practice) Artem Napov Service de Métrologie Nucléaire, Université

More information

A new iterative method for solving a class of complex symmetric system of linear equations

A new iterative method for solving a class of complex symmetric system of linear equations A new iterative method for solving a class of complex symmetric system of linear equations Davod Hezari Davod Khojasteh Salkuyeh Vahid Edalatpour Received: date / Accepted: date Abstract We present a new

More information

MIT Final Exam Solutions, Spring 2017

MIT Final Exam Solutions, Spring 2017 MIT 8.6 Final Exam Solutions, Spring 7 Problem : For some real matrix A, the following vectors form a basis for its column space and null space: C(A) = span,, N(A) = span,,. (a) What is the size m n of

More information

Parallel Algorithms for Four-Dimensional Variational Data Assimilation

Parallel Algorithms for Four-Dimensional Variational Data Assimilation Parallel Algorithms for Four-Dimensional Variational Data Assimilation Mie Fisher ECMWF October 24, 2011 Mie Fisher (ECMWF) Parallel 4D-Var October 24, 2011 1 / 37 Brief Introduction to 4D-Var Four-Dimensional

More information

Iterative methods for positive definite linear systems with a complex shift

Iterative methods for positive definite linear systems with a complex shift Iterative methods for positive definite linear systems with a complex shift William McLean, University of New South Wales Vidar Thomée, Chalmers University November 4, 2011 Outline 1. Numerical solution

More information

B553 Lecture 5: Matrix Algebra Review

B553 Lecture 5: Matrix Algebra Review B553 Lecture 5: Matrix Algebra Review Kris Hauser January 19, 2012 We have seen in prior lectures how vectors represent points in R n and gradients of functions. Matrices represent linear transformations

More information

Solving Large Nonlinear Sparse Systems

Solving Large Nonlinear Sparse Systems Solving Large Nonlinear Sparse Systems Fred W. Wubs and Jonas Thies Computational Mechanics & Numerical Mathematics University of Groningen, the Netherlands f.w.wubs@rug.nl Centre for Interdisciplinary

More information

Preconditioning for Nonsymmetry and Time-dependence

Preconditioning for Nonsymmetry and Time-dependence Preconditioning for Nonsymmetry and Time-dependence Andy Wathen Oxford University, UK joint work with Jen Pestana and Elle McDonald Jeju, Korea, 2015 p.1/24 Iterative methods For self-adjoint problems/symmetric

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Jim Lambers MAT 610 Summer Session Lecture 2 Notes

Jim Lambers MAT 610 Summer Session Lecture 2 Notes Jim Lambers MAT 610 Summer Session 2009-10 Lecture 2 Notes These notes correspond to Sections 2.2-2.4 in the text. Vector Norms Given vectors x and y of length one, which are simply scalars x and y, the

More information

7.4 The Saddle Point Stokes Problem

7.4 The Saddle Point Stokes Problem 346 CHAPTER 7. APPLIED FOURIER ANALYSIS 7.4 The Saddle Point Stokes Problem So far the matrix C has been diagonal no trouble to invert. This section jumps to a fluid flow problem that is still linear (simpler

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 19: Computing the SVD; Sparse Linear Systems Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

Contents. Preface... xi. Introduction...

Contents. Preface... xi. Introduction... Contents Preface... xi Introduction... xv Chapter 1. Computer Architectures... 1 1.1. Different types of parallelism... 1 1.1.1. Overlap, concurrency and parallelism... 1 1.1.2. Temporal and spatial parallelism

More information

Some Preconditioning Techniques for Saddle Point Problems

Some Preconditioning Techniques for Saddle Point Problems Some Preconditioning Techniques for Saddle Point Problems Michele Benzi 1 and Andrew J. Wathen 2 1 Department of Mathematics and Computer Science, Emory University, Atlanta, Georgia 30322, USA. benzi@mathcs.emory.edu

More information

Modified HSS iteration methods for a class of complex symmetric linear systems

Modified HSS iteration methods for a class of complex symmetric linear systems Computing (2010) 87:9 111 DOI 10.1007/s00607-010-0077-0 Modified HSS iteration methods for a class of complex symmetric linear systems Zhong-Zhi Bai Michele Benzi Fang Chen Received: 20 October 2009 /

More information

ETNA Kent State University

ETNA Kent State University Electronic Transactions on Numerical Analysis. Volume 36, pp. 39-53, 009-010. Copyright 009,. ISSN 1068-9613. ETNA P-REGULAR SPLITTING ITERATIVE METHODS FOR NON-HERMITIAN POSITIVE DEFINITE LINEAR SYSTEMS

More information

Lecture 6 Positive Definite Matrices

Lecture 6 Positive Definite Matrices Linear Algebra Lecture 6 Positive Definite Matrices Prof. Chun-Hung Liu Dept. of Electrical and Computer Engineering National Chiao Tung University Spring 2017 2017/6/8 Lecture 6: Positive Definite Matrices

More information

A generalization of the Gauss-Seidel iteration method for solving absolute value equations

A generalization of the Gauss-Seidel iteration method for solving absolute value equations A generalization of the Gauss-Seidel iteration method for solving absolute value equations Vahid Edalatpour, Davod Hezari and Davod Khojasteh Salkuyeh Faculty of Mathematical Sciences, University of Guilan,

More information

Domain decomposition on different levels of the Jacobi-Davidson method

Domain decomposition on different levels of the Jacobi-Davidson method hapter 5 Domain decomposition on different levels of the Jacobi-Davidson method Abstract Most computational work of Jacobi-Davidson [46], an iterative method suitable for computing solutions of large dimensional

More information

ABSTRACT OF DISSERTATION. Ping Zhang
