Frobenius Norm Minimization and Probing for Preconditioning

International Journal of Computer Mathematics
Vol. 00, No. 05, May 2007, 1-31

Frobenius Norm Minimization and Probing for Preconditioning

Thomas Huckle (huckle@in.tum.de) and Alexander Kallischko (kallisch@in.tum.de)
Fakultät für Informatik, Technische Universität München, Boltzmannstr. 3, Garching, Germany

(Received 00 Month 200x; in final form 00 Month 200x)

In this paper we introduce a new method for defining preconditioners for the iterative solution of a system of linear equations. By generalizing the class of modified preconditioners (e.g. MILU), the interface probing, and the class of preconditioners related to Frobenius norm minimization (e.g. FSAI, SPAI), we develop a toolbox for computing preconditioners that are improved relative to a given small probing subspace. Furthermore, by this MSPAI (modified SPAI) probing approach we can improve any given preconditioner with respect to this probing subspace. All the computations are embarrassingly parallel. Additionally, for symmetric linear systems we introduce new techniques for symmetrizing preconditioners. Many numerical examples, e.g. from PDE applications such as domain decomposition and the Stokes problem, show that these new preconditioners often lead to faster convergence and smaller condition numbers.

Keywords: preconditioning, Schur complements, probing, sparse approximate inverses, modified incomplete factorization

AMS Subject Classification: 65F10, 65F50

1 Introduction

For the iterative solution of sparse linear equations Ax = b, e.g. based on the conjugate gradient method, BiCGstab, GMRES, or QMR, the choice of the preconditioner is very important (see [6]). Usually we differentiate between explicit preconditioners that are approximations of A itself, like the Jacobi, Gauss-Seidel, or ILU methods, and approximations of A^{-1} such as polynomial preconditioners or Frobenius norm preconditioners like SPAI. Here, we are interested in developing a new class of explicit and inverse preconditioners by improving three types of preconditioners:

(i) approximate inverses by Frobenius norm minimization such as SPAI or FSAI,
(ii) probing, by choosing special subspaces where the preconditioner coincides with the action of the given matrix,
(iii) modifying a given preconditioner in order to improve the behaviour of the preconditioner with respect to certain subspaces, as in modified ILU.

These three starting points lead to three modifications of well-known algorithms:

- Computing approximate inverses by Frobenius norm minimization can be generalized, e.g. by allowing weight matrices inside the Frobenius norm as described in [20], or by adding side conditions in the form of additional rows to the original matrix minimization problem. Here, the main task is to choose meaningful weight matrices that improve the behaviour of the preconditioner.
- Generalizing the probing approach by treating the probing ansatz as a regularized Least Squares (LS) problem. The regularization appears as an additional Frobenius norm matrix minimization problem. Then a wider choice of probing vectors can be used for any type of given matrix A.
- Modifying a given preconditioner by finding a nearby improved matrix with a better approximation property on certain subspaces.

These new preconditioners can lead to better convergence in many examples, and the computations involved are embarrassingly parallel. Surprisingly, these three tasks can be treated in the same way as special weighted Frobenius norm minimization problems. In the following we will call this approach MSPAI probing.

One drawback of preconditioners arising from Frobenius norm minimization is that certain properties like symmetry or positive definiteness are not maintained. Therefore, we develop techniques for symmetrizing sparse preconditioners in different ways. These methods deliver symmetric matrices that are often also symmetric and positive definite (spd) and allow the application of more efficient iterative solvers.

In the first section we give an overview of the well-known preconditioners we are interested in. Then, we introduce a general minimization approach and apply it to explicit and inverse preconditioners, combining the probing idea and the Least Squares minimization. This leads to new preconditioners for direct and inverse, resp. factorized and unfactorized approximations. Furthermore, symmetrization techniques for all cases are developed. In the last sections we present numerical examples and comparisons for PDE problems such as the Laplace equation and the Stokes problem.

1.1 Frobenius Norm Minimization and SPAI

The use of Frobenius norm minimization for constructing preconditioners for sparse matrices in a static way by

    \min_M \| AM - I \|_F

for a prescribed allowed sparsity pattern for M ≈ A^{-1} goes back to [5]. The computation of M can be split into n independent subproblems

    \min_{M_k} \| A M_k - e_k \|_2 ,   k = 1, ..., n,

with M_k the columns of M and e_k the columns of the identity matrix. In view of the sparsity of these Least Squares (LS) problems, each subproblem is related to a small matrix Â_k := A(I_k, J_k) with index set J_k given by the allowed pattern for M_k and I_k the so-called shadow of J_k, that is, the indices of the nonzero rows of A(:, J_k). These n small LS problems can be solved independently, e.g. based on QR decompositions of the matrices Â_k by using the Householder method or the modified Gram-Schmidt algorithm.

This approach has been improved and modified in different ways. Cosgrove, Díaz, Griewank, as well as Grote, Huckle, and Gould, Scott introduced dynamic methods for capturing an efficient sparsity pattern for M automatically (see [10, 13, 15]). Chow showed in [8] ways to prescribe an efficient static pattern a priori. Holland, Shaw, and Wathen have generalized this ansatz, allowing a target matrix on the right-hand side in the form \min_M \|AM - B\|_F in connection with some kind of two-level preconditioning: first compute a preconditioner B for A and then improve this preconditioner by a Frobenius norm minimization (see [16]). From the algorithmic point of view the minimization with target matrix B instead of I introduces no additional difficulties. Only the pattern of M should be chosen more carefully with respect to A and B. Kolotilina and Yeremin [20] generalized the method to compute factorized approximations, also allowing weight matrices inside the Frobenius norm,

    \min_M \| AM - I \|_W^2 = \min_M \| W^{1/2} (AM - I) \|_F^2 = \min_M \mathrm{tr}((AM - I) W (AM - I)^T)

(see also [19]). Huckle introduced in [17] dynamic versions of these factorized approximations. The factorized approximation in the spd case can again be computed as the solution of a Frobenius norm minimization problem: for A = L_A L_A^T solve \min_L \|L_A L - I\|_F with normal equations A(J_k, J_k) L_k(J_k) = L_{A,kk} e_k(J_k) for computing the columns of L. Here, in a first step the unknown main diagonal entry L_{A,kk} is replaced by 1; this allows the computation of a factor L by solving n small linear systems. In a second step L can be optimized by a diagonal scaling with diag(D (L^T A L) D) = I. The approximate Cholesky factor for A^{-1} is then given by L_D := LD. An advantage of all these methods is that they can be used as black-box methods and that they are easily parallelizable. But often the approximation quality is not sufficient. Furthermore, symmetry is only preserved in the factorized approach.
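To make the column-wise least-squares computation concrete, the following small Python sketch (our own illustration using SciPy, not the authors' implementation; the function name spai_static and all variable names are ours) computes a static SPAI for a prescribed pattern exactly as described above: for every column, the pattern J_k and its shadow I_k are extracted and a small dense LS problem is solved.

import numpy as np
import scipy.sparse as sp

def spai_static(A, pattern):
    """Minimize ||A M - I||_F column by column over the 0/1 sparsity 'pattern'."""
    n = A.shape[0]
    A = sp.csc_matrix(A)
    P = sp.csc_matrix(pattern)
    cols = []
    for k in range(n):
        Jk = P[:, k].nonzero()[0]                    # allowed entries of column M_k
        Ik = np.unique(A[:, Jk].nonzero()[0])        # shadow of J_k: nonzero rows of A(:, J_k)
        A_hat = A[np.ix_(Ik, Jk)].toarray()          # small dense LS matrix
        e_k = (Ik == k).astype(float)                # e_k restricted to the shadow rows
        m_k, *_ = np.linalg.lstsq(A_hat, e_k, rcond=None)
        cols.append(sp.csc_matrix((m_k, (Jk, np.zeros_like(Jk))), shape=(n, 1)))
    return sp.hstack(cols).tocsc()

For the target-matrix variant \min_M \|AM - B\|_F of [16], only the right-hand side e_k has to be replaced by the corresponding restriction of the k-th column of B.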

1.2 Probing

The probing technique was introduced for preconditioning interface matrices in domain decomposition methods (see Chan in [7], Axelsson [3], or more recently Siefert and de Sturler in [22]). Here, the interface matrix is a Schur complement S. As a preconditioner for S one defines e.g. a band matrix M that shows the right behaviour on certain subspaces or probing vectors e_j, thus M e_j = S e_j, j = 1, ..., k. As probing vectors one has to choose very special vectors, e.g.

    e_1 = (1, 0, 0, 1, 0, 0, 1, ...)^T,  e_2 = (0, 1, 0, 0, 1, 0, 0, 1, ...)^T,  e_3 = (0, 0, 1, 0, 0, 1, ...)^T.

This choice results in simple equations for computing the preconditioner M:

    M (e_1, e_2, e_3) = \begin{pmatrix} m_{11} & m_{12} & 0 \\ m_{21} & m_{22} & m_{23} \\ m_{34} & m_{32} & m_{33} \\ m_{44} & m_{45} & m_{43} \\ \vdots & \vdots & \vdots \end{pmatrix} = (S e_1, S e_2, S e_3).        (1)

From this equation one can read off the entries of M by comparison with the prescribed vectors M e_j = S e_j. Note that the resulting M will not be symmetric and that the probing equation M e_j = S e_j will not be satisfied exactly in view of the band structure of M; e.g. the first component of S e_3 may not be zero. In the special case of only one probing vector and a diagonal matrix M it holds M = diag(Se). Axelsson and Polman have improved this approach by introducing the method of action in [3], which in some cases leads to an spd preconditioner.

The advantage of these methods is that without explicit knowledge of the problem it is possible to obtain a preconditioner that imitates the behaviour of the original problem on certain subspaces. The disadvantages are that one has to choose special probing vectors that lead to an easy-to-solve system for M; furthermore, it is not easy to include special properties like symmetry.
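As a small illustration of the read-off in (1) (our own sketch, not taken from the paper), the following Python fragment builds a tridiagonal probing preconditioner from nothing but three matrix-vector products with S; for a purely diagonal pattern and a single probing vector it reduces to M = diag(Se).

import numpy as np

def probing_tridiagonal(S_mult, n):
    """S_mult(v) returns S @ v; only matrix-vector products with S are needed."""
    E = np.zeros((n, 3))
    for j in range(3):
        E[j::3, j] = 1.0                     # the period-3 probing vectors e_1, e_2, e_3
    SE = np.column_stack([S_mult(E[:, j]) for j in range(3)])
    M = np.zeros((n, n))
    for i in range(n):
        for d in (-1, 0, 1):                 # sub-, main, and superdiagonal
            j = i + d
            if 0 <= j < n:
                M[i, j] = SE[i, j % 3]       # entry (i, j) appears only in S e_{(j mod 3)+1}
    return M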

1.3 Modified Incomplete Factorizations

The modified incomplete factorizations were introduced by Gustafsson (see [14]) and by Axelsson [1] as an improvement of the ILU algorithm. In the ILU method the Gaussian elimination process is restricted to a certain sparsity pattern. Therefore, new entries that appear at certain not-allowed positions in U are deleted. The result is an approximate factorization A = LU + R, where R contains the neglected entries. The aim of the modified approach is again to imitate the action of the given matrix on a certain subspace. In this case the subspace is generated by the vector of all ones. In PDE examples this vector plays an important role in the sense that the discretized problem leads to a matrix where A (1, 1, ..., 1)^T is zero up to the boundary conditions. Hence, this vector can be used as an approximation of the subspace belonging to small eigenvalues. If we force the preconditioner to coincide with the given problem on this subspace, we expect to improve the condition number of the preconditioned system. In order to maintain the action of the preconditioner relative to the vector of all ones, we try to preserve the row sums of the triangular factors in the ILU method. Therefore, in the MILU algorithm we do not delete the entries at not-allowed positions but add them to the related main diagonal entry, thereby maintaining the row sum. For PDE problems such as the 2D Laplacian this MILU improves the condition number significantly. As a disadvantage, this approach is restricted to the vector of all ones, and the method is not very robust for more general problems.

2 Generalized Form of Frobenius Norm Minimization and Probing

As we have seen in the previous sections, it seems desirable to introduce a probing or modified approach for more general problems. With this aim, we consider a SPAI-like Frobenius norm minimization in the most general form. This new form allows the approximation of an arbitrary rectangular matrix B replacing the identity matrix I in the SPAI form \min_M \|AM - I\|_F (compare the target matrix SPAI form in [16]). Furthermore, we replace A by any rectangular matrix C. Here, B and C should be sparse matrices, but we also allow a small number of dense rows in C and B. The matrix M that we want to compute has a prescribed sparsity pattern (it is also possible to update the sparsity pattern dynamically, e.g. by a SPAI-like approach, but we do not consider this dynamic method here). Technically, this causes no difference in the implementation compared with the original SPAI algorithm. Thus, for given matrices C, B ∈ R^{m×n} we want to find a matrix M ∈ R^{n×n} by solving the MSPAI minimization

    \min_M \| CM - B \|_F .        (2)

Analogously to SPAI-type algorithms, we can solve the minimization (2) completely in parallel, because we can still compute M column-wise. Another property inherited from SPAI is the ability to prescribe an almost arbitrary sparsity pattern for M.

Note that a meaningful pattern for M should take into account the sparsity structure of C and of B as well. This new approach also allows one to consider lower/upper triangular matrices for M, C, and B.

As the main advantage of this general formulation, we can add further conditions to a given Frobenius norm minimization. This allows the inclusion of probing constraints in the general computation of a preconditioner. Consider the given minimization problem \min_M \|C_0 M - B_0\|_F, e.g. with C_0 = A and B_0 = I for approximating A^{-1}, or C_0 = I and B_0 = A for approximating A. Then for any given set of probing vectors collected in the rectangular matrix e^T we can add the probing conditions \min_M \|e^T (C_0 M - B_0)\|_F = \min_M \|g^T M - h^T\|_F with g^T := e^T C_0 and h^T := e^T B_0 in the form

    \min_M \left\| \begin{pmatrix} C_0 \\ \rho g^T \end{pmatrix} M - \begin{pmatrix} B_0 \\ \rho h^T \end{pmatrix} \right\|_F ,   0 ≤ ρ ∈ R,        (3)

with a weight ρ. The number ρ determines whether we want to give more weight to the original norm minimization or to the additional probing condition. The k probing vectors, which are stored in e ∈ R^{n×k}, are added row-wise to the corresponding matrices C_0 and B_0 in the form of the matrices g, h ∈ R^{n×k}. This can be seen as a generalization of a Frobenius norm approximation which tries to improve the preconditioner on a certain subspace given by e. Just as well, it can be seen as a regularization technique for a general probing approach. In standard probing we can only consider probing vectors and sparsity patterns of M that lead to easily solvable linear systems for computing the entries of M from the probing conditions. In this new approach we can choose any probing vectors e and sparsity pattern of M, also leading to under- or overdetermined linear systems, because of the embedding of the probing method into the general matrix approximation setting.

In the given original problem \min_M \|C_0 M - B_0\|_F we could also replace the Frobenius norm by any other norm, e.g. by adding a weight matrix (as considered in [20]), \min_M \|W (C_0 M - B_0)\|_F. By choosing as weight the rectangular matrix

    W := \begin{pmatrix} I \\ \rho e^T \end{pmatrix}        (4)

we end up with the same generalized MSPAI norm minimization (3).
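The stacked problem (3) is solved with the same column-wise machinery as ordinary SPAI. The following sketch (illustrative only, assuming SciPy; mspai_probing and all other names are ours) appends the weighted probing rows g^T = e^T C_0 and h^T = e^T B_0 and then proceeds column by column over the prescribed pattern.

import numpy as np
import scipy.sparse as sp

def mspai_probing(C0, B0, pattern, e, rho):
    """Minimize ||[C0; rho*e^T C0] M - [B0; rho*e^T B0]||_F column-wise; e is n x k."""
    C0, B0 = sp.csc_matrix(C0), sp.csc_matrix(B0)
    g = rho * (C0.T @ e).T                           # k additional (possibly dense) rows for C
    h = rho * (B0.T @ e).T                           # k additional rows for B
    C = sp.vstack([C0, sp.csc_matrix(g)]).tocsc()
    B = sp.vstack([B0, sp.csc_matrix(h)]).tocsc()
    n = C0.shape[1]
    P = sp.csc_matrix(pattern)
    cols = []
    for k in range(n):
        Jk = P[:, k].nonzero()[0]
        Ik = np.unique(C[:, Jk].nonzero()[0])        # shadow of J_k in the stacked C
        C_hat = C[np.ix_(Ik, Jk)].toarray()
        b_k = B[:, k].toarray().ravel()[Ik]
        m_k, *_ = np.linalg.lstsq(C_hat, b_k, rcond=None)
        cols.append(sp.csc_matrix((m_k, (Jk, np.zeros_like(Jk))), shape=(n, 1)))
    return sp.hstack(cols).tocsc()

With C0 = A, B0 = I this gives the probed approximate inverse of Section 2.1 below, and with C0 = I, B0 = A the explicit approximation of Section 2.2; rows of B outside the shadow only contribute a constant to the residual and do not change the minimizer.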

2.1 Sparse Approximate Inverses and Probing

As a first application, we can use the method described above to add some probing information to a SPAI, i.e. an unfactorized approximation of the inverse A^{-1} of a given matrix A ∈ R^{n×n}: we choose C_0 = A in (3), and for B_0 we take the n-dimensional identity matrix I. Here, h becomes the given probing vector e, and g^T = e^T A. With that, we obtain the MSPAI Frobenius norm minimization

    \min_M \left\| \begin{pmatrix} A \\ \rho e^T A \end{pmatrix} M - \begin{pmatrix} I \\ \rho e^T \end{pmatrix} \right\|_F = \min_M \| W (AM - I) \|_F        (5)

with the weight matrix (4). The result of this method is an approximation M to the inverse A^{-1} which, with g^T = e^T A and M ≈ A^{-1}, fulfills g^T M ≈ e^T.

In many applications the matrix A is not given explicitly but only via a sparse approximation Ã. Nevertheless, we assume that we are able to compute g^T = e^T A exactly. In this situation, we modify the above minimization to

    \min_M \left\| \begin{pmatrix} \tilde{A} \\ \rho e^T A \end{pmatrix} M - \begin{pmatrix} I \\ \rho e^T \end{pmatrix} \right\|_F .        (6)

Then probing is used to derive a sparse approximate inverse M for Ã ≈ A with respect to the exact probing conditions e^T A M = e^T, where Ã is an approximation of the original matrix A.

2.2 Explicit Approximation and Probing

With our approach, we can not only add constraints to the approximation of the inverse A^{-1} of a given matrix, but also to explicit approximations of A. This can be used if A is (nearly) dense, in order to get a sparse approximation of A. In contrast to the last section, we choose C_0 = I and B_0 = A with probing vector g = e. Consequently, we choose h^T = e^T A. Altogether, this yields

    \min_M \left\| \begin{pmatrix} I \\ \rho e^T \end{pmatrix} M - \begin{pmatrix} A \\ \rho e^T A \end{pmatrix} \right\|_F = \min_M \| W (M - A) \|_F        (7)

with the above weight matrix W from (4).

Now we get a sparse approximation M of A which also tries to reflect the properties of A on certain vectors: h^T = e^T A ≈ e^T M. In many cases, A is again only given implicitly or is (nearly) dense, and we have to use a sparse approximation Ã. This leads to the MSPAI problem

    \min_M \left\| \begin{pmatrix} I \\ \rho e^T \end{pmatrix} M - \begin{pmatrix} \tilde{A} \\ \rho e^T A \end{pmatrix} \right\|_F .        (8)

Note that this explicit approach only makes sense if linear equations in M can be solved easily, as the inverse of M has to be used for preconditioning.

2.3 Symmetrization in the Unfactorized Case

For the SPAI algorithm as well as for the generalized problem (2) the solution matrices M are usually nonsymmetric. Nevertheless, for (positive definite) symmetric matrices A we would like to have a (positive definite) symmetric preconditioner, so that it is still possible to use e.g. the preconditioned conjugate gradient method (pcg). Otherwise, one has to rely on iterative methods for nonsymmetric problems, such as BiCGstab, GMRES, or QMR, which have higher computational costs. To overcome this problem, we present two symmetrization techniques, e.g. for the preconditioner M of the previous two sections. The first approach is based on the MSPAI Frobenius norm minimization. In the second approach we combine two iteration steps, writing the resulting iteration as one step with a symmetric preconditioner.

2.3.1 Symmetrization by Frobenius Norm Minimization. First, we consider M derived by the MSPAI minimization relative to symmetric matrices C_0 and B_0 and a set of general probing vectors in e. We want to derive a symmetrization of M with a similar sparsity pattern. We start with the trivial symmetrization M̄ := M + M^T. We can improve this matrix by returning to the initial minimization problem, allowing additional changes in the main diagonal entries. Hence we define

    M̂ := α (M + M^T) + D = α M̄ + D        (9)

with a scaling factor α ∈ R and a diagonal correction D ∈ R^{n×n}.

Obviously, M̂ is symmetric by construction. We determine α and D in two steps. In order to find an optimal value for α, we insert αM̄ into the basic problem (3):

    \min_\alpha \left\| \begin{pmatrix} C_0 \\ \rho g^T \end{pmatrix} (\alpha \bar{M}) - \begin{pmatrix} B_0 \\ \rho h^T \end{pmatrix} \right\|_F = \min_\alpha \| \alpha C \bar{M} - B \|_F ,        (10)

where we abbreviate C := (C_0; ρg^T) and B := (B_0; ρh^T). For the analytic solution, we consider the inner product (denoted by ⟨·,·⟩) which for general matrices A and B is related to the Frobenius norm:

    ⟨A, B⟩ := tr(A^T B),   ⟨A, A⟩ = \|A\|_F^2 .        (11)

In these minimizations, the trace of the product of two matrices can be evaluated by sparse dot products and only a few possibly dense dot products:

    tr(AB) = \sum_{j=1}^{N} a_j^T b_j        (12)

with the j-th row a_j of A and the j-th column b_j of the matrix B. With formula (11), we can solve (10) for α using standard properties of the inner product. In a following step we can determine the optimal diagonal matrix D by a standard Frobenius norm minimization. Note that these computations are fast because of the sparsity of the underlying matrices and the low rank of the possibly dense submatrices g and h. But in general, this symmetrization technique does not necessarily yield a positive definite preconditioner.
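A small sketch of this two-step determination (our own illustration; it assumes that the stacked matrices C = (C_0; ρg^T) and B = (B_0; ρh^T) of (3) are available as SciPy sparse matrices, and the function name is ours): the optimal α follows from the inner product (11), evaluated via element-wise products as in (12), and the diagonal correction D is fitted column by column afterwards.

import numpy as np
import scipy.sparse as sp

def symmetrize_alpha_D(M, C, B):
    """Return alpha*Mbar + D as in (9), with Mbar = M + M^T."""
    M, C, B = sp.csc_matrix(M), sp.csc_matrix(C), sp.csc_matrix(B)
    Mbar = M + M.T                                               # trivial symmetrization
    CM = C @ Mbar
    alpha = (CM.multiply(B)).sum() / (CM.multiply(CM)).sum()     # <CM, B>_F / ||CM||_F^2
    R = B - alpha * CM                                           # residual for the diagonal fit
    d = np.zeros(M.shape[0])
    for k in range(M.shape[0]):
        c_k = C[:, [k]].toarray().ravel()
        r_k = R[:, [k]].toarray().ravel()
        if c_k @ c_k > 0:
            d[k] = (c_k @ r_k) / (c_k @ c_k)                     # column-wise LS fit of D
    return alpha * Mbar + sp.diags(d)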

2.3.2 Symmetrization by Combining Two Basic Iteration Steps. A second approach for a symmetrization of M considers two basic iterative steps, one with preconditioner M, the other with M^T. Here, we consider M as an approximation of the inverse of A, so the first method is related to the iteration matrix I - AM, the second to I - AM^T. The two consecutive steps are described by the iteration matrix

    (I - AM^T)(I - AM) = I - AM^T - AM + AM^T AM = I - A (M + M^T - M^T AM).

Hence, the resulting symmetric preconditioner is given by M + M^T - M^T AM. To add an additional degree of freedom, we consider the damped iteration with preconditioner αM. This leads to

    (I - αAM^T)(I - αAM) = I - αAM^T - αAM + α^2 AM^T AM = I - αA (M + M^T - αM^T AM)

and the symmetrized preconditioner

    M̂_α := M + M^T - α M^T A M .        (13)

This preconditioner is denser than M, but may lead to a reduction of the error comparable with two steps based on αM. To be sure that the two iteration steps lead to an improved approximate solution we need \|I - αAM\| < 1:

Theorem 2.1  Let us assume that λ(AM) > 0 holds for all eigenvalues of AM. Then for 0 < α < 2/λ_max(AM) we get \|I - αAM\| < 1 in the spectral norm. The optimal choice of α that leads to a minimum norm of I - αAM is given by

    α = 2 / (λ_max(AM) + λ_min(AM))   with   \|I - αAM\| = (λ_max(AM) - λ_min(AM)) / (λ_max(AM) + λ_min(AM)) < 1.

Remark 1  To derive this optimal α, one needs approximations for the extreme eigenvalues of AM.
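A sketch of the resulting construction (illustrative only; it assumes that AM has positive eigenvalues as in Theorem 2.1, and the extreme eigenvalues are only estimated roughly, cf. Remark 1):

import scipy.sparse as sp
import scipy.sparse.linalg as spla

def symmetrize_two_step(A, M):
    """Return M + M^T - alpha*M^T A M from (13) with alpha as in Theorem 2.1."""
    AM = sp.csc_matrix(A @ M)
    # rough Krylov estimates of the extreme eigenvalues are sufficient (Remark 1)
    lam_max = spla.eigs(AM, k=1, which='LR', return_eigenvectors=False)[0].real
    lam_min = spla.eigs(AM, k=1, which='SR', return_eigenvectors=False)[0].real
    alpha = 2.0 / (lam_max + lam_min)                # optimal damping
    return M + M.T - alpha * (M.T @ (A @ M))         # symmetric whenever A is symmetric

If the residual R = I - AM is already available from SPAI, the product can be reused via M + M^T ((1 - α)I + αR), as exploited later in the text.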

This symmetrization with M and M̂_α does not include an additional probing condition. Fortunately, we can show that this symmetrization process preserves the probing property for slightly modified preconditioners:

Theorem 2.2  Assume that M is a preconditioner that satisfies the probing condition e^T AM = e^T exactly. For M̃ := M + M^T - αMAM^T with α < 2 (e.g. α = 0 and M̃ = M + M^T) the Rayleigh quotient for the vector Ae is given by

    (e^T A) M̃ (Ae) = (2 - α)(e^T A) A^{-1} (Ae) = (2 - α)(e^T A) M (Ae),

and therefore the range of values of the preconditioned matrix with M̃ is nearly unchanged relative to the probing space.

Proof  It holds

    (e^T A) M̃ (Ae) = (e^T A)(M + M^T - αMAM^T)(Ae) = e^T Ae + e^T Ae - α e^T Ae = (2 - α) e^T Ae,
    (2 - α) e^T Ae = (2 - α)(e^T A) A^{-1} (Ae),
    (2 - α) e^T Ae = (2 - α)(e^T AM)(Ae) = (2 - α)(e^T A) M (Ae).

Remark 2  For α = 1, M̃ satisfies the probing condition exactly:

    e^T A M̃ = e^T A (M + M^T - αMAM^T) = e^T + e^T AM^T - α e^T AM^T = e^T .

2.3.3 The Case of Symmetric Positive Definite Matrices. Until now we have only derived symmetric preconditioners. But often for spd A we need an spd preconditioner. So now we consider only spd matrices A. If M̄ is symmetric indefinite, the above methods will not lead to an spd preconditioner. So let us assume that M̄ = M + M^T is positive definite. If this is not true we could replace M̄ by M̄ + βI with a small β. Note that for a good approximation M it should hold that M̄ is nearly positive definite, and therefore we can hope to find such a small β > 0.

Proposition 2.3  Let A and M̄ be symmetric positive definite. Then M̂_α = M̄ - αM^T AM will also be positive definite as long as α < λ_min(M̄, M^T AM), the minimum eigenvalue of the generalized positive definite eigenvalue problem M̄ = λ M^T AM.

Remark 3  Therefore, we have to choose α such that it satisfies the inequalities

    α ≤ min { 2 / (λ_max(AM) + λ_min(AM)),  λ_min(M̄, M^T AM) } =: α_opt .

We can derive a deeper analysis by considering the preconditioner M̄ - α M̄ A M̄ / 4 that is the result of applying the above symmetrizing construction to M̄/2.

Theorem 2.4  Let A and M̄ be spd. Then, for M̄ - α M̄ A M̄ / 4 the optimal α is given by α = 2/(λ_max(A M̄/2) + λ_min(A M̄/2)). With this choice we get a new, improved condition number of the preconditioned system. The condition number is improved by a factor (λ + µ)^2 / (4λ^2) ≥ 1/4.

Proof  The minimum eigenvalue of the preconditioned system is given by 2λµ/(λ + µ) and the maximum eigenvalue by (λ + µ)/2, with λ and µ the maximum resp. minimum eigenvalues of M̄A/2; hence the condition number of the preconditioned system is (λ + µ)^2/(4λµ).

Note that we can also sparsify M + M^T - αM^T AM carefully in order to avoid a preconditioner that would be too dense. Furthermore, we can sometimes save costs in computing M̂_α or A M̂_α. In SPAI, for example, we have already computed R = I - AM and can use this information, e.g. in the form

    M + M^T - αM^T AM = M + M^T (I - αAM) = M + M^T ((1 - α) I + αR).

In case we want to symmetrize a preconditioner M that is an approximation of A itself, based on AM^{-1}, we can apply the same method to M^{-1} and get

    M^{-1} + M^{-T} - α M^{-T} A M^{-1}        (14)

as a symmetric approximation of A^{-1}. When M is the upper or lower triangular part of A (the Gauss-Seidel preconditioner), this symmetrization approach is closely related to SSOR preconditioners. In general, we can apply this symmetrizing method if AM has only positive eigenvalues, and for spd A we can derive an spd preconditioner if M̄ is spd.

2.3.4 Numerical Examples. The effectiveness of these simple symmetrization techniques emerges in the following small examples.

Table 1. Condition numbers for no preconditioning I, the unsymmetrized preconditioner M, M̄, M̂, and M̂_α employed as explicit approximate preconditioners AM^{-1} (columns: n, I, M, M̄, M̂, M̂_α).

For the case of explicit approximations of A, we choose A := LL^T, where L comes from an IC(0) factorization of the 5-point discretization of the 2D Laplacian and is therefore of block tridiagonal structure. This example is completely artificial and its only purpose is to demonstrate the result of a symmetrization step. We probe it with e = (1, ..., 1)^T and ρ = 20 with a prescribed tridiagonal sparsity pattern and obtain an M.

This M is then symmetrized both to M̂ using the αM̄ + D approach (9) and to M̂_α from (13). As shown in Table 1, this ansatz yields symmetric preconditioners with nearly unchanged condition numbers, which can then be employed in iterative solvers for spd matrices, e.g. the pcg method. Furthermore, in this example M̂_α leads to a condition number improved by nearly a factor 1/4, as expected. Note that in the explicit approximation we use M^{-1} in the symmetrization via M̂_α (14). In all examples the simple and cheap symmetrization (M + M^T)/2 is sufficient to obtain a symmetric preconditioner of the same quality as the original nonsymmetric M. Hence, the costly computation of an optimal α and D in M̂ is often unnecessary.

Table 2. Condition numbers for no preconditioning I, the unsymmetrized preconditioner M, M̄, M̂, and M̂_α employed as approximate inverse preconditioners AM (columns: n, I, M, M̄, M̂, M̂_α).

For the symmetrization of an approximate inverse preconditioner, we compute M by SPAI applied to the 2D Laplacian with the static pattern of A. The condition numbers of the symmetrized versions are shown in Table 2. Note that in this second example we do not apply probing; we only want to display the symmetrization aspect. Again the cheap symmetrization M̄ is sufficient, and M̂_α leads to a reduction of nearly 1/4.

2.4 Explicit Factorized Approximation and Probing

The MSPAI probing approach does not only allow one to improve unfactorized methods, but also preconditioners which are computed in a factorized form such as ILU or incomplete Cholesky. Here, we assume an already computed factorized approximation A ≈ LU, e.g. given by ILU or the Gauss-Seidel method. The aim is to improve the factors with respect to the given probing conditions. Thereby, we keep one factor (L or U) fixed and recompute the other factor. That means, we set in (3) B_0 = A, C_0 = L and for the probing constraints g^T = e^T L, h^T = e^T A. In order to get an upper (respectively lower) triangular factor, we restrict the pattern for M, e.g. to an upper triangular Ũ. Altogether, we solve

    \min_{\tilde{U}} \left\| \begin{pmatrix} L \\ \rho e^T L \end{pmatrix} \tilde{U} - \begin{pmatrix} A \\ \rho e^T A \end{pmatrix} \right\|_F = \min_{\tilde{U}} \| W (L \tilde{U} - A) \|_F .        (15)
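For illustration, a dense version of this factor update (our own sketch; in practice one works with sparse matrices and the column-wise solver sketched in Section 2): keep L fixed and refit an upper triangular Ũ column by column against the stacked probing rows of (15).

import numpy as np

def improve_upper_factor(L, A, e, rho):
    """Return an upper triangular U_tilde minimizing ||[L; rho*e^T L] U - [A; rho*e^T A]||_F."""
    n = L.shape[0]
    C = np.vstack([L, rho * (e.T @ L)])              # stacked system matrix
    B = np.vstack([A, rho * (e.T @ A)])              # stacked right-hand side
    U = np.zeros((n, n))
    for k in range(n):
        Jk = np.arange(k + 1)                        # upper triangular pattern of column k
        U[Jk, k], *_ = np.linalg.lstsq(C[:, Jk], B[:, k], rcond=None)
    return U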

Having computed this improved version Ũ of U, we consider A ≈ L̃Ũ with Ũ fixed. We obtain the ansatz for the improved lower triangular factor L̃ through transposition:

    \min_{\tilde{L}} \left\| \begin{pmatrix} \tilde{U}^T \\ \rho e^T \tilde{U}^T \end{pmatrix} \tilde{L}^T - \begin{pmatrix} A^T \\ \rho e^T A^T \end{pmatrix} \right\|_F = \min_{\tilde{L}} \| W (\tilde{U}^T \tilde{L}^T - A^T) \|_F .

The probing constraints have to be modified as well, to g^T = e^T Ũ^T and h^T = e^T A^T. The results of these probing steps are the improved factorizations A ≈ LŨ and A ≈ L̃Ũ. Note that the initial factorization A ≈ LU can be produced by any preconditioning technique which yields factorized approximations of A.

As an important special case, we have to deal with the problem that A may not be given explicitly, but only via an approximation Ã. We assume that we can compute the exact values for e^T A and e^T A^T. Then we have to modify the minimization to the form

    \min_{\tilde{U}} \left\| \begin{pmatrix} L \\ \rho e^T L \end{pmatrix} \tilde{U} - \begin{pmatrix} \tilde{A} \\ \rho e^T A \end{pmatrix} \right\|_F ,        (16)

resp.

    \min_{\tilde{L}} \left\| \begin{pmatrix} \tilde{U}^T \\ \rho e^T \tilde{U}^T \end{pmatrix} \tilde{L}^T - \begin{pmatrix} \tilde{A}^T \\ \rho e^T A^T \end{pmatrix} \right\|_F .        (17)

2.5 Approximating a Factorization of A^{-1}

In contrast to the section above, we now start with a given sparse factorized approximation of the inverse, A^{-1} ≈ UL or UAL ≈ I. U and L may be the result of some algorithm like AINV or SPAI. Again, we add the probing conditions and set in formula (3) from above B_0 = I, C_0 = UA, g^T = e^T UA and h^T = e^T. This gives the following MSPAI probing problem:

    \min_{\tilde{L}} \left\| \begin{pmatrix} UA \\ \rho e^T UA \end{pmatrix} \tilde{L} - \begin{pmatrix} I \\ \rho e^T \end{pmatrix} \right\|_F = \min_{\tilde{L}} \| W (UA\tilde{L} - I) \|_F .        (18)

To ensure that L̃ has a lower triangular sparsity pattern, we restrict the pattern to that form. Again, the result is an improved new factor L̃. As in the explicit factorized case before, we can apply the same method once again on the transposed problem in order to obtain an improved Ũ, too.

For unknown A we again have to use an approximation Ã and arrive at

    \min_{\tilde{L}} \left\| \begin{pmatrix} U\tilde{A} \\ \rho e^T UA \end{pmatrix} \tilde{L} - \begin{pmatrix} I \\ \rho e^T \end{pmatrix} \right\|_F .        (19)

2.6 Symmetrization for Factorized Approximations

If the given A is symmetric positive definite, we can compute a symmetric factorization based on e.g. incomplete Cholesky or SPAI, resulting in a triangular matrix L and the preconditioned system A ≈ LL^T or L^T AL ≈ I. Then, with the methods of the previous two sections, we can add probing conditions and replace e.g. L^T by another factor Ũ =: L̃^T for fixed L. But then we lose symmetry, because we have two different factors L and L̃ for the preconditioner LŨ. To regain symmetry we can set

    L_α := L_α(L, L̃) := L + α (L̃ - L)        (20)

as a convex combination of both factors, α ∈ [0, 1]. An optimal α can be computed by substituting L_α into the original MSPAI minimization problem

    \min_\alpha \| W (L_\alpha L_\alpha^T - A) \|_F ,        (21)

resp.

    \min_\alpha \| W (L_\alpha^T A L_\alpha - I) \|_F        (22)

in the approximate inverse case. This leads to a polynomial of degree 4 in α of the form

    \| W (R + \alpha H + \alpha^2 K) \|_F^2 = tr(R W^T W R) + 2\alpha\, tr(H W^T W R)
        + \alpha^2\, tr(H W^T W H + 2 K W^T W R) + 2\alpha^3\, tr(K W^T W H) + \alpha^4\, tr(K W^T W K).        (23)

For (21) the matrices H, K, and R are given by

    R := L L^T - A,   K := (L̃ - L)(L̃ - L)^T,   H := (L̃ - L) L^T + L (L̃ - L)^T,

and for (22) by

    R := L^T A L - I,   K := (L̃ - L)^T A (L̃ - L),   H := (L̃ - L)^T A L + L^T A (L̃ - L).

We can compute the minima of these polynomials and choose for α the solution with minimum norm. Therefore, we only have to compute traces of products of sparse matrices. Note that if both L and L̃ satisfy the probing condition, then L_α satisfies the probing condition as well. The matrix K contains the difference L̃ - L to second order. This difference is supposed to be quite small by construction. So we can simplify (23) by dropping the coefficients which contain K. The result is a polynomial of second order in α:

    \| W (R + \alpha H + \alpha^2 K) \|_F^2 \approx tr(R W^T W R) + 2\alpha\, tr(H W^T W R) + \alpha^2\, tr(H W^T W H).        (24)

Equation (24) has one unique minimum which can be computed directly via the derivative with respect to α and the evaluation of one trace and one Frobenius norm. Due to the structure of W, the evaluation of the traces is quite cheap. For instance, tr(H W^T W R) can be computed exploiting W^T W = I + ρ^2 e e^T and properties of the trace:

    tr(H W^T W R) = tr(H R + ρ^2 H e e^T R) = tr(H R) + ρ^2 tr((e^T R)(H e)).

We compute both summands without evaluating the matrix-matrix products, using (12). Note that it is not advisable to add an additional diagonal correction in L, because this would result in the minimization of a function of degree 4 in the diagonal entries d_1, ..., d_n. But it is possible to add a diagonal correction by neglecting the probing part and choosing the diagonal correction only with respect to the matrix approximation problem, in order to generate all ones on the main diagonal positions. So after having computed L as above, we can choose a diagonal D by

    diag(D L^T A L D) = (1, ..., 1)        (25)

and replace L by L_D := LD.
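The approximate minimization (24) boils down to two traces. A dense sketch of this step (ours; for large sparse factors the traces would instead be evaluated with the sparse row-column products of (12) rather than by forming the matrix products):

import numpy as np

def optimal_alpha_quadratic(L, L_tilde, A, e, rho):
    """Minimizer of the quadratic model (24) for a single probing vector e (1-D array)."""
    Delta = L_tilde - L
    R = L @ L.T - A                                  # residual of the unmodified factor
    H = Delta @ L.T + L @ Delta.T
    def tr_w(X, Y):                                  # tr(X W^T W Y) with W^T W = I + rho^2 e e^T
        return np.trace(X @ Y) + rho**2 * (e @ Y @ X @ e)
    return -tr_w(H, R) / tr_w(H, H)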

For unknown A we again have to replace the weight matrix W and consider the minimization

    \min_\alpha \left\| \begin{pmatrix} L_\alpha L_\alpha^T - \tilde{A} \\ \rho (e^T L_\alpha L_\alpha^T - e^T A) \end{pmatrix} \right\|_F   or   \min_\alpha \left\| \begin{pmatrix} L_\alpha^T \tilde{A} L_\alpha - I \\ \rho (e^T L_\alpha^T A L_\alpha - e^T) \end{pmatrix} \right\|_F

in order to determine the optimal value for α.

So let us assume that, starting with a factor L, we have generated an approximation Ũ by probing. With fixed Ũ we can compute a new factor L̃ by the transposed minimization. Then we basically have the following choices for symmetrizing one or two approximate factors:

- take the factors Ũ and Ũ^T, e.g. A ≈ Ũ^T Ũ or Ũ A Ũ^T ≈ I;
- take the symmetrization relative to L and Ũ, either by exact minimization based on the polynomial of degree 4, denoted by L_{α,4}(L, Ũ^T), or by approximate minimization, denoted by L_{α,2}(L, Ũ^T);
- use L̃ and L̃^T as factorization;
- use L_{α,m}(L̃, Ũ^T) with exact (m = 4) or approximate (m = 2) minimization.

2.6.1 Numerical Examples. The MSPAI symmetrization techniques described in the previous section yield factorized preconditioners where symmetry is regained. The following examples demonstrate that the symmetrization does not deteriorate the condition number of the system.

Table 3. Condition numbers for no preconditioning I, L from IC(0), Ũ obtained by probing, Ũ employed together with L, and for L_α = L + α(Ũ^T - L) with the approximated and the exact polynomial, as explicit factorized preconditioners (columns: n, I, L, (L, Ũ), L_{α,2}(L, Ũ^T), L_{α,4}(L, Ũ^T)).

Table 4. Condition numbers for no preconditioning I, SPAI L, probed SPAI L̃ employed together with L, and for L_α = L + α(L̃ - L) as approximate inverse preconditioners (columns: n, I, L^T AL, L̃^T AL, L_{α,4}^T A L_{α,4}).

As an example, we take again a 2D Laplacian for different dimensions n. Then we compute an incomplete Cholesky factor for the explicit case and a static SPAI with the pattern of A for inverse preconditioning. Then we apply probing and use the different symmetrization approaches in order to obtain a preconditioner which we can use in iterative solvers for symmetric problems. For the explicit approximation, we choose ρ = 50 and e = (1, ..., 1)^T as probing parameters and combine the incomplete Cholesky factor with the one obtained from probing.

To do so, we apply the convex symmetrization (20) using the exact and the approximate polynomials. Table 3 shows that the condition numbers are preserved throughout the symmetrization. To save costs we should use only one step of probing and possibly one additional step of the approximate minimization L_{α,2}. For the inverse case with the SPAI, we set ρ = 7 and take the eigenvector corresponding to the smallest eigenvalue as probing vector. Because even quite rough approximations are sufficient here, the computation of this probing vector is not too costly. The resulting condition numbers are shown in Table 4. In other computations not reported here, the condition numbers remained essentially unchanged when the symmetrized versions were used.

2.7 Application to Schur Complements

The MSPAI probing approach is especially interesting for preconditioning Schur complements. Therefore, we consider the Frobenius norm minimization method in this case more deeply. Given a matrix

    H = \begin{pmatrix} A & B \\ C & D \end{pmatrix}

we consider preconditioning the Schur complement S_D = D - C A^{-1} B. In a first step, we need approximations of S_D which avoid the explicit computation of this matrix. To this aim we can use different methods, e.g. factorized SPAI or SPAI for approximating A^{-1} by some M̃ and computing the sparse approximation S̃_D = D - C M̃ B, or using SPAI with a target matrix for a sparse approximation of A^{-1}B via \min_M \|AM - B\|_F, which again leads to a sparse approximation of the Schur complement. Based on S̃_D we can use the probing approach to define a sparse approximation of S_D or S_D^{-1} which is improved with respect to a collection of probing vectors relative to the exact Schur complement.
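A sketch of this first strategy (ours, assuming SciPy; the exact probing information e^T S_D is gathered with a few solves involving A, which is affordable for a small number of probing vectors): it returns the sparse approximation S̃_D together with the stacked matrices of (3) for a subsequent explicit MSPAI probing step.

import scipy.sparse as sp
import scipy.sparse.linalg as spla

def schur_probing_setup(A, B, C, D, M, e, rho):
    """H = [[A, B], [C, D]]; M is a sparse approximation of A^{-1} (e.g. from SPAI)."""
    S_tilde = sp.csc_matrix(D - C @ (M @ B))         # sparse approximation of S_D
    X = spla.spsolve(sp.csc_matrix(A.T), C.T @ e)    # X = A^{-T} C^T e, one solve per probing vector
    h = (D.T @ e).T - (B.T @ X).T                    # exact probing rows e^T S_D
    C_stack = sp.vstack([sp.identity(D.shape[0]), rho * sp.csc_matrix(e.T)]).tocsc()
    B_stack = sp.vstack([S_tilde, rho * sp.csc_matrix(h)]).tocsc()
    return S_tilde, C_stack, B_stack

The stacked matrices are then treated exactly like C and B in the column-wise minimization of (3), with the desired band pattern for the preconditioner.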

A second approach can be derived by observing that the lower right block of the inverse of the given matrix H is the inverse of the Schur complement S_D. It holds

    H^{-1} = \begin{pmatrix} S_A^{-1} & -A^{-1} B S_D^{-1} \\ -D^{-1} C S_A^{-1} & S_D^{-1} \end{pmatrix}

(with S_A := A - B D^{-1} C). Therefore, we can modify the general probing approach to

    \min_{M_B, M_D} \left\| \begin{pmatrix} A & B \\ C & D \\ 0 & \rho e^T S_D \end{pmatrix} \begin{pmatrix} M_B \\ M_D \end{pmatrix} - \begin{pmatrix} 0 \\ I \\ \rho e^T \end{pmatrix} \right\|_F .        (26)

Then the computed M_D gives an approximation to S_D^{-1}, because the last columns of H^{-1} are approximated by M_B and M_D. We can formulate another method for computing M_D based on the weight matrix W. By using the equation

    \begin{pmatrix} \rho u^T & \rho e^T \end{pmatrix} \begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} -A^{-1} B S_D^{-1} \\ S_D^{-1} \end{pmatrix} = -\rho e^T C A^{-1} B S_D^{-1} + \rho e^T D S_D^{-1} = \rho e^T,

we arrive at the probing minimization

    \min_{M_B, M_D} \left\| \begin{pmatrix} I \\ \rho u^T \;\; \rho e^T \end{pmatrix} \left[ \begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} M_B \\ M_D \end{pmatrix} - \begin{pmatrix} 0 \\ I \end{pmatrix} \right] \right\|_F ,        (27)

again with M_D as approximate inverse of the Schur complement. Note that for u^T = -e^T C A^{-1}, (26) and (27) are identical. Hence, problem (27) is more general, but in the following we will only consider (26).

2.8 Choice of the Probing Vectors

The choice of meaningful probing vectors is essential. This question is closely related to the origin of the given problem, and further a priori knowledge may be very helpful. Possible choices considered in this paper are:

kp = 0: For given k we define e_m(j) = 1 for j = m, k + m, 2k + m, ..., and m = 1, ..., k. Then we normalize the vectors e_m to have length 1. For k = 1 this leads to the probing vector e = (1, 1, ..., 1)^T / \sqrt{n}, and for k = 2 to e = (e_1, e_2) / \sqrt{n/2} with e_1 = (1, 0, 1, 0, 1, 0, ...)^T and e_2 = (0, 1, 0, 1, 0, 1, ...)^T. This choice is motivated by PDE examples and MILU, and by the usual probing vector approach.

kp = 1: k vectors from a set of orthogonal basis vectors, e.g.

    \left( \sqrt{\tfrac{2}{n+1}}\, \sin\tfrac{\pi j m}{n+1} \right)_{j=1,...,n}   for m = 1, ..., k.        (28)

This is motivated by the close relation between the Sine Transform and many examples from PDEs or image restoration. Similarly, for 2D problems we can define the Kronecker products of these 1D vectors.

kp = 2: computing k eigenvector approximations relative to the largest and/or smallest eigenvalues (or singular vectors). This is some kind of black-box approach for the case that no a priori knowledge is at hand. In this case, we have additional costs for computing the eigenvector approximations. But usually very rough and cheap approximations are sufficient. In the following we choose eigenvectors for the smallest eigenvalues only.

The weight ρ used in the probing Frobenius norm minimization is usually chosen ρ > 1. In cases where it is well known that the standard probing approach works well, large values of ρ lead to good preconditioners. So the choice of ρ indicates whether we consider the given problem as a regularized Least Squares problem related to the given probing vectors (large ρ) or whether the minimization is used for a slight modification of a given preconditioner (ρ close to 1).

Sometimes it is impossible to satisfy the probing condition sufficiently. For the unfactorized or factorized approximate inverse approach, the probing vector e^T = (1, 1, ..., 1)^T leads to a sparse vector g^T = e^T A, e.g. for the discretization of the Laplace operator or other elliptic PDE problems. Therefore, for a sparse pattern of M, the vector g^T M will also be sparse and cannot be a good approximation to the given dense vector e^T. In such cases the above approach is only efficient for approximating A itself. For A^{-1} one has to choose ρ carefully. Otherwise, a slight improvement in the probing condition achieved by choosing a much larger ρ will spoil the matrix approximation and result in a bad preconditioner.
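The three families of probing vectors can be generated as follows (a small helper of ours, not from the paper; for kp = 2 the matrix is assumed symmetric here and only rough eigenvector approximations are needed, so a large tolerance is acceptable):

import numpy as np
import scipy.sparse.linalg as spla

def probing_vectors(n, k, kp, A=None):
    if kp == 0:                                      # normalized period-k 0/1 vectors
        E = np.zeros((n, k))
        for m in range(k):
            E[m::k, m] = 1.0
        return E / np.linalg.norm(E, axis=0)
    if kp == 1:                                      # sine basis vectors (28)
        j = np.arange(1, n + 1)
        E = np.column_stack([np.sin(np.pi * j * m / (n + 1)) for m in range(1, k + 1)])
        return np.sqrt(2.0 / (n + 1)) * E
    # kp = 2: rough approximations of the eigenvectors for the smallest eigenvalues
    _, V = spla.eigsh(A, k=k, which='SM', tol=1e-2)
    return V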

3 Numerical Examples for MSPAI Probing

Now we want to demonstrate our methods in several applications. All the following numerical examples were computed in MATLAB. If iteration counts are given, we used the MATLAB pcg with zero starting vector, random right-hand sides, and a relative residual reduction of 10^{-6} as stopping criterion.

3.1 MSPAI Probing for Domain Decomposition Methods

Here, we consider the 5-point discretization of the 2D elliptic problem u_{xx} + ε u_{yy} on a rectangular grid; usually we consider the isotropic problem with ε = 1. We partition the matrix such that it has the dissection form

    \begin{pmatrix} A_1 & & & F_1 \\ & \ddots & & \vdots \\ & & A_m & F_m \\ G_1 & \cdots & G_m & A_{m+1} \end{pmatrix},

e.g. by a domain decomposition method with m domains. To reduce the linear system in A to the smaller problems A_1, ..., A_m, we need the Schur complement

    S = A_{m+1} - G_1 A_1^{-1} F_1 - ... - G_m A_m^{-1} F_m .

Table 5. Condition numbers for domain decomposition Laplace with explicit approximate probing for k = 3 and kp = 0 (columns: m, dim(A), dim(S), cond(SM^{-1}), cond(SM^{-1}) for ρ = 0, ρ, cond(SM_ρ^{-1})).

Here, S will in general be a dense matrix. To avoid the explicit computation of S we seek sparse approximations S̃. In a first step we can determine approximations Ã_{i,inv} of A_i^{-1}, e.g. by SPAI, and then compute

    S̃ = A_{m+1} - G_1 Ã_{1,inv} F_1 - ... - G_m Ã_{m,inv} F_m .

Now we try to modify this approximation with respect to some probing vectors in order to improve the condition of S̃^{-1} S. First we use the explicit approximation from Section 2.2. In Tables 5 and 6 we compare the resulting explicit preconditioners for different choices of ρ and of the probing vectors. Again, we denote by kp = 0, 1, 2 the method of choosing the probing vectors as stated in Section 2.8. In a first step we approximate the matrices A_i^{-1} by sparse approximate inverses M_i with the pattern of A_i. Then, we probe the resulting S̃ keeping the sparsity pattern unchanged, which leads to a tridiagonal pattern in the preconditioner. In a second group of examples we allow more entries in M_i and S̃, which leads to a pentadiagonal pattern. In Table 7 we compare the iteration numbers for pcg with the symmetrized preconditioner M̄ = M + M^T. Tables 5-7 show that the preconditioner is improved by probing for a wide range of choices of probing vectors and ρ. For these examples the best results were achieved for kp = 0, k = 3, and ρ ≈ 30.

Unfortunately, for the probing with the approximate inverse (Table 8) the probing vectors transform into a nearly sparse e^T S̃. Hence, also for large ρ the difference e^T(S̃M - I) cannot be reduced significantly; furthermore, for large ρ the Frobenius norm of the matrix approximation S̃M - I will get large and the preconditioner gets worse.

Table 6. Condition numbers for domain decomposition Laplace with explicit approximate probing, for n = 10, m = 8, different anisotropies ε, and different sparsity patterns (tridiagonal and pentadiagonal); columns: sparsity, kp, k, ρ, ε, cond(SM_ρ^{-1}).

Table 7. Iteration count for domain decomposition Laplace (ε = 1) with explicit approximate probing for kp = 0, k = 3, ρ = 20 or ρ = 30 (columns: m, #it for the two values of ρ).

Nevertheless, we can observe a slight improvement of the condition number also for the approximate inverse preconditioner by choosing k, kp, and especially ρ very carefully.

Table 8. Condition numbers for domain decomposition Laplace with approximate inverse probing, three probing vectors kp = 0, minimization relative to the weighted Frobenius norm (columns: m, cond(S), cond(SM) for ρ = 0, ρ, cond(SM_ρ)).

The next three tables 9, 10, and 11 display the same behaviour for the explicit factorized approximations and the different symmetrizations. This shows that e.g. the probing vectors for kp = 1 and kp = 2 are not so efficient for this problem. Nevertheless, probing and MSPAI symmetrization lead to improved condition numbers and faster convergence also in the factorized form.

Table 9. Condition numbers for domain decomposition Laplace with explicit approximate factorized probing, one probing vector kp = 0, and different symmetrizations (columns: m, S̃M_ILU, ρ, (L, Ũ), (Ũ^T, Ũ), (L̃, Ũ), (L̃, L̃^T), L_{α,4}(L̃, Ũ^T)).

Table 10. Condition numbers for domain decomposition Laplace with approximate factorized probing for m = 8 and different choices (columns: kp, k, ρ, (L, Ũ), (Ũ^T, Ũ), L_{α,4}(L, Ũ^T), (L̃, Ũ), (L̃, L̃^T), L_{α,4}(L̃, Ũ^T)).

Table 11. Condition numbers for domain decomposition Laplace with explicit approximate factorized probing with optimal symmetrization (columns: m, ρ, cond).

As a last example we consider the approximation of the Schur complement described by (26) in Section 2.7. Following Table 12, the condition number is strongly deteriorated in the case ρ = 0, when we do not include probing. Only the choice kp = 0 and large ρ ≥ 100 leads to a reduction of the condition number. Hence, this approach is only applicable when combined with probing.

Table 12. Condition numbers cond(S_D M_D) for Schur complement preconditioning with probing vectors based on (26) from Section 2.7, cond(S_D) = … (columns: ρ, kp = 0 with k = 2, kp = 2 with k = 2).

3.2 MSPAI Probing for Stokes Problems

We consider the four Stokes examples from IFISS [18] by Silvester, Elman, and Ramage [11]. The stabilized matrices are of the form

    \begin{pmatrix} A_{st} & B_{st}^T \\ B_{st} & -\beta C \end{pmatrix}

with Schur complement S_s := βC + B_{st} A_{st}^{-1} B_{st}^T; we also consider the unstabilized problems

    \begin{pmatrix} A & B^T \\ B & 0 \end{pmatrix}

with Schur complement S_u = B A^{-1} B^T = B (A\B^T). In a first step we again approximate A^{-1} by a sparse matrix with the same pattern as A; then by MSPAI probing with this pattern we get the preconditioner for the Schur complement. In the unfactorized case we consider the trivial symmetrization via M̄. For factorized preconditioners we also apply the symmetrization L_{α,4}, possibly with an additional diagonal scaling D to obtain diagonal elements equal to 1 in the preconditioned system (25).

Example 1: Channel domain with natural boundary conditions on a grid, Q1-P0, stabilization parameter β = 1/4, and uniform pattern; the condition number of S_s is …
Example 2: Flow over a backward facing step with natural outflow boundary, Q1-P0, β = 1/4, cond(S_s) = …
Example 3: Lid driven cavity, regularized, on a stretched grid, Q1-P0, β = 1/4, exponential streamlines, with singular S but essential condition number λ_max(S_s)/λ_min(S_s) = …
Example 4: Colliding flow on a uniform grid, Q1-P0, β = 1/4, uniform streamlines, with singular S but essential condition number λ_max(S_s)/λ_min(S_s) = 6.9.

For the stabilized and the unstabilized case we display the iteration counts for pcg with factorized and unfactorized preconditioners and simple symmetrizations in Tables 13 resp. 14. Obviously, the MSPAI probing with different probing vectors and appropriate ρ always gives better results.

Table 13. Iteration count for the unstabilized Stokes examples, unfactorized (with kp = 0, k = 1, ρ = 5) and factorized (with kp = 1, k = 2, ρ = 3). L_D denotes the preconditioner L_α diagonally transformed as described in (25). For each problem, counts are given without preconditioning and for the explicit and inverse variants with the symmetrizations (L̃, L̃^T), L_α(L, L̃^T), and L_D.

Table 14. Iteration count for the stabilized Stokes examples, unfactorized (with kp = 1, k = 3, ρ = 3) and factorized (with kp = 1, k = 3, ρ = 3); Problem 4 with ρ = 1.5 in the unfactorized case.

Following [11], for the above examples one can choose a diagonal preconditioner Q that is given by the underlying Finite Element method. In nearly all examples presented here this preconditioner is only a multiple of the identity matrix and therefore does not lead to a comparable improvement of the iteration count.

3.3 MSPAI Probing for Dense Matrices

In many applications we have to deal with dense or nearly dense matrices A. To apply a preconditioner it is helpful to derive a sparse approximation Ã of A. In a first step we can sparsify the dense matrix by deleting all entries outside an allowed pattern or all small entries. This gives a first approximation Ã. Here, we consider two examples.

In a first setting we choose a discretization of the 2D Laplacian with a quite thick bandwidth that should be approximated by a sparser matrix A_p such that A_p^{-1} A has a bounded condition number. In Figure 1 we display the given thick pattern and the sparse pattern that should be used for the approximations Ã and A_p. Again, we compute A_p by explicit probing on Ã and A. In Table 15 we display the resulting condition numbers. For large ρ we see that the approximation based on probing is much better than the matrix Ã that is obtained by reducing A to the allowed pattern by simply deleting nonzero entries.

Figure 1. Sparsity pattern of the original matrix A (left) and of the sparse approximation A_p (right).

Table 15. Condition numbers for sparse 2D Laplacian approximations A A_p^{-1} for different ρ, kp = 0, k = 1 (columns: n, A, Ã, and several values of ρ).

In a second example we consider a dense Toeplitz matrix with first row given by a(1,1) = π/2, a(1,j) = 2/(π (j-1)^2), j = 2, 4, .... This matrix is related to the generating function f(x) = |x - π|, x ∈ [0, 2π] (see [1]). First we replace A by a tridiagonal matrix Ã by deleting all entries outside the bandwidth. In a second step, by explicit probing, we compute the improved tridiagonal approximation A_p. For n = 1000 the condition number of A is …, and the preconditioned system Ã^{-1}A has condition …. As probing vector we additionally allow the vector e_± := (1, -1, 1, -1, ...)^T. Table 16 shows that probing with e_± and kp = 2 leads to a significantly improved condition number. The choices kp = 1 and kp = 0, k = 1 fail, while kp = 0, k = 2 again gives a better preconditioner. This is caused by the fact that e_± is contained in the probing subspace related to kp = 0, k = 2. Note that this example is dense, not related to a PDE problem, and allows a new probing vector.

Preconditioning Techniques for Large Linear Systems Part III: General-Purpose Algebraic Preconditioners Preconditioning Techniques for Large Linear Systems Part III: General-Purpose Algebraic Preconditioners Michele Benzi Department of Mathematics and Computer Science Emory University Atlanta, Georgia, USA

More information

Iterative Methods. Splitting Methods

Iterative Methods. Splitting Methods Iterative Methods Splitting Methods 1 Direct Methods Solving Ax = b using direct methods. Gaussian elimination (using LU decomposition) Variants of LU, including Crout and Doolittle Other decomposition

More information

QR-decomposition. The QR-decomposition of an n k matrix A, k n, is an n n unitary matrix Q and an n k upper triangular matrix R for which A = QR

QR-decomposition. The QR-decomposition of an n k matrix A, k n, is an n n unitary matrix Q and an n k upper triangular matrix R for which A = QR QR-decomposition The QR-decomposition of an n k matrix A, k n, is an n n unitary matrix Q and an n k upper triangular matrix R for which In Matlab A = QR [Q,R]=qr(A); Note. The QR-decomposition is unique

More information

Department of Computer Science, University of Illinois at Urbana-Champaign

Department of Computer Science, University of Illinois at Urbana-Champaign Department of Computer Science, University of Illinois at Urbana-Champaign Probing for Schur Complements and Preconditioning Generalized Saddle-Point Problems Eric de Sturler, sturler@cs.uiuc.edu, http://www-faculty.cs.uiuc.edu/~sturler

More information

Incomplete Cholesky preconditioners that exploit the low-rank property

Incomplete Cholesky preconditioners that exploit the low-rank property anapov@ulb.ac.be ; http://homepages.ulb.ac.be/ anapov/ 1 / 35 Incomplete Cholesky preconditioners that exploit the low-rank property (theory and practice) Artem Napov Service de Métrologie Nucléaire, Université

More information

A SPARSE APPROXIMATE INVERSE PRECONDITIONER FOR NONSYMMETRIC LINEAR SYSTEMS

A SPARSE APPROXIMATE INVERSE PRECONDITIONER FOR NONSYMMETRIC LINEAR SYSTEMS INTERNATIONAL JOURNAL OF NUMERICAL ANALYSIS AND MODELING, SERIES B Volume 5, Number 1-2, Pages 21 30 c 2014 Institute for Scientific Computing and Information A SPARSE APPROXIMATE INVERSE PRECONDITIONER

More information

Math 471 (Numerical methods) Chapter 3 (second half). System of equations

Math 471 (Numerical methods) Chapter 3 (second half). System of equations Math 47 (Numerical methods) Chapter 3 (second half). System of equations Overlap 3.5 3.8 of Bradie 3.5 LU factorization w/o pivoting. Motivation: ( ) A I Gaussian Elimination (U L ) where U is upper triangular

More information

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.4: Krylov Subspace Methods 2 / 51 Krylov Subspace Methods In this chapter we give

More information

Applied Linear Algebra

Applied Linear Algebra Applied Linear Algebra Peter J. Olver School of Mathematics University of Minnesota Minneapolis, MN 55455 olver@math.umn.edu http://www.math.umn.edu/ olver Chehrzad Shakiban Department of Mathematics University

More information

APPLIED NUMERICAL LINEAR ALGEBRA

APPLIED NUMERICAL LINEAR ALGEBRA APPLIED NUMERICAL LINEAR ALGEBRA James W. Demmel University of California Berkeley, California Society for Industrial and Applied Mathematics Philadelphia Contents Preface 1 Introduction 1 1.1 Basic Notation

More information

Fine-Grained Parallel Algorithms for Incomplete Factorization Preconditioning

Fine-Grained Parallel Algorithms for Incomplete Factorization Preconditioning Fine-Grained Parallel Algorithms for Incomplete Factorization Preconditioning Edmond Chow School of Computational Science and Engineering Georgia Institute of Technology, USA SPPEXA Symposium TU München,

More information

Lab 1: Iterative Methods for Solving Linear Systems

Lab 1: Iterative Methods for Solving Linear Systems Lab 1: Iterative Methods for Solving Linear Systems January 22, 2017 Introduction Many real world applications require the solution to very large and sparse linear systems where direct methods such as

More information

Applied Numerical Linear Algebra. Lecture 8

Applied Numerical Linear Algebra. Lecture 8 Applied Numerical Linear Algebra. Lecture 8 1/ 45 Perturbation Theory for the Least Squares Problem When A is not square, we define its condition number with respect to the 2-norm to be k 2 (A) σ max (A)/σ

More information

Conjugate Gradient Method

Conjugate Gradient Method Conjugate Gradient Method direct and indirect methods positive definite linear systems Krylov sequence spectral analysis of Krylov sequence preconditioning Prof. S. Boyd, EE364b, Stanford University Three

More information

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A.

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A. AMSC/CMSC 661 Scientific Computing II Spring 2005 Solution of Sparse Linear Systems Part 2: Iterative methods Dianne P. O Leary c 2005 Solving Sparse Linear Systems: Iterative methods The plan: Iterative

More information

FEM and sparse linear system solving

FEM and sparse linear system solving FEM & sparse linear system solving, Lecture 9, Nov 19, 2017 1/36 Lecture 9, Nov 17, 2017: Krylov space methods http://people.inf.ethz.ch/arbenz/fem17 Peter Arbenz Computer Science Department, ETH Zürich

More information

Geometric Modeling Summer Semester 2010 Mathematical Tools (1)

Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Recap: Linear Algebra Today... Topics: Mathematical Background Linear algebra Analysis & differential geometry Numerical techniques Geometric

More information

Lecture 18 Classical Iterative Methods

Lecture 18 Classical Iterative Methods Lecture 18 Classical Iterative Methods MIT 18.335J / 6.337J Introduction to Numerical Methods Per-Olof Persson November 14, 2006 1 Iterative Methods for Linear Systems Direct methods for solving Ax = b,

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

Statistical Geometry Processing Winter Semester 2011/2012

Statistical Geometry Processing Winter Semester 2011/2012 Statistical Geometry Processing Winter Semester 2011/2012 Linear Algebra, Function Spaces & Inverse Problems Vector and Function Spaces 3 Vectors vectors are arrows in space classically: 2 or 3 dim. Euclidian

More information

Course Notes: Week 1

Course Notes: Week 1 Course Notes: Week 1 Math 270C: Applied Numerical Linear Algebra 1 Lecture 1: Introduction (3/28/11) We will focus on iterative methods for solving linear systems of equations (and some discussion of eigenvalues

More information

Conjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294)

Conjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294) Conjugate gradient method Descent method Hestenes, Stiefel 1952 For A N N SPD In exact arithmetic, solves in N steps In real arithmetic No guaranteed stopping Often converges in many fewer than N steps

More information

Iterative Linear Solvers

Iterative Linear Solvers Chapter 10 Iterative Linear Solvers In the previous two chapters, we developed strategies for solving a new class of problems involving minimizing a function f ( x) with or without constraints on x. In

More information

LINEAR SYSTEMS (11) Intensive Computation

LINEAR SYSTEMS (11) Intensive Computation LINEAR SYSTEMS () Intensive Computation 27-8 prof. Annalisa Massini Viviana Arrigoni EXACT METHODS:. GAUSSIAN ELIMINATION. 2. CHOLESKY DECOMPOSITION. ITERATIVE METHODS:. JACOBI. 2. GAUSS-SEIDEL 2 CHOLESKY

More information

Topics. The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems

Topics. The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems Topics The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems What about non-spd systems? Methods requiring small history Methods requiring large history Summary of solvers 1 / 52 Conjugate

More information

M.A. Botchev. September 5, 2014

M.A. Botchev. September 5, 2014 Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014 A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods. Plan and exercises. M.A. Botchev

More information

FINE-GRAINED PARALLEL INCOMPLETE LU FACTORIZATION

FINE-GRAINED PARALLEL INCOMPLETE LU FACTORIZATION FINE-GRAINED PARALLEL INCOMPLETE LU FACTORIZATION EDMOND CHOW AND AFTAB PATEL Abstract. This paper presents a new fine-grained parallel algorithm for computing an incomplete LU factorization. All nonzeros

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 24: Preconditioning and Multigrid Solver Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 5 Preconditioning Motivation:

More information

Fast Iterative Solution of Saddle Point Problems

Fast Iterative Solution of Saddle Point Problems Michele Benzi Department of Mathematics and Computer Science Emory University Atlanta, GA Acknowledgments NSF (Computational Mathematics) Maxim Olshanskii (Mech-Math, Moscow State U.) Zhen Wang (PhD student,

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 3: Iterative Methods PD

More information

Linear algebra issues in Interior Point methods for bound-constrained least-squares problems

Linear algebra issues in Interior Point methods for bound-constrained least-squares problems Linear algebra issues in Interior Point methods for bound-constrained least-squares problems Stefania Bellavia Dipartimento di Energetica S. Stecco Università degli Studi di Firenze Joint work with Jacek

More information

Efficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization

Efficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization Efficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization Timo Heister, Texas A&M University 2013-02-28 SIAM CSE 2 Setting Stationary, incompressible flow problems

More information

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection Eigenvalue Problems Last Time Social Network Graphs Betweenness Girvan-Newman Algorithm Graph Laplacian Spectral Bisection λ 2, w 2 Today Small deviation into eigenvalue problems Formulation Standard eigenvalue

More information

Scientific Computing WS 2018/2019. Lecture 9. Jürgen Fuhrmann Lecture 9 Slide 1

Scientific Computing WS 2018/2019. Lecture 9. Jürgen Fuhrmann Lecture 9 Slide 1 Scientific Computing WS 2018/2019 Lecture 9 Jürgen Fuhrmann juergen.fuhrmann@wias-berlin.de Lecture 9 Slide 1 Lecture 9 Slide 2 Simple iteration with preconditioning Idea: Aû = b iterative scheme û = û

More information

An advanced ILU preconditioner for the incompressible Navier-Stokes equations

An advanced ILU preconditioner for the incompressible Navier-Stokes equations An advanced ILU preconditioner for the incompressible Navier-Stokes equations M. ur Rehman C. Vuik A. Segal Delft Institute of Applied Mathematics, TU delft The Netherlands Computational Methods with Applications,

More information

CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares

CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares Robert Bridson October 29, 2008 1 Hessian Problems in Newton Last time we fixed one of plain Newton s problems by introducing line search

More information

Computational Methods. Eigenvalues and Singular Values

Computational Methods. Eigenvalues and Singular Values Computational Methods Eigenvalues and Singular Values Manfred Huber 2010 1 Eigenvalues and Singular Values Eigenvalues and singular values describe important aspects of transformations and of data relations

More information

Solving Ax = b, an overview. Program

Solving Ax = b, an overview. Program Numerical Linear Algebra Improving iterative solvers: preconditioning, deflation, numerical software and parallelisation Gerard Sleijpen and Martin van Gijzen November 29, 27 Solving Ax = b, an overview

More information

Master Thesis Literature Study Presentation

Master Thesis Literature Study Presentation Master Thesis Literature Study Presentation Delft University of Technology The Faculty of Electrical Engineering, Mathematics and Computer Science January 29, 2010 Plaxis Introduction Plaxis Finite Element

More information

4.6 Iterative Solvers for Linear Systems

4.6 Iterative Solvers for Linear Systems 4.6 Iterative Solvers for Linear Systems Why use iterative methods? Virtually all direct methods for solving Ax = b require O(n 3 ) floating point operations. In practical applications the matrix A often

More information

FEM and Sparse Linear System Solving

FEM and Sparse Linear System Solving FEM & sparse system solving, Lecture 7, Nov 3, 2017 1/46 Lecture 7, Nov 3, 2015: Introduction to Iterative Solvers: Stationary Methods http://people.inf.ethz.ch/arbenz/fem16 Peter Arbenz Computer Science

More information

FINE-GRAINED PARALLEL INCOMPLETE LU FACTORIZATION

FINE-GRAINED PARALLEL INCOMPLETE LU FACTORIZATION FINE-GRAINED PARALLEL INCOMPLETE LU FACTORIZATION EDMOND CHOW AND AFTAB PATEL Abstract. This paper presents a new fine-grained parallel algorithm for computing an incomplete LU factorization. All nonzeros

More information

7.4 The Saddle Point Stokes Problem

7.4 The Saddle Point Stokes Problem 346 CHAPTER 7. APPLIED FOURIER ANALYSIS 7.4 The Saddle Point Stokes Problem So far the matrix C has been diagonal no trouble to invert. This section jumps to a fluid flow problem that is still linear (simpler

More information

ITERATIVE METHODS BASED ON KRYLOV SUBSPACES

ITERATIVE METHODS BASED ON KRYLOV SUBSPACES ITERATIVE METHODS BASED ON KRYLOV SUBSPACES LONG CHEN We shall present iterative methods for solving linear algebraic equation Au = b based on Krylov subspaces We derive conjugate gradient (CG) method

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 1: Course Overview; Matrix Multiplication Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

A Review of Preconditioning Techniques for Steady Incompressible Flow

A Review of Preconditioning Techniques for Steady Incompressible Flow Zeist 2009 p. 1/43 A Review of Preconditioning Techniques for Steady Incompressible Flow David Silvester School of Mathematics University of Manchester Zeist 2009 p. 2/43 PDEs Review : 1984 2005 Update

More information

Scientific Computing

Scientific Computing Scientific Computing Direct solution methods Martin van Gijzen Delft University of Technology October 3, 2018 1 Program October 3 Matrix norms LU decomposition Basic algorithm Cost Stability Pivoting Pivoting

More information

Numerical Methods I Solving Square Linear Systems: GEM and LU factorization

Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 18th,

More information

Numerical Linear Algebra And Its Applications

Numerical Linear Algebra And Its Applications Numerical Linear Algebra And Its Applications Xiao-Qing JIN 1 Yi-Min WEI 2 August 29, 2008 1 Department of Mathematics, University of Macau, Macau, P. R. China. 2 Department of Mathematics, Fudan University,

More information

6. Iterative Methods for Linear Systems. The stepwise approach to the solution...

6. Iterative Methods for Linear Systems. The stepwise approach to the solution... 6 Iterative Methods for Linear Systems The stepwise approach to the solution Miriam Mehl: 6 Iterative Methods for Linear Systems The stepwise approach to the solution, January 18, 2013 1 61 Large Sparse

More information

Solving Sparse Linear Systems: Iterative methods

Solving Sparse Linear Systems: Iterative methods Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccs Lecture Notes for Unit VII Sparse Matrix Computations Part 2: Iterative Methods Dianne P. O Leary c 2008,2010

More information

Solving Sparse Linear Systems: Iterative methods

Solving Sparse Linear Systems: Iterative methods Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccswebpage Lecture Notes for Unit VII Sparse Matrix Computations Part 2: Iterative Methods Dianne P. O Leary

More information

Numerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization /36-725

Numerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization /36-725 Numerical Linear Algebra Primer Ryan Tibshirani Convex Optimization 10-725/36-725 Last time: proximal gradient descent Consider the problem min g(x) + h(x) with g, h convex, g differentiable, and h simple

More information

G1110 & 852G1 Numerical Linear Algebra

G1110 & 852G1 Numerical Linear Algebra The University of Sussex Department of Mathematics G & 85G Numerical Linear Algebra Lecture Notes Autumn Term Kerstin Hesse (w aw S w a w w (w aw H(wa = (w aw + w Figure : Geometric explanation of the

More information

Lecture 3: QR-Factorization

Lecture 3: QR-Factorization Lecture 3: QR-Factorization This lecture introduces the Gram Schmidt orthonormalization process and the associated QR-factorization of matrices It also outlines some applications of this factorization

More information

Iterative Solution methods

Iterative Solution methods p. 1/28 TDB NLA Parallel Algorithms for Scientific Computing Iterative Solution methods p. 2/28 TDB NLA Parallel Algorithms for Scientific Computing Basic Iterative Solution methods The ideas to use iterative

More information

EIGENVALUE PROBLEMS. EIGENVALUE PROBLEMS p. 1/4

EIGENVALUE PROBLEMS. EIGENVALUE PROBLEMS p. 1/4 EIGENVALUE PROBLEMS EIGENVALUE PROBLEMS p. 1/4 EIGENVALUE PROBLEMS p. 2/4 Eigenvalues and eigenvectors Let A C n n. Suppose Ax = λx, x 0, then x is a (right) eigenvector of A, corresponding to the eigenvalue

More information

Indefinite and physics-based preconditioning

Indefinite and physics-based preconditioning Indefinite and physics-based preconditioning Jed Brown VAW, ETH Zürich 2009-01-29 Newton iteration Standard form of a nonlinear system F (u) 0 Iteration Solve: Update: J(ũ)u F (ũ) ũ + ũ + u Example (p-bratu)

More information

MAT 610: Numerical Linear Algebra. James V. Lambers

MAT 610: Numerical Linear Algebra. James V. Lambers MAT 610: Numerical Linear Algebra James V Lambers January 16, 2017 2 Contents 1 Matrix Multiplication Problems 7 11 Introduction 7 111 Systems of Linear Equations 7 112 The Eigenvalue Problem 8 12 Basic

More information

Iterative Methods for Linear Systems

Iterative Methods for Linear Systems Iterative Methods for Linear Systems 1. Introduction: Direct solvers versus iterative solvers In many applications we have to solve a linear system Ax = b with A R n n and b R n given. If n is large the

More information

A short course on: Preconditioned Krylov subspace methods. Yousef Saad University of Minnesota Dept. of Computer Science and Engineering

A short course on: Preconditioned Krylov subspace methods. Yousef Saad University of Minnesota Dept. of Computer Science and Engineering A short course on: Preconditioned Krylov subspace methods Yousef Saad University of Minnesota Dept. of Computer Science and Engineering Universite du Littoral, Jan 19-3, 25 Outline Part 1 Introd., discretization

More information

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Eugene Vecharynski 1 Andrew Knyazev 2 1 Department of Computer Science and Engineering University of Minnesota 2 Department

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

c 2015 Society for Industrial and Applied Mathematics

c 2015 Society for Industrial and Applied Mathematics SIAM J. SCI. COMPUT. Vol. 37, No. 2, pp. C169 C193 c 2015 Society for Industrial and Applied Mathematics FINE-GRAINED PARALLEL INCOMPLETE LU FACTORIZATION EDMOND CHOW AND AFTAB PATEL Abstract. This paper

More information

Optimal multilevel preconditioning of strongly anisotropic problems.part II: non-conforming FEM. p. 1/36

Optimal multilevel preconditioning of strongly anisotropic problems.part II: non-conforming FEM. p. 1/36 Optimal multilevel preconditioning of strongly anisotropic problems. Part II: non-conforming FEM. Svetozar Margenov margenov@parallel.bas.bg Institute for Parallel Processing, Bulgarian Academy of Sciences,

More information

1 Number Systems and Errors 1

1 Number Systems and Errors 1 Contents 1 Number Systems and Errors 1 1.1 Introduction................................ 1 1.2 Number Representation and Base of Numbers............. 1 1.2.1 Normalized Floating-point Representation...........

More information

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012. Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems

More information