ALGEBRAIC MULTILEVEL METHODS FOR GRAPH LAPLACIANS


The Pennsylvania State University
The Graduate School
Department of Mathematics

ALGEBRAIC MULTILEVEL METHODS FOR GRAPH LAPLACIANS

A Dissertation in Mathematics
by
Yao Chen

Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

August 2012

The dissertation of Yao Chen was reviewed and approved [1] by the following:

James Brannick, Assistant Professor of Mathematics, Co-Chair of Committee, Dissertation Co-Advisor
Ludmil Zikatanov, Professor of Mathematics, Co-Chair of Committee, Dissertation Co-Advisor
Jinchao Xu, Francis R. and Helen M. Pentz Professor of Science
Jesse Barlow, Professor of Computer Science & Engineering and Statistics
Svetlana Katok, Professor of Mathematics, Chair of Graduate Program

[1] Signatures are on file in the Graduate School.

Abstract

This dissertation presents estimates of the convergence rate and computational complexity of an algebraic multilevel preconditioner for graph Laplacian problems. The aim is to construct fast and robust multilevel solvers, and this is achieved by balancing the convergence rate and the complexity of an aggregation based multilevel method. The main theoretical result consists of two parts: first, the relation between a graph partitioning and the convergence rate of the corresponding aggregation based two-level method is established; next, a multilevel method with polynomial coarse level corrections is constructed in such a way that both its convergence rate and its complexity can be estimated simultaneously. Numerical tests further indicate the sharpness of the theoretical estimates. Some other results of interest, including a generalized definition of strength of connection, an optimal convergence condition for the algebraic multilevel iterations, and parallel implementations of the numerical solvers, are also addressed.

Table of Contents

List of Tables
List of Figures
Acknowledgments

Chapter 1. Introduction

Chapter 2. Energy Norm Estimation
    Introduction
    Problem Description, Notation and Preliminary Results
        Graphs and Graph Laplacians
        Graph Partitioning, Coarse Spaces and Coarse Graphs
    Energy Norm Estimates using Commutative Diagrams
        Motivation
        Discrete Taylor Expansion
        Commutative Diagram
        Local Stability Measure and a Subgraph Partitioning Algorithm
    Direct $\|Q\|_A$ Norm Estimations on Unweighted Graphs
        Motivation
        The Height of a Tree
        Tree Covering on Graphs
        p-shape Regular Subgraphs
    Conclusion

Chapter 3. Multilevel Analysis
    Introduction
    XZ Identity and V-cycles
    Two-level Methods
        Two-level Stability Estimates for Matching
        A Two-level Preconditioner
        Convergence Estimate for Matching
    Algebraic Multilevel Iterations
        Motivation
        Algebraic Multilevel Iterations Based on Matching
        Some Corollaries
    Numerical Results
        An Exact Implementation of the AMLI Method
        Modified AMLI Solver for Matching
        On Unstructured Grids
    Conclusion

Chapter 4. Applications
    Introduction
    Anisotropic Diffusion Equations with Constant Coefficients
    Anisotropic Diffusion Equations with Variable Coefficients
        Subgraph Matching Algorithm
        Variable Direction of Strong Anisotropy
    Parallel Random Aggregation Algorithm
        A Maximal Independent Set Algorithm
        Parallel Graph Aggregation Algorithm
    A Construction of Coarse Spaces for Arbitrary Smoothers
        Preliminary and Motivation
        Rank One Tests and Convergence Rates of Two-level Methods
        Subgraph Reshaping Algorithm
    Conclusions

References

List of Tables

2.1 Two $n \times n$ grids connected by one edge
Two $n \times n$ grids connected by n edges
$\|Q\|_A^2$ for two chains of length N overlapping M vertices
Results for PCG with the standard AMLI preconditioner applied to graph Laplacians defined on 2D grids
Results for PCG with the standard AMLI preconditioner applied to graph Laplacians defined on 3D grids
Results for PCG with the modified AMLI preconditioner applied to graph Laplacians defined on 2D grids
Results for PCG with the modified AMLI preconditioner applied to graph Laplacians defined on 3D grids
Results for PCG with the standard and modified AMLI preconditioners applied to the graph Laplacian defined on 2D unstructured grids of size n
Results for PCG with the standard and modified AMLI preconditioners applied to the graph Laplacian defined on 3D unstructured grids of size n
The condition number estimates of the two-level preconditioned system for Problem (4.1) with a(x) defined by (4.2), for various problem sizes and choices of ɛ
Condition numbers of the system with the two-level matching based preconditioner, for a fixed ɛ
Condition numbers of the system with the two-level matching based preconditioner, for a $32^2$ grid and varying ɛ
Condition numbers of the system with the two-level matching based preconditioner, for a $32^2$ grid and various θ

4.5 The condition number estimates of the two-level preconditioned system for Problem (4.1) with a(x) defined by (4.4), for θ = π/4 and a fixed grid size
The condition number estimates of the two-level preconditioned system for Problem (4.1) with a(x) defined by (4.4), for θ = π/4 and a fixed ɛ
The condition number estimates of the system preconditioned by the two-level method for Problem (4.1) with a(x) defined by (4.5), for various problem sizes
The condition number estimates for Problem (4.1) with a(x) defined by (4.5), preconditioned by the two-level method, for various problem sizes and choices of ɛ. Here, the post-processing step is used to disaggregate the subgraphs

List of Figures

2.1 Matching M on a graph G (left) and the coarse graph $G_c$ (right)
Assembling a tree of depth n
Two trees on square grids connected by one edge
An unbalanced tree/path covering on square grids connected by several edges
Path covering on square grids where all paths are of the same length
Two links of length n overlapped by m vertices
Path covering on square grids where some paths are shared by two external edges
Matching M on a subset of Ω
Horizontal (left) and vertical (right) matching of subgraphs
Part of the finite element grid that is considered as a graph Laplacian
The direction of strong anisotropy, the finite element grid, and the local stiffness matrix of the non-grid-aligned problem
Possible ways of grouping the vertex v
Different ways of grouping the vertex v
Different possibilities to group the vertex v
Partitioning obtained on a $16^2$ grid with θ = π/4 and a fixed ɛ
Direction of the strong anisotropy (left) and partitioning obtained on a $16^2$ grid (right)
Application of the subgraph reshaping algorithm for 3 passes results in an improvement of the two-level convergence rate

Acknowledgments

I am grateful to my thesis advisors, James Brannick and Ludmil Zikatanov, for the encouragement, patience, and enthusiasm they have shown me. I am also grateful to my other committee members, Jinchao Xu and Jesse Barlow, for their insightful suggestions, enlightening comments on my work, and valuable discussions on a variety of pioneering topics.

Chapter 1
Introduction

Multigrid methods are widely used for solving large sparse linear systems and are arguably the most effective numerical solvers for such systems. The use of the underlying differential equations and grid information in the design and development of Geometric Multigrid methods makes it possible to prove their uniform convergence and complexity [19, 26, 39]. In contrast, Algebraic Multigrid (AMG) methods, interchangeably named Algebraic Multilevel methods, do not require the underlying equations in explicit form or grid structures. Instead, an AMG method uses the entries of the coefficient matrix of the linear system to construct the multigrid hierarchy. As a result, designing an AMG method or proving its convergence is much more challenging. The basic AMG algorithm uses a setup phase to construct a nested sequence of coarse spaces that are then used in the solve phase to compute the solution of the linear system. The interpolation operators from coarse spaces to fine spaces are defined so that they are accurate for the algebraically smooth errors that cannot be eliminated by Jacobi or Gauss-Seidel iterations. The two main approaches to the AMG setup algorithm are classical AMG [8, 10] and smoothed aggregation AMG [38, 33, 18, 42, 29, 41, 40, 15, 16, 29, 43, 12, 17], which are distinguished by the type of coarse variables and the interpolation they use. The classical AMG methods use the idea that smooth error varies little along strongly connected nodes [8, 9, 32, 36]; therefore, the coarse variables are chosen using a coloring algorithm designed to find a suitable maximal independent subset of the fine variables, and rows of interpolation are then constructed for each fine point from its neighboring coarse points. The smoothed aggregation AMG methods define interpolation by applying a smoother to a tentative prolongation operator corresponding to a partition of unity. Both approaches have been successfully analyzed for and applied to elliptic boundary value problems.
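To make the setup/solve split concrete, the sketch below shows a minimal two-grid correction in Python, assuming a weighted Jacobi smoother and a given prolongation P (e.g., piecewise constant over aggregates). The function and parameter names are illustrative only; this is not the dissertation's implementation.

```python
import numpy as np

def jacobi(A, u, f, omega=2.0 / 3.0, nu=2):
    """nu sweeps of weighted Jacobi smoothing applied to A u = f."""
    Dinv = 1.0 / A.diagonal()
    for _ in range(nu):
        u = u + omega * Dinv * (f - A @ u)
    return u

def two_grid_cycle(A, P, u, f):
    """One two-grid cycle: pre-smooth, coarse-grid correction, post-smooth."""
    Ac = P.T @ A @ P                                    # Galerkin coarse-level operator (setup phase)
    u = jacobi(A, u, f)                                 # pre-smoothing
    r = f - A @ u                                       # fine-level residual
    ec = np.linalg.lstsq(Ac, P.T @ r, rcond=None)[0]    # coarse solve (lstsq, since a Laplacian Ac is singular)
    u = u + P @ ec                                      # coarse-grid correction
    return jacobi(A, u, f)                              # post-smoothing
```

Recursing on the coarse solve, instead of solving with Ac directly, yields V-cycles, W-cycles, or the AMLI cycles analyzed in Chapter 3.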

From a theoretical point of view, unsmoothed aggregation based AMG setup algorithms are particularly attractive, since their simplicity allows for direct calculation of a variety of key quantities used in estimating their convergence and complexity properties. The idea of aggregating unknowns to coarsen a system of discretized partial differential equations dates back to work by Leont'ev in 1959 [31] and has since been studied extensively [18, 29, 20, 11]. Such an AMG setup can lead to a two-level method for graph Laplacians of general graphs exhibiting a uniform convergence rate. The convergence of AMG V-cycles and W-cycles has been studied before in various contexts [26, 39, 4, 14]. Other solve cycles, e.g., Algebraic Multilevel Iterations (AMLI) [2, 3, 44] and K-cycles [34], have been proposed as preconditioners for the Conjugate Gradient iteration to obtain nearly optimal solvers. These solve cycles imply different philosophies and procedures for designing AMG methods: either assume V-cycles or W-cycles in the solve phase and then design a good coarsening for the setup phase, or assume a simple geometric or algebraic coarsening in the setup phase and then determine parameters for AMLI-cycles or K-cycles that converge fast. How to design both the setup phase and the solve phase such that the combination leads to a rapidly convergent AMG method is an important question, and the aim of this dissertation is to study this topic in detail. The remainder of the dissertation is organized as follows. First, in Chapter 2, we estimate the stability constant, or the energy norm of the projection operator, for a multilevel hierarchy constructed by recursive graph partitioning. Then, in Chapter 3, we analyze the convergence of various solve cycles for the resulting multilevel hierarchy. We also derive convergence estimates for ν-fold W-cycles and AMLI-cycles in a general setting. Finally, in Chapter 4, we present applications with numerical results obtained from serial and parallel AMG methods, and we demonstrate the performance of the parallel AMG algorithms we designed and implemented.

Chapter 2
Energy Norm Estimation

2.1 Introduction

Let A be the graph Laplacian of a graph G and let M be a partitioning of G. Let Q be the $\ell^2$ projection onto the space of vectors that are piecewise constant on each subgraph in the partitioning M (see Section 2.2 for formal definitions of A, G, M, and Q). Our goal is to estimate the energy norm (A-seminorm) of Q, denoted $\|Q\|_A$. The boundedness of this norm is usually called the stability condition, which leads to convergence rate estimates for two-level and multilevel methods. The following results show the importance of the norm $\|Q\|_A$. The abstract multigrid theory [4, 45, 46, 43] indicates that the convergence rate of a multilevel method crucially depends on the energy stability ($\|\cdot\|_A$-stability) of the orthogonal projection on the coarse space. This is equivalent to a bound of the form $\|Q\|_A \le C$ for a constant C. In fact, there is no general V-cycle multilevel convergence estimate known to date that does not depend on a bound of $\|Q\|_A$ (or of another projection). Our idea is to measure $\|Q\|_A$ by considering whether $Qv \approx v$ for algebraically smooth v. However, evaluating $\|Q\|_A$ is not straightforward, and thus we localize the estimate of $\|Q\|_A$ using a commutative diagram, involving discrete gradients and $\ell^2$ projections, that holds in a general algebraic setting. We mention that similar commutative diagrams are considered for finite element spaces in [1] and for agglomerated finite elements in [35]. In general, the smaller the bound on $\|Q\|_A$, the better (smaller) the convergence rate bound that can be established. In this chapter, we give sharp estimates of the norm $\|Q\|_A$ for a general graph partitioning, and also explore the use of such estimates to address the following questions:

i. What kinds of partitionings give a small $\|Q\|_A$? Is the estimate we derive sharp for such cases?

ii. According to the formulation of the norm $\|Q\|_A$, what types of partitionings give a large $\|Q\|_A$? Is the estimate we derive sharp for such cases?

The first question (i) is important because a small bound on $\|Q\|_A$ can result in a rapidly converging two-level method, which ultimately leads to an optimal multilevel method using, e.g., V-cycles or Algebraic Multilevel Iterations (AMLI). Besides the theoretical analysis that answers this question, we design a black-box algorithm to find an approximate minimizer of the norm $\|Q\|_A$, where A is a graph Laplacian corresponding to a weighted graph. We note that the partitioning that yields the smallest bound on the $\|Q\|_A$ norm can be difficult to construct on graphs corresponding to unstructured, or even structured, grids, and can be considered costly for certain applications. This makes the second question (ii) relevant. For instance, it can be shown that a random matching (aggregation of two vertices) on an unstructured grid whose maximum degree is 4 results in a small $\|Q\|_A$. However, it is observed that a random aggressive coarsening (aggregation of a large number of vertices) usually, but not always, gives an acceptable $\|Q\|_A$ norm on the same grid. While an algorithm that minimizes the norm $\|Q\|_A$ might be too costly, a random partitioning algorithm can be used in practice if we introduce heuristics aimed at avoiding those partitionings that lead to the worst $\|Q\|_A$ norm. This ultimately balances the complexity of the algorithm and the expected quality of the partitioning for a large variety of problems. We finally remark that several different methods of estimating $\|Q\|_A$ are introduced in this chapter, and all can be applied to graph Laplacians A corresponding to very general graphs, e.g., unweighted graphs and weighted graphs with positive and negative weights. We, however, limit the scope here to a smaller set of matrices A, e.g., those which are unweighted graph Laplacians. In Section 2.2, we formulate the problem and introduce the notation. In Section 2.3, a commutative diagram is introduced and methods of estimating the $\|Q\|_A$ norm through other quantities are established. In Section 2.4, we give direct estimates of the $\|Q\|_A$ norm and discuss sufficient and necessary conditions on a partitioning that ensure a small $\|Q\|_A$ norm.

We also relate this energy norm to the Poincaré constant of an unweighted graph. Later in this chapter and in Chapter 4, we conduct numerical tests to show the sharpness of the estimates. We also suggest a black-box algorithm to generate a partitioning which results in a small $\|Q\|_A$ norm.

2.2 Problem Description, Notation and Preliminary Results

Graphs and Graph Laplacians

Consider an undirected weighted connected graph $G = \{V, E, W\}$, which is a triplet of sets of vertices, edges, and weights. Here a weight $w_k$ is associated with each edge $k = (i, j) \in E$. The graph Laplacian A corresponding to a connected graph G is a matrix defined via the following bilinear form:
$$(Au, v) = \sum_{k=(i,j)\in E} w_k (u_i - u_j)(v_i - v_j), \qquad u, v \in \mathbb{R}^{|V|}, \qquad (2.1)$$
where $|V|$ is the number of vertices in G. Our goal is to find a solution of linear systems of the form
$$Au = f, \qquad (2.2)$$
where A is the weighted graph Laplacian, i.e., the bilinear form (2.1) in matrix form, and $f \in \mathbb{R}^{|V|}$. One of our goals is to develop graph Laplacian solvers for (2.2) where A comes from finite element or finite difference discretizations of elliptic partial differential equations (PDEs) with Neumann boundary conditions; therefore we further assume the following:

Assumption. A is positive semidefinite with kernel spanned by $\mathbf{1} = (1, \ldots, 1)^T$.

For A corresponding to an unweighted graph, this assumption is equivalent to the statement that the graph has only one connected component. Then, under the additional assumption that $(f, \mathbf{1}) = 0$, there is a unique solution of (2.2) satisfying $(u, \mathbf{1}) = 0$. For a graph G, we define the discrete gradient operator $B : \mathbb{R}^{|V|} \to \mathbb{R}^{|E|}$, which for $u \in \mathbb{R}^{|V|}$ is defined as
$$(Bu)_k = u_i - u_j, \qquad k = (i, j) \in E, \quad i < j.$$

Here $(Bu)_k$ denotes the $k = (i,j)$-th component of $Bu$, and $u_i$ and $u_j$ are components of u. If $e_k \in \mathbb{R}^{|E|}$ is the k-th standard Euclidean basis vector in $\mathbb{R}^{|E|}$ and $e_i$, $e_j$ are standard basis vectors in $\mathbb{R}^{|V|}$, we have
$$B^T e_k = e_i - e_j, \qquad k = (i, j) \in E, \quad i < j.$$

Remark. We consider here an undirected graph and pick particular directions on the edges $(i,j) \in E$ by numbering the vertices with distinct integers and fixing that $(i,j) \in E$ if and only if $i < j$. Our later results are, however, independent of this choice.

Define $D : \mathbb{R}^{|E|} \to \mathbb{R}^{|E|}$ as the diagonal matrix with $D_{kk} = w_k$. Then A can be expressed as $A = B^T D B$, and equation (2.2) can be written in variational form: find u such that
$$(DBu, Bv) = (f, v) \quad \text{for all } v \in \mathbb{R}^{|V|}. \qquad (2.3)$$

Remark. A discretization of elliptic partial differential equations with Dirichlet boundary conditions can result in a positive definite bilinear form, denoted $\tilde A$ and defined by
$$(\tilde A u, v) = \sum_{k=(i,j)\in E} (u_i - u_j)(v_i - v_j) + \sum_{i \in V} u_i v_i = (\tilde A_s u, v) + (\tilde A_t u, v). \qquad (2.4)$$
Here $\tilde A_s$ (an unweighted graph Laplacian) and $\tilde A_t$ (a diagonal matrix) correspond to the bilinear forms defined by the two summations in (2.4), respectively. By introducing a Lagrange multiplier y (cf. [13]), the system $\tilde A u = f$ can be rewritten as the augmented linear system
$$\begin{pmatrix} \tilde A_s + \tilde A_t & \tilde A_t \mathbf{1} \\ \mathbf{1}^T \tilde A_t & \mathbf{1}^T \tilde A_t \mathbf{1} \end{pmatrix} \begin{pmatrix} u \\ y \end{pmatrix} = \begin{pmatrix} f \\ \mathbf{1}^T f \end{pmatrix}.$$
The augmented linear system has a positive semidefinite coefficient matrix, and a solution of this system directly yields the solution of $\tilde A u = f$.
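As a concrete illustration of these definitions, the following sketch assembles the discrete gradient B, the weight matrix D, and the graph Laplacian $A = B^TDB$ for a small weighted graph and checks the assumption on the kernel. The graph and all variable names are hypothetical, chosen only for illustration.

```python
import numpy as np

# A small weighted graph: vertices 0..3, edges (i, j) with i < j, and weights w_k.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
weights = [1.0, 2.0, 1.0, 0.5]
n, m = 4, len(edges)

# Discrete gradient B: row k of B is e_i^T - e_j^T for edge k = (i, j).
B = np.zeros((m, n))
for k, (i, j) in enumerate(edges):
    B[k, i], B[k, j] = 1.0, -1.0

D = np.diag(weights)     # diagonal weight matrix, D_kk = w_k
A = B.T @ D @ B          # graph Laplacian defined by (2.1)-(2.3)

ones = np.ones(n)
print(np.allclose(A @ ones, 0.0))                  # 1 spans the kernel (graph is connected)
print(np.all(np.linalg.eigvalsh(A) >= -1e-12))     # A is positive semidefinite
```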

Graph Partitioning, Coarse Spaces and Coarse Graphs

A graph partitioning of $G = (V, E)$ is a set of subgraphs $G_i = (V_i, E_i)$ such that $\bigcup_i V_i = V$ and $V_i \cap V_j = \emptyset$ for $i \neq j$. We further assume that all subgraphs are non-empty and connected. The simplest non-trivial example of such a graph partitioning is a matching, i.e., a collection (a subset M) of edges in E such that no two edges in M are incident. For a given graph partitioning, a subspace of $\mathbb{R}^{|V|}$ is defined as
$$V_c = \{\, v \in \mathbb{R}^{|V|} : v \text{ is constant on each } V_i \,\}.$$
Each vertex of G belongs to exactly one connected subgraph $G_k$ of the partitioning, and the vectors in $V_c$ are constant on these connected subgraphs. Of importance will be the $\ell^2$ orthogonal projection onto $V_c$, which is denoted by Q and defined as follows:
$$(Qv)_i = \frac{1}{|V_k|} \sum_{j \in V_k} v_j, \qquad i \in V_k. \qquad (2.5)$$
Another way of defining the projection Q is by first defining the prolongation operator P as
$$(P)_{ik} = \begin{cases} 1, & i \in V_k; \\ 0, & i \notin V_k; \end{cases}$$
and then letting $Q = P(P^TP)^{-1}P^T$. To understand the matrix A restricted to the coarse space $V_c$, we introduce the coarse graph. Given a graph partitioning of an unweighted graph G, the coarse graph $G_c = \{V_c, E_c\}$ is defined by letting all vertices in a subgraph form an equivalence class, with $V_c$ and $E_c$ the quotient sets of V and E under this equivalence relation. That is, each vertex in $V_c$ corresponds to a subgraph in the partitioning of G, and the edge $k = (i, j)$ exists in $E_c$ if and only if the i-th and j-th subgraphs are connected in the graph G. Figure 2.1 shows an example of a matching of a graph and the resulting coarse graph. Here the weights of the coarse graph are not specified.

One choice is to let the coarse graph be the Galerkin projection of the graph Laplacian A onto the coarse space, i.e., $P^TAP$; the weight of the edge $k = (i,j)$ in the coarse graph is then determined by $(P^TAP)_{ij}$, an off-diagonal entry of the matrix $P^TAP$. Another possibility is to let the coarse graph $G_c$ be an unweighted graph, since the graph G is given unweighted. Both ways of defining the coarse graph will be used later in the construction of multilevel schemes.

Fig. 2.1: Matching M on a graph G (left) and the coarse graph $G_c$ (right).

Remark. There are a variety of methods to form a partitioning of G. A common assumption on the partitioning (say, in an AMG setting) is that the number of vertices in every subgraph is uniformly bounded, independently of the size of the graph. Such a condition allows for control of the sparsity of the coarse-level matrix $A_c = P^TAP$. For example, if A is the graph Laplacian of a planar graph G, then one can prove that a partitioning of G into subgraphs gives a piecewise constant interpolation P (constant on each of the subgraphs) that results in an $A_c$ which corresponds to a planar graph as well (the proof is a simple application of Euler's formula relating the number of vertices, edges and faces of a graph). Such a property is not present, in general, in constructions of P that use smoothing of the prolongation.
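The sketch below, a hypothetical illustration rather than code from the dissertation, builds the piecewise constant prolongation P, the projection $Q = P(P^TP)^{-1}P^T$, and the Galerkin coarse operator $A_c = P^TAP$ for a given list of aggregates, and evaluates $\|Q\|_A$ numerically for small dense problems.

```python
import numpy as np

def aggregation_operators(n, aggregates):
    """Piecewise constant prolongation P and l2 projection Q for a partitioning.
    `aggregates` is a list of vertex-index lists, one per subgraph V_k."""
    P = np.zeros((n, len(aggregates)))
    for k, Vk in enumerate(aggregates):
        P[Vk, k] = 1.0
    Q = P @ np.linalg.inv(P.T @ P) @ P.T      # Q = P (P^T P)^{-1} P^T
    return P, Q

def energy_norm_Q(A, Q):
    """||Q||_A for a graph Laplacian A with kernel span{1}: the spectral norm of
    A^{1/2} Q A^{+1/2}.  This is valid because Q 1 = 1 and A^{+1/2} 1 = 0."""
    w, V = np.linalg.eigh(A)
    w = np.clip(w, 0.0, None)
    inv_sqrt = np.zeros_like(w)
    inv_sqrt[w > 1e-12] = 1.0 / np.sqrt(w[w > 1e-12])
    A_half = (V * np.sqrt(w)) @ V.T
    A_pinv_half = (V * inv_sqrt) @ V.T
    return np.linalg.norm(A_half @ Q @ A_pinv_half, 2)
```

With the 4-vertex graph from the previous sketch, P, Q = aggregation_operators(4, [[0, 1], [2, 3]]) corresponds to a matching, A_c = P.T @ A @ P is the coarse Laplacian, and energy_norm_Q(A, Q) gives a concrete value that can be compared against the bounds derived in the next section.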

2.3 Energy Norm Estimates using Commutative Diagrams

Motivation

The energy norm $\|Q\|_A$ is defined by
$$\|Q\|_A = \sup_{u:\,\|u\|_A \neq 0} \frac{\|Qu\|_A}{\|u\|_A}.$$
Computing this supremum is equivalent to finding the largest eigenvalue of the generalized eigenvalue problem $QAQu = \lambda^2 Au$, which in practice requires iterative methods if the dimension of the matrix A is large. Noticing that it is relatively difficult either to obtain a theoretical bound on the energy norm $\|Q\|_A$ or to compute it efficiently using numerical methods, we introduce a commutative diagram that describes the action of Q in another way, so that we achieve the following: the energy norm $\|Q\|_A$ for any partitioning can be bounded by a set of measures, each of which can be estimated and numerically optimized locally. We emphasize that our goal is to solve graph Laplacian problems; therefore a numerical method capable of assessing and optimizing the quality of a partitioning locally is very useful within an AMG setup process. Later on, we discuss the use of this local measure to select aggregates for general graphs and show its efficacy for partial differential equations with anisotropic diffusion. Let B be the discrete gradient of a graph G, and let Q, as defined in (2.5), be the $\ell^2$ projection with respect to a given partitioning M. Assume that there exists a projection $\Pi : \mathbb{R}^{|E|} \to \mathbb{R}^{|E|}$ such that
$$\Pi B = BQ. \qquad (2.6)$$

Later we construct such a Π and prove the commutative property, which is expressed by the following commutative diagram:
$$\begin{array}{ccc} \mathbb{R}^{|V|} & \xrightarrow{\;B\;} & \mathbb{R}^{|E|} \\[2pt] \downarrow{\scriptstyle Q} & & \downarrow{\scriptstyle \Pi} \\[2pt] V_c \subset \mathbb{R}^{|V|} & \xrightarrow{\;B\;} & \mathbb{R}^{|E|} \end{array}$$
Before giving a very general description of Π for an arbitrary positive semidefinite graph Laplacian A, let us assume that all weights in the graph corresponding to A are positive (therefore D has only positive diagonal entries). Then the energy seminorm of Q can be estimated by the $\ell^2$ norm of $D^{1/2}\Pi D^{-1/2}$ as follows:
$$\|Q\|_A^2 = \sup_u \frac{(Qu, Qu)_A}{(u, u)_A} = \sup_u \frac{(DBQu, BQu)}{(DBu, Bu)} = \sup_u \frac{(D\Pi Bu, \Pi Bu)}{(DBu, Bu)} = \sup_{v = D^{1/2}Bu} \frac{(D\Pi D^{-1/2}v, \Pi D^{-1/2}v)}{(v, v)} \le \big\|D^{1/2}\Pi D^{-1/2}\big\|^2 = \rho(X),$$
where $X = D^{1/2}\Pi D^{-1}\Pi^T D^{1/2}$, and $\rho(X)$ is the spectral radius of X, which is a real matrix since the diagonal entries of D are positive. Thus, to estimate the energy norm of the projection Q, we need to estimate $\rho(X)$. Since X is a symmetric positive semidefinite matrix, the Schwarz inequality gives
$$X_{kl}^2 = \big((Xe_k, e_l)\big)^2 \le (Xe_k, e_k)(Xe_l, e_l) = X_{kk} X_{ll}.$$
We introduce $N_X(k)$, which describes the sparsity pattern of X:
$$N_X(k) = \{\, j : X_{kj} \neq 0 \,\}.$$

We then estimate the spectral radius $\rho(X)$ as follows:
$$\rho(X) \le \|X\|_{\ell^\infty} = \max_k \sum_{l=1}^{|E|} |X_{kl}| = \max_k \sum_{l \in N_X(k)} |X_{kl}| \le \max_k \sum_{l \in N_X(k)} X_{kk}^{1/2} X_{ll}^{1/2} \le \max_k \Big( |N_X(k)| \max_{l \in N_X(k)} X_{ll} \Big).$$
In summary, we are able to estimate the energy stability (in the A-seminorm) of the projection provided that the following ingredients are available:

i. A partitioning of the vertices of the graph G into non-overlapping sets.

ii. A projection Π satisfying the commutative property (2.6) with respect to the given partitioning.

We later show that any entry of X can be computed using only local information of the partitioning, which in turn suggests focusing on constructing the commutative quantity Π such that the sparsity pattern of X, or the entries of the resulting X, can be numerically optimized.

Discrete Taylor Expansion

Before giving a proof of the commutative diagram (2.6), or of its generalized version in Lemma 2.3.2, we prove a preliminary lemma. Considering that the action of Q, as defined in (2.5), is to average a vector over the aggregates (subgraphs), we first introduce a lemma that represents this averaging process in another way.

Lemma. Let $G_i = \{V_i, E_i, W_i\}$ be a connected subgraph of G, and let $B_i$, $D_i$ be the discrete gradient and weight matrix such that $B_i^T D_i B_i$ equals $A_i$, the graph Laplacian corresponding to $G_i$. Choose any diagonal matrix $\widetilde D_i$, not necessarily equal to $D_i$ or with the same sparsity pattern, such that the kernel of $\hat A_i = B_i^T \widetilde D_i B_i$ is one dimensional. Let $e_k$ be a standard basis vector in $\mathbb{R}^{|V_i|}$ and $\mathbf{1}$ the constant vector in $\mathbb{R}^{|V_i|}$. We call the following identity the discrete Taylor expansion:
$$(u, \mathbf{1}) - |V_i|\,(u, e_k) = \big((\hat A_i + e_k e_k^T)^{-1}\mathbf{1},\ \hat A_i u\big), \qquad u \in \mathbb{R}^{|V_i|}. \qquad (2.7)$$

Proof. First we show that $\hat A_i + e_k e_k^T$ is invertible, by assuming $(\hat A_i + e_k e_k^T)v = 0$ and proving that v must be the zero vector. We compute
$$(\hat A_i + e_k e_k^T)v = 0 \;\Rightarrow\; \mathbf{1}^T(\hat A_i + e_k e_k^T)v = 0 \;\Rightarrow\; e_k^T v = 0,$$
which implies that the k-th component of v is zero. Similarly,
$$(\hat A_i + e_k e_k^T)v = 0 \;\Rightarrow\; (I - e_k\mathbf{1}^T)(\hat A_i + e_k e_k^T)v = 0 \;\Rightarrow\; \hat A_i v = 0,$$
which implies that v is a multiple of the vector $\mathbf{1}$. Together with $e_k^T v = 0$, this shows that v can only be the zero vector, and hence the kernel of $\hat A_i + e_k e_k^T$ contains only the zero vector. Then we use the equality $(\hat A_i + e_k e_k^T)\mathbf{1} = e_k$ and compute
$$\big((\hat A_i + e_k e_k^T)^{-1}\mathbf{1},\ \hat A_i u\big) = \big((\hat A_i + e_k e_k^T)^{-1}\mathbf{1},\ (\hat A_i + e_k e_k^T - e_k e_k^T) u\big) = (\mathbf{1}, u) - \big((\hat A_i + e_k e_k^T)^{-1}\mathbf{1},\ e_k e_k^T u\big) = (\mathbf{1}, u) - \big((\hat A_i + e_k e_k^T)^{-1}e_k,\ \mathbf{1}\big)(e_k, u) = (\mathbf{1}, u) - (\mathbf{1}, \mathbf{1})(e_k, u),$$
which, since $(\mathbf{1}, \mathbf{1}) = |V_i|$, is exactly (2.7).

Notice that the only assumption of the lemma is that the kernel of $\hat A_i$ is one dimensional. The weights used in $\widetilde D_i$ can differ from those in $D_i$, and the resulting $\hat A_i$ need not be positive definite or semidefinite. Furthermore, some diagonal entries of $\widetilde D_i$ can be zero, so $\hat A_i$ can have a different sparsity pattern compared with $A_i$. In sum, the matrix $\hat A_i$ can be considered as the graph Laplacian of a subgraph of the graph corresponding to $A_i$, with possibly different weights on the edges. This flexibility of the discrete Taylor expansion gives a lot of freedom to express certain quantities, e.g., the inner product of a vector with the constant vector, in various forms. It is also a useful tool for handling non-M-matrices, where the subgraphs can be indefinite. The discrete Taylor expansion is used extensively later in the construction of the commutative diagrams.
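The identity (2.7) is easy to verify numerically. The sketch below does so on a small path graph, taking $\widetilde D_i = D_i$ (one admissible choice); the graph, weights, and names are hypothetical.

```python
import numpy as np

# Numerical check of the discrete Taylor expansion (2.7) on a small connected subgraph.
edges = [(0, 1), (1, 2), (2, 3)]              # a path on 4 vertices
weights = [1.0, 3.0, 0.5]
n = 4

B = np.zeros((len(edges), n))
for idx, (i, j) in enumerate(edges):
    B[idx, i], B[idx, j] = 1.0, -1.0
A_hat = B.T @ np.diag(weights) @ B            # \hat A_i = B_i^T \tilde D_i B_i, here with \tilde D_i = D_i

u = np.random.default_rng(0).standard_normal(n)
ones = np.ones(n)

for k in range(n):
    e_k = np.zeros(n); e_k[k] = 1.0
    lhs = u @ ones - n * u[k]                                            # (u, 1) - |V_i| (u, e_k)
    rhs = np.linalg.solve(A_hat + np.outer(e_k, e_k), ones) @ (A_hat @ u)
    assert np.isclose(lhs, rhs)                                          # (2.7) holds for every k
```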

Commutative Diagram

Here we show the existence of the projection Π in the commutative diagram $\Pi B = BQ$ by constructing one. We first give a simple description of Π, and later a rigorous formulation using mappings between local and global vertex/edge indices.

Lemma 2.3.2. Let $G = \{V, E, W\}$ be a weighted graph with discrete gradient B. Given a partitioning $\{G_i\}$ of the graph such that $V = \bigcup_i V_i$, define Q to be the $\ell^2$ projection onto the space of vectors whose components are constant on each vertex set $V_i$. There exists a family of operators $\Pi(\widetilde D) : \mathbb{R}^{|E|} \to \mathbb{R}^{|E|}$ such that
$$\Pi(\widetilde D)\, B = BQ,$$
i.e., the following diagram commutes:
$$\begin{array}{ccc} \mathbb{R}^{|V|} & \xrightarrow{\;B\;} & \mathbb{R}^{|E|} \\[2pt] \downarrow{\scriptstyle Q} & & \downarrow{\scriptstyle \Pi(\widetilde D)} \\[2pt] V_c \subset \mathbb{R}^{|V|} & \xrightarrow{\;B\;} & \mathbb{R}^{|E|} \end{array}$$
Here $\widetilde D$ is considered as a parameter of Π.

Proof. We construct a $\Pi(\widetilde D)$ by computing the action of the operator BQ on an arbitrary vector $u \in \mathbb{R}^{|V|}$ and then defining $\Pi(\widetilde D)$ so that $\Pi(\widetilde D)Bu = BQu$. Call an edge $k \in E$ an internal edge if it connects two vertices in the same subgraph, and an external edge if it connects two vertices in different subgraphs. Computing the k-th component of BQu, we have
$$(BQu)_k = \begin{cases} 0, & k = (i,j),\ i \in V_l,\ j \in V_l \quad \text{(internal edges)}; \\[4pt] \dfrac{1}{|V_l|}(u, \mathbf{1}_l) - \dfrac{1}{|V_m|}(u, \mathbf{1}_m), & k = (i,j),\ i \in V_l,\ j \in V_m,\ l \neq m \quad \text{(external edges)}. \end{cases} \qquad (2.8)$$
The symbol $\mathbf{1}_l$ stands for a local constant vector, which equals 1 on the components corresponding to the vertices in $V_l$ and 0 elsewhere; $\mathbf{1}_m$ is defined similarly. To find a $\Pi(\widetilde D)$ such that $(\Pi(\widetilde D)Bu)_k$ is equal to the right-hand side of (2.8), we discuss the two cases separately.

For the first case, when k is an internal edge, we define the k-th row of $\Pi(\widetilde D)$ to be a zero row. Considering that we will use the $\ell^2$ norm of $\Pi(\widetilde D)$ to bound the desired $\|Q\|_A$ norm, we prefer to define the matrix $\Pi(\widetilde D)$ with as many zero entries as possible, which can make its $\ell^2$ norm easier to estimate. For the second case, when k is an external edge, we first write
$$(\Pi(\widetilde D)Bu)_k = \frac{1}{|V_l|}(u, \mathbf{1}_l) - \frac{1}{|V_m|}(u, \mathbf{1}_m) = (u_i - u_j) + \Big(\frac{1}{|V_l|}(u, \mathbf{1}_l) - u_i\Big) - \Big(\frac{1}{|V_m|}(u, \mathbf{1}_m) - u_j\Big). \qquad (2.9)$$
We then analyze the three terms on the right-hand side of (2.9). The first term $(u_i - u_j)$ is the difference of the vector u across the edge $k = (i,j)$, so it can be replaced by $e_k^T Bu$. For the remaining two terms, we recognize that both have the form of the left-hand side of (2.7), up to the factor $1/|V_l|$ (respectively $1/|V_m|$), so they can be rewritten in matrix form using the right-hand side of (2.7). Let $\widetilde D$ be a diagonal matrix of the same dimensions as D, but not necessarily identical to D. Let the block of $\widetilde D$ corresponding to the l-th subgraph $G_l$ be denoted by $\widetilde D_l$ (which can also differ from $D_l$, the weight matrix of the l-th subgraph), and assume that the choice of $\widetilde D$ ensures that $B_l^T \widetilde D_l B_l$ has a one-dimensional kernel. Then the discrete Taylor expansion (2.7) can be applied, and we have
$$\frac{1}{|V_l|}(u, \mathbf{1}_l) - u_i = \frac{1}{|V_l|}\big((B_l^T \widetilde D_l B_l + e_i e_i^T)^{-1}\mathbf{1}_l,\ B_l^T \widetilde D_l B_l u\big) = \frac{1}{|V_l|}\big(\widetilde D_l B_l (B_l^T \widetilde D_l B_l + e_i e_i^T)^{-1}\mathbf{1}_l,\ B_l u\big) = C_{l,i}^T B_l u,$$
where we define
$$C_{l,i} = \frac{1}{|V_l|}\,\widetilde D_l B_l \big(B_l^T \widetilde D_l B_l + e_i e_i^T\big)^{-1}\mathbf{1}_l. \qquad (2.10)$$
Here $C_{l,i}$ depends on the choice of $\widetilde D_l$, but for simplicity we do not write $C_{l,i}$ as a function of $\widetilde D$. We will give reasonable choices of $\widetilde D$ for different problems. The k-th row of the matrix $\Pi(\widetilde D)$ corresponding to an external edge k can then be defined as
$$\big(\Pi(\widetilde D)\big)_k = e_k^T + C_{l,i}^T - C_{m,j}^T.$$

In sum, one way of constructing $\Pi(\widetilde D)$ such that $\Pi(\widetilde D)B = BQ$ is given row-wise by
$$\big(\Pi(\widetilde D)\big)_k = \begin{cases} 0^T, & k = (i,j) \text{ is an internal edge}; \\[2pt] e_k^T + C_{l,i}^T - C_{m,j}^T, & k = (i,j) \text{ is an external edge}. \end{cases} \qquad (2.11)$$
The formula (2.11) gives a simplified description of the k-th row of $\Pi(\widetilde D)$, using some non-standard notation described as follows.

i. The symbol i can stand for either the local or the global index of a vertex, and which one is meant depends on the context. For example, in the expression $A_l + e_i e_i^T$, where $A_l$ is the graph Laplacian of the subgraph $G_l$, the vertex $i \in V_l$ is given by its local index, and the resulting $e_i$ is a basis vector in $\mathbb{R}^{|V_l|}$.

ii. The summation in (2.11) is understood as an operator that can add two vectors of different dimensions, such that the resulting vector, e.g., $u + v$, is in $\mathbb{R}^{|V|}$, and $(u + v)_i = u_i + v_i$, where the three occurrences of i indicate the same vertex, possibly in local or global indexing. Subtraction and inner products are defined in a similar way.

We emphasize that this non-standard notation is not used after (2.11); we use it only to highlight the crucial ingredient in constructing $\Pi(\widetilde D)$, namely the choice of the weight matrix $\widetilde D$. A more rigorous formulation of $\Pi(\widetilde D)$ is given later. A proper choice of $\widetilde D$ helps both in deriving theoretical $\|Q\|_A$ norm estimates and in developing algorithms to minimize $\|Q\|_A$. We now discuss typical choices of $\widetilde D$ for different types of graph Laplacians A. If the graph Laplacian A is an M-matrix, then the entries of the corresponding diagonal matrix D satisfy $D_{kk} > 0$. The following are some typical choices of $\widetilde D$, together with their pros and cons.

i. Choose $\widetilde D = D$. This is a very natural choice, since the graph Laplacian of any subgraph is also a positive semidefinite M-matrix with a one-dimensional kernel. When this choice of $\widetilde D$ is used to estimate the $\|Q\|_A$ norm for pairwise partitioning, or matching, it gives a smaller bound when a vertex and its most strongly connected neighbor are placed in the same aggregate, which is similar to what the classical AMG coarsening strategy indicates.

ii. Choose $\widetilde D = I$. This choice makes the formulation of $\Pi(\widetilde D)$ simple and lets us focus on the topology of the graph.

iii. Choose $\widetilde D$ such that every diagonal entry is either 1 or 0, while for every l the matrix $B_l^T \widetilde D_l B_l$ represents a spanning tree of the subgraph $G_l$. The motivation for this choice comes from the need to compute the $\|Q\|_A$ norm directly (the techniques are elaborated in Section 2.4). We want the matrices of the form $\hat A_l = B_l^T \widetilde D_l B_l$ to be as sparse as possible. To reduce the number of off-diagonal entries in the graph Laplacian $\hat A_l$, we can cut some edges of the underlying graph by assigning zero weights to those edges. At the same time, $\hat A_l$ should have a one-dimensional kernel, so the graph it corresponds to must be connected. We choose a spanning tree not only because it is a connected subgraph of a given graph with the least number of edges, but also because it has useful recursive properties: a spanning tree of a graph can be constructed by first partitioning the graph into two subgraphs, finding spanning trees of each, and then connecting the two spanning trees by an edge. Considering the difficulty of describing the topology of a general graph, we prefer such recursive constructions, which enable theoretical analysis on general graphs.

If A is positive semidefinite but not an M-matrix, we can make the following choices.

i. Choose $\widetilde D = I$. This choice works for both M-matrices and non-M-matrices. It also simplifies computations, e.g., of the $\ell^2$ norm of $\Pi(\widetilde D)$.

ii. Choose $\widetilde D$ with $\widetilde D_{kk} = |D_{kk}|$. We remark that this is not a very good choice, because numerical tests show that such a $\widetilde D$ usually does not lead to sharp $\|Q\|_A$ norm estimates.

iii. Choose $\widetilde D$ with $\widetilde D_{kk} = \max(D_{kk}, 0)$. This choice is based on the assumption that the partitioning M is given in such a way that, for every l, the graph Laplacian $A_l$ corresponding to the local graph $G_l$ is positive semidefinite. One can then prove that the subgraph corresponding to the collection of all edges with positive weights is connected, and therefore $\hat A_l$ has a one-dimensional kernel.

iv. Choose $\widetilde D = D$. This choice is based on the assumption that the partitioning M ensures that every local graph Laplacian $\hat A_l$ has a one-dimensional kernel. Numerical tests show that this choice of $\widetilde D$ gives sharp $\|Q\|_A$ norm estimates locally, which in turn inspires us to develop a black-box algorithm that performs local optimizations to minimize the $\|Q\|_A$ norm.

Besides the choices of $\widetilde D$ given above, which are diagonal and easy to describe, there are more feasible choices in which $\widetilde D$ is not diagonal. We limit our choices of $\widetilde D$ to the two lists above, which have been shown to be helpful either in the theoretical analysis or in the numerical methods. We continue the discussion of local and global $\|Q\|_A$ norm estimation using specific choices of $\widetilde D$ later in this chapter.

We now give a formulation of $\Pi(\widetilde D)$ in standard mathematical notation, with the help of indicator matrices that map between local and global indices of vertices and edges. For simplicity, $\Pi(\widetilde D)$ is written as Π in the following paragraphs. We emphasize that both Π and the vectors $C_{l,i}$ defined later are matrix functions of the weight matrix $\widetilde D$, which we have some freedom to choose. Assume that a partitioning M is given, resulting in a non-overlapping splitting of the vertex set V into $n_c$ subsets, $V = \bigcup_{l=1}^{n_c} V_l$. This induces a splitting of the graph into subgraphs $G_l = (V_l, E_l, W_l)$, $l = 1, \ldots, n_c$, whose vertices are $V_l$ and whose edges $E_l \subset E$ are those edges in E with both endpoints in $V_l$. Such edges are called internal edges. We call external edges the edges connecting a vertex in $V_l$ to a vertex in $V_m$ for $l \neq m$. For $j \in V_l$ and $1 \le l \le n_c$, let $l_V(j)$ be its local index, namely $l_V(j) \in \{1, \ldots, |V_l|\}$. We then define the indicator matrix $I_{l,V} : \mathbb{R}^{|V_l|} \to \mathbb{R}^{|V|}$ as follows.

Its action on $w \in \mathbb{R}^{|V_l|}$ is
$$(I_{l,V} w)_j = \begin{cases} w_k, & \text{if } j \in V_l \text{ and } l_V(j) = k; \\ 0, & \text{otherwise}. \end{cases} \qquad (2.12)$$
The action of its transpose is therefore
$$(I_{l,V}^T w)_k = \begin{cases} w_j, & \text{if } j \in V_l \text{ and } l_V(j) = k; \\ 0, & \text{otherwise}. \end{cases}$$
Let $l_E(j)$ be the local index of the edge $j \in E_l$; then the indicator matrix $I_{l,E}$ and its transpose are defined in a similar way. The indicator matrices are useful for describing $A_l$, the graph Laplacian of a subgraph, when restricting the discrete gradient B and the weight matrix D to the subgraph. For every vertex i there exists a unique l such that $i \in V_l$. For such l we denote
$$A_l = B_l^T D_l B_l, \qquad B_l = I_{l,E}^T B I_{l,V}, \qquad D_l = I_{l,E}^T D I_{l,E}, \qquad \mathbf{1}_l = I_{l,V}^T \mathbf{1}.$$
Namely, $A_l : \mathbb{R}^{|V_l|} \to \mathbb{R}^{|V_l|}$ is the local graph Laplacian; $B_l : \mathbb{R}^{|V_l|} \to \mathbb{R}^{|E_l|}$ is the restriction of the discrete gradient B to the subgraph (the local gradient); $D_l : \mathbb{R}^{|E_l|} \to \mathbb{R}^{|E_l|}$ is the matrix of weights corresponding to the subgraph (the local matrix of weights); $\widetilde D_l$ is a chosen local weight matrix of the same dimensions as $D_l$; and $\mathbf{1}_l \in \mathbb{R}^{|V_l|}$ is the local constant vector. The following equalities express properties of the indicator matrices.

Lemma. Let $I_{l,V}$ be the indicator matrix for the vertex set, as defined in (2.12), and let $I_{l,E}$ be the corresponding indicator matrix for the edge set (mapping a local set of edges into the global set of all edges). Then
$$I_{l,E}^T B I_{l,V} I_{l,V}^T u = I_{l,E}^T B u, \qquad I_{l,E} I_{l,E}^T D I_{l,E} I_{l,E}^T = I_{l,E} I_{l,E}^T D.$$

Proof. First we show, directly from the definitions, that the matrix $I_{l,V} I_{l,V}^T$ is the projection onto the range of $I_{l,V}$:
$$(I_{l,V} I_{l,V}^T u)_j = \begin{cases} (I_{l,V}^T u)_k, & j \in V_l \text{ and } l_V(j) = k; \\ 0, & \text{otherwise}; \end{cases} \;=\; \begin{cases} u_j, & j \in V_l; \\ 0, & j \notin V_l. \end{cases}$$

Using the characterization of $B^T$, for an edge $k \in E_l$ it follows that
$$B^T e_k = e_i - e_j \in \mathrm{Range}(I_{l,V}), \qquad k = (i, j),$$
since the vertices i and j are the endpoints of the edge k and both belong to the set $V_l$. This implies that $B^T I_{l,E} w \in \mathrm{Range}(I_{l,V})$, and therefore
$$(B^T I_{l,E} w, v) = (I_{l,V} I_{l,V}^T B^T I_{l,E} w, v) \quad \text{for all } w, v,$$
which gives the first identity. Similarly, we have
$$(I_{l,E} I_{l,E}^T u)_k = \begin{cases} u_k, & k \in E_l; \\ 0, & k \notin E_l. \end{cases}$$
That gives $(I_{l,E} I_{l,E}^T D I_{l,E} I_{l,E}^T)_{kk} = (I_{l,E} I_{l,E}^T D)_{kk}$ for all k, since D is a diagonal matrix and $I_{l,E} I_{l,E}^T$ is also a diagonal matrix with only 1 or 0 on its diagonal; this proves the second identity.

Remark. The lemma also holds for the other diagonal matrix $\widetilde D$, so that $I_{l,E} I_{l,E}^T \widetilde D I_{l,E} I_{l,E}^T = I_{l,E} I_{l,E}^T \widetilde D$.

We then restate equations (2.8) and (2.9) with the indicator matrices and use the lemma above. Fix an i and let, as before, $V_l$ be the unique subset of vertices containing i. Assume that the graph $G_l$ is connected and that the chosen weights in $\widetilde D_l$ are positive (for instance, $\widetilde D_l = D_l$); then the graph Laplacian $\hat A_l$ is positive semidefinite and the vector $\mathbf{1}_l$ spans its kernel. We then set
$$C_{l,i} := \frac{1}{|V_l|}\, \widetilde D I_{l,E} B_l \tilde A_l^{-1} \mathbf{1}_l, \qquad C_{l,i} \in \mathbb{R}^{|E|}, \qquad (2.13)$$
where
$$\tilde A_l = \hat A_l + I_{l,V}^T e_i e_i^T I_{l,V} = B_l^T \widetilde D_l B_l + I_{l,V}^T e_i e_i^T I_{l,V}.$$
Further, for $k = (i, j)$, let $V_l$ and $V_m$ be the subsets of vertices such that $i \in V_l$ and $j \in V_m$. Finally, we define row $(\Pi)_k$ of $\Pi : \mathbb{R}^{|E|} \to \mathbb{R}^{|E|}$ by
$$(\Pi)_k = \begin{cases} 0^T, & \text{if } k = (i, j) \text{ is an internal edge}; \\[2pt] e_k^T + C_{l,i}^T - C_{m,j}^T, & \text{if } k = (i, j) \text{ is an external edge}. \end{cases} \qquad (2.14)$$
With these definitions in hand, we have the following result.

Theorem. Let A be a graph Laplacian corresponding to a graph with positive weights, and let Q be the $\ell^2$-orthogonal projection onto the piecewise constant coarse space corresponding to a given partitioning. For Π defined as in (2.14) we have the commutative property
$$\Pi B = BQ. \qquad (2.15)$$

Proof. We prove the theorem by verifying that $e_k^T BQu = e_k^T \Pi Bu$ for all global edge indices k and all vectors $u \in \mathbb{R}^{|V|}$. If $k = (i, j)$ is an internal edge, then $(Qu)_i = (Qu)_j$, since Q projects onto vectors that are constant on each subgraph and i and j lie in the same subgraph. Therefore
$$e_k^T BQu = (e_i - e_j)^T Qu = 0 = (\Pi)_k Bu = (e_k^T \Pi) Bu.$$
If the edge $k = (i, j)$ is an external edge and $V_l$, $V_m$ are the subsets of vertices as in the definition of $(\Pi)_k$ in (2.14), we have
$$e_k^T BQu = (e_i - e_j)^T Qu = (Qu)_i - (Qu)_j = \frac{1}{|V_l|}\mathbf{1}_l^T I_{l,V}^T u - \frac{1}{|V_m|}\mathbf{1}_m^T I_{m,V}^T u = (u_i - u_j) + \Big(\frac{1}{|V_l|}\mathbf{1}_l^T I_{l,V}^T u - u_i\Big) - \Big(\frac{1}{|V_m|}\mathbf{1}_m^T I_{m,V}^T u - u_j\Big). \qquad (2.16)$$
Here $|V_l|$ and $|V_m|$ are the numbers of vertices in the subgraphs containing the vertices i and j. Comparing the three terms in (2.16) with the formula defining Π, it suffices to show that
$$u_i - u_j = e_k^T Bu; \qquad (2.17)$$
$$\frac{1}{|V_l|}\mathbf{1}_l^T I_{l,V}^T u - u_i = C_{l,i}^T Bu; \qquad (2.18)$$
$$\frac{1}{|V_m|}\mathbf{1}_m^T I_{m,V}^T u - u_j = C_{m,j}^T Bu. \qquad (2.19)$$
The first relation (2.17) is immediate. We now use the discrete Taylor expansion (2.7) and the properties of the indicator matrices from the lemma above. Noting that $u_i = u^T e_i = (u^T I_{l,V})(I_{l,V}^T e_i)$, we prove the second relation (2.18) as follows.

$$\frac{1}{|V_l|}\mathbf{1}_l^T I_{l,V}^T u - u_i = \frac{1}{|V_l|}\mathbf{1}_l^T \tilde A_l^{-1} \hat A_l I_{l,V}^T u = \frac{1}{|V_l|}\mathbf{1}_l^T \tilde A_l^{-1} I_{l,V}^T B^T I_{l,E} I_{l,E}^T \widetilde D I_{l,E} I_{l,E}^T B I_{l,V} I_{l,V}^T u = \frac{1}{|V_l|}\mathbf{1}_l^T \tilde A_l^{-1} I_{l,V}^T B^T I_{l,E} I_{l,E}^T \widetilde D I_{l,E} I_{l,E}^T B u = \frac{1}{|V_l|}\mathbf{1}_l^T \tilde A_l^{-1} I_{l,V}^T B^T I_{l,E} I_{l,E}^T \widetilde D B u = C_{l,i}^T Bu.$$
Analogous arguments work for the third identity (2.19), and this concludes the proof of the theorem.

Let A be a graph Laplacian that is also an M-matrix; then all off-diagonal entries are non-positive and the weights of all edges in the underlying graph are positive. Applying the theorem above, the norm $\|Q\|_A$ can be estimated in the following way.

Lemma 2.3.6. Let $A = B^T DB$ be the graph Laplacian of a connected graph, with the weight matrix D satisfying $D_{kk} > 0$ for all k. Then the energy norm (A-seminorm) of the projection Q, as defined in (2.5), can be bounded as
$$\|Q\|_A^2 \le \rho(X), \qquad (2.20)$$
where $X = D^{1/2}\Pi D^{-1}\Pi^T D^{1/2}$ and $\rho(\cdot)$ is the spectral radius of a square matrix.

Proof. By the definition of the projection Q and the commutative property (2.15), we have
$$\|Qv\|_A^2 = (DBQv, BQv) = (D\Pi Bv, \Pi Bv) = (D\Pi D^{-1/2} D^{1/2} Bv,\ \Pi D^{-1/2} D^{1/2} Bv) \le \rho(D^{1/2}\Pi D^{-1}\Pi^T D^{1/2})\,(D^{1/2}Bv, D^{1/2}Bv) = \rho(X)\,\|v\|_A^2.$$
Therefore $\|Q\|_A^2 \le \rho(X)$.

Remark. We emphasize here that the matrix D in the above proof is the actual weight matrix of the graph Laplacian A, while Π depends on another diagonal matrix $\widetilde D$, which can be different from D.

Numerical tests show that different choices of the matrix $\widetilde D$, which is used in the construction of the nonzero rows of X, can affect the sharpness of the estimate in (2.20).

Since the norm $\|Q\|_A$ is bounded by $\rho(X)$ and the matrix X is formulated explicitly, the next step is to estimate the largest eigenvalue of the real symmetric matrix X. Typically, the fewer assumptions are made on the graph and the partitioning that determine the matrix X, the more difficult it is to obtain a sharp bound on the largest eigenvalue of X. For a general graph on which all weights are positive, we can use the $\ell^\infty$-induced norm of X to bound its largest eigenvalue. We first give a bound on the entries of X. Using the Cauchy-Schwarz inequality for the semi-inner product defined by the symmetric positive semidefinite matrix X, we get
$$X_{kl}^2 = \big((Xe_k, e_l)\big)^2 \le (Xe_k, e_k)(Xe_l, e_l) = X_{kk} X_{ll},$$
which implies that $|X_{kl}| \le \max\{X_{kk}, X_{ll}\}$. This statement is true for any combination of k and l; however, for off-diagonal entries of X, where $k \neq l$, we can obtain a better bound by using an orthogonality property of the commutative quantity Π. By the definition of X in Lemma 2.3.6, an off-diagonal entry of X, e.g., $X_{kl}$, can be considered as the inner product of the k-th and l-th rows of the matrix $D^{1/2}\Pi D^{-1/2}$, and it is zero if either k or l is an internal edge, since the corresponding row is then a zero row. Let k denote the index of a nonzero row of $D^{1/2}\Pi D^{-1/2}$. Then
$$(D^{1/2}\Pi D^{-1/2})_k = e_k^T + D_{kk}^{1/2}\, C_{l,i}^T D^{-1/2} - D_{kk}^{1/2}\, C_{m,j}^T D^{-1/2},$$
where $D_{kk}$ is the k-th diagonal entry of D. The three vectors on the right-hand side of this equality are mutually orthogonal, because definition (2.13) implies that $(C_{l,i})_n = 0$ for $n \notin E_l$, and that in turn implies
$$e_k^T D^{-1/2} C_{l,i} = 0, \qquad e_k^T D^{-1/2} C_{m,j} = 0, \qquad C_{m,j}^T D^{-1} C_{l,i} = 0. \qquad (2.21)$$

Therefore, for any $k \neq l$, the entry $X_{kl}$ is bounded as follows:
$$X_{kl}^2 = \big((Xe_k, e_l)\big)^2 = \big(D^{-1/2}\Pi^T D^{1/2} e_k,\ D^{-1/2}\Pi^T D^{1/2} e_l\big)^2 = \big(D^{-1/2}\Pi^T D^{1/2} e_k - e_k,\ D^{-1/2}\Pi^T D^{1/2} e_l - e_l\big)^2 \le \big\|D^{-1/2}\Pi^T D^{1/2} e_k - e_k\big\|^2\, \big\|D^{-1/2}\Pi^T D^{1/2} e_l - e_l\big\|^2 = \big((Xe_k, e_k) - 1\big)\big((Xe_l, e_l) - 1\big) = (X_{kk} - 1)(X_{ll} - 1). \qquad (2.22)$$
Here we use the fact that D is a diagonal matrix, and therefore $(D^{-1/2}\Pi^T D^{1/2} e_k, e_k) = 1$, to reach the bound on $X_{kl}^2$ indicated in (2.22), which in turn leads to
$$|X_{kl}| \le \max\{X_{kk}, X_{ll}\} - 1, \qquad k \neq l, \quad X_{kl} \neq 0.$$
This leads to the following bound:
$$\|Q\|_A^2 \le \rho(X) \le \|X\|_{\ell^\infty} \le \max_k \Big( |N_X(k)|\,\big(\max_{l \in N_X(k)} X_{ll} - 1\big) + 1 \Big), \qquad (2.23)$$
where $N_X(k) = \{j : X_{kj} \neq 0\}$. We can estimate the quantity $|N_X(k)|$ by counting the number of nonzero entries in a row of X. Let k be the index of a nonzero row of X. Then $X_{kj}$ is nonzero only if the k-th and the j-th edges connect to a common aggregate, so $|N_X(k)|$ is at most the total number of external edges that connect to the subgraphs $G_l$ and $G_m$, the two subgraphs joined by the edge k. Although the bound in (2.23) is not very sharp, it provides useful insight into strategies for generating a partitioning with a small bound. The bound in (2.23) can be viewed as a product of two factors. One is the maximal value of the diagonal entries, which can be controlled by the shape of the subgraphs; moreover, we show later that the more branches a spanning tree of a subgraph has, the smaller the resulting value $X_{kk}$. The other factor is the number of external edges that connect to a subgraph, for which the length of the boundary of a subdomain can be considered as the continuous analogue. Assume that we fix a coarsening factor, i.e., the average number of vertices per subgraph. Then the term $|N_X(k)|$ is minimized if all subgraphs have the smallest possible perimeter, which implies that the partitioning should produce shape-regular subgraphs, whose continuous analogue is a ball.
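The following sketch pulls the pieces together on a small example: it assembles Π row by row according to (2.13)-(2.14) with the choice $\widetilde D = D$, checks the commutative property (2.15), and verifies the bound (2.20) on random vectors. The graph, the partitioning, and all names are hypothetical, intended only to illustrate the construction.

```python
import numpy as np

def C_vec(l_verts, i_global, edges, B, Dt):
    """C_{l,i} from (2.13) for aggregate l_verts and vertex i_global, with chosen weights Dt."""
    loc = {v: a for a, v in enumerate(l_verts)}                       # global -> local vertex index
    loc_edges = [k for k, (p, q) in enumerate(edges) if p in loc and q in loc]
    B_l = B[np.ix_(loc_edges, l_verts)]                                # B_l = I_{l,E}^T B I_{l,V}
    Dt_l = Dt[np.ix_(loc_edges, loc_edges)]                            # local block of the chosen weights
    e_i = np.zeros(len(l_verts)); e_i[loc[i_global]] = 1.0
    A_til = B_l.T @ Dt_l @ B_l + np.outer(e_i, e_i)                    # \tilde A_l
    c_loc = B_l @ np.linalg.solve(A_til, np.ones(len(l_verts)))        # B_l \tilde A_l^{-1} 1_l
    C = np.zeros(len(edges)); C[loc_edges] = c_loc                     # embed via I_{l,E}
    return (Dt @ C) / len(l_verts)

edges   = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
weights = np.array([1.0, 2.0, 1.0, 0.5, 1.5])
aggs    = [[0, 1], [2, 3]]                                             # a matching-type partitioning
n = 4

B = np.zeros((len(edges), n))
for k, (i, j) in enumerate(edges):
    B[k, i], B[k, j] = 1.0, -1.0
D = np.diag(weights)
A = B.T @ D @ B

P = np.zeros((n, len(aggs)))
for a, Va in enumerate(aggs):
    P[Va, a] = 1.0
Q = P @ np.linalg.inv(P.T @ P) @ P.T

which = {v: a for a, Va in enumerate(aggs) for v in Va}
Pi = np.zeros((len(edges), len(edges)))
for k, (i, j) in enumerate(edges):
    l, m = which[i], which[j]
    if l != m:                                                         # external edge: row (2.14)
        Pi[k, k] = 1.0
        Pi[k] += C_vec(aggs[l], i, edges, B, D) - C_vec(aggs[m], j, edges, B, D)

print(np.allclose(Pi @ B, B @ Q))                                      # commutativity (2.15)

Dh, Dinv = np.diag(np.sqrt(weights)), np.diag(1.0 / weights)
X = Dh @ Pi @ Dinv @ Pi.T @ Dh
rho_X = np.max(np.linalg.eigvalsh(X))
rng = np.random.default_rng(1)
ok = all((Q @ u) @ A @ (Q @ u) <= rho_X * (u @ A @ u) + 1e-10
         for u in rng.standard_normal((200, n)))
print(ok)                                                              # consistent with (2.20)
```

Replacing the last argument D of C_vec by another admissible choice of $\widetilde D$ changes Π and X but, by Lemma 2.3.2, not the commutativity.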

We remark that, as shown in (2.23), the $\|Q\|_A$ norm is bounded through a chain of three inequalities, via two intermediate quantities, and equality in these inequalities cannot be attained simultaneously, except for some simple graphs, e.g., links (graphs whose vertices have degree one or two). Nevertheless, the bound can be used in various cases, and we discuss the following two in detail later.

i. The inequality $\|Q\|_A^2 \le \rho(X)$ can be considered a global estimate, and a better estimate can be obtained if, e.g., the sparsity pattern of X and the values of its entries are simple. For matching (pairwise partitioning), or aligned aggregation into links of m vertices on hypercubic grids of any dimension, the bound (2.23) is sharp, which helps in constructing fast-converging multilevel methods. We discuss sufficient and necessary conditions on the partitioning of an unweighted graph that result in a small $\|Q\|_A$ norm, and we pay special attention to the sharpness of this global estimate, so that the resulting bound can be used in multilevel methods, e.g., the AMLI method, that are sensitive to the actual magnitude of the $\|Q\|_A$ norm. This is discussed in detail in Section 2.4.

ii. The inequality $\rho(X) \le \|X\|_{\ell^\infty}$ can be considered a local estimate, since the $\ell^\infty$ norm tests the matrix X row by row, and each row can be evaluated independently. Therefore we develop a local measure $\pi_k$ that represents this local property, and we optimize the partitioning so that $\pi_k$ is small over the whole graph.

Local Stability Measure and a Subgraph Partitioning Algorithm

We first define a local measure $\pi_k$ and then show how it is related to the $\|Q\|_A$ norm. For a fixed edge $k \in E$, define $\pi_k$ as
$$\pi_k = \begin{cases} X_{kk} - 1, & X_{kk} > 0, \\ 0, & X_{kk} = 0. \end{cases} \qquad (2.24)$$
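A minimal sketch of how the local measures and the resulting bound can be evaluated is given below; it assumes a dense matrix X as computed in the previous sketch, and the function name is illustrative rather than part of the dissertation's software.

```python
import numpy as np

def local_measures(X, tol=1e-12):
    """Local stability measures pi_k from (2.24) and the l_infty-type bound from (2.23)."""
    d = np.diag(X)
    pi = np.where(d > tol, d - 1.0, 0.0)                  # pi_k = X_kk - 1, or 0 for zero rows
    bound = 1.0
    for k in range(X.shape[0]):
        N_k = np.flatnonzero(np.abs(X[k]) > tol)          # N_X(k) = {j : X_kj != 0}
        if N_k.size:
            bound = max(bound, N_k.size * (d[N_k].max() - 1.0) + 1.0)
    return pi, bound
```

Calling local_measures(X) with the X from the previous sketch returns one $\pi_k$ per edge (nonzero only on external edges) together with the global bound from (2.23); edges with large $\pi_k$ are natural candidates for the local optimization performed by the subgraph partitioning algorithm discussed in this section.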


Domain decomposition on different levels of the Jacobi-Davidson method hapter 5 Domain decomposition on different levels of the Jacobi-Davidson method Abstract Most computational work of Jacobi-Davidson [46], an iterative method suitable for computing solutions of large dimensional

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

GRAPH PARTITIONING WITH MATRIX COEFFICIENTS FOR SYMMETRIC POSITIVE DEFINITE LINEAR SYSTEMS

GRAPH PARTITIONING WITH MATRIX COEFFICIENTS FOR SYMMETRIC POSITIVE DEFINITE LINEAR SYSTEMS GRAPH PARTITIONING WITH MATRIX COEFFICIENTS FOR SYMMETRIC POSITIVE DEFINITE LINEAR SYSTEMS EUGENE VECHARYNSKI, YOUSEF SAAD, AND MASHA SOSONKINA Abstract. Prior to the parallel solution of a large linear

More information

Variational Formulations

Variational Formulations Chapter 2 Variational Formulations In this chapter we will derive a variational (or weak) formulation of the elliptic boundary value problem (1.4). We will discuss all fundamental theoretical results that

More information

Lecture 9 Approximations of Laplace s Equation, Finite Element Method. Mathématiques appliquées (MATH0504-1) B. Dewals, C.

Lecture 9 Approximations of Laplace s Equation, Finite Element Method. Mathématiques appliquées (MATH0504-1) B. Dewals, C. Lecture 9 Approximations of Laplace s Equation, Finite Element Method Mathématiques appliquées (MATH54-1) B. Dewals, C. Geuzaine V1.2 23/11/218 1 Learning objectives of this lecture Apply the finite difference

More information

Chapter 7 Iterative Techniques in Matrix Algebra

Chapter 7 Iterative Techniques in Matrix Algebra Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition

More information

A Generalized Eigensolver Based on Smoothed Aggregation (GES-SA) for Initializing Smoothed Aggregation Multigrid (SA)

A Generalized Eigensolver Based on Smoothed Aggregation (GES-SA) for Initializing Smoothed Aggregation Multigrid (SA) NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 2007; 07: 6 [Version: 2002/09/8 v.02] A Generalized Eigensolver Based on Smoothed Aggregation (GES-SA) for Initializing Smoothed Aggregation

More information

AN AGGREGATION MULTILEVEL METHOD USING SMOOTH ERROR VECTORS

AN AGGREGATION MULTILEVEL METHOD USING SMOOTH ERROR VECTORS AN AGGREGATION MULTILEVEL METHOD USING SMOOTH ERROR VECTORS EDMOND CHOW Abstract. Many algebraic multilevel methods for solving linear systems assume that the slowto-converge, or algebraically smooth error

More information

1 Review: symmetric matrices, their eigenvalues and eigenvectors

1 Review: symmetric matrices, their eigenvalues and eigenvectors Cornell University, Fall 2012 Lecture notes on spectral methods in algorithm design CS 6820: Algorithms Studying the eigenvalues and eigenvectors of matrices has powerful consequences for at least three

More information

Linear graph theory. Basic definitions of linear graphs

Linear graph theory. Basic definitions of linear graphs Linear graph theory Linear graph theory, a branch of combinatorial mathematics has proved to be a useful tool for the study of large or complex systems. Leonhard Euler wrote perhaps the first paper on

More information

Math 302 Outcome Statements Winter 2013

Math 302 Outcome Statements Winter 2013 Math 302 Outcome Statements Winter 2013 1 Rectangular Space Coordinates; Vectors in the Three-Dimensional Space (a) Cartesian coordinates of a point (b) sphere (c) symmetry about a point, a line, and a

More information

c 2010 Society for Industrial and Applied Mathematics

c 2010 Society for Industrial and Applied Mathematics SIAM J. SCI. COMPUT. Vol. 32, No. 1, pp. 40 61 c 2010 Society for Industrial and Applied Mathematics SMOOTHED AGGREGATION MULTIGRID FOR MARKOV CHAINS H. DE STERCK, T. A. MANTEUFFEL, S. F. MCCORMICK, K.

More information

SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS

SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS TSOGTGEREL GANTUMUR Abstract. After establishing discrete spectra for a large class of elliptic operators, we present some fundamental spectral properties

More information

From Completing the Squares and Orthogonal Projection to Finite Element Methods

From Completing the Squares and Orthogonal Projection to Finite Element Methods From Completing the Squares and Orthogonal Projection to Finite Element Methods Mo MU Background In scientific computing, it is important to start with an appropriate model in order to design effective

More information

Aggregation-based algebraic multigrid

Aggregation-based algebraic multigrid Aggregation-based algebraic multigrid from theory to fast solvers Yvan Notay Université Libre de Bruxelles Service de Métrologie Nucléaire CEMRACS, Marseille, July 18, 2012 Supported by the Belgian FNRS

More information

1. Fast Iterative Solvers of SLE

1. Fast Iterative Solvers of SLE 1. Fast Iterative Solvers of crucial drawback of solvers discussed so far: they become slower if we discretize more accurate! now: look for possible remedies relaxation: explicit application of the multigrid

More information

Geometric Multigrid Methods

Geometric Multigrid Methods Geometric Multigrid Methods Susanne C. Brenner Department of Mathematics and Center for Computation & Technology Louisiana State University IMA Tutorial: Fast Solution Techniques November 28, 2010 Ideas

More information

Overlapping Schwarz preconditioners for Fekete spectral elements

Overlapping Schwarz preconditioners for Fekete spectral elements Overlapping Schwarz preconditioners for Fekete spectral elements R. Pasquetti 1, L. F. Pavarino 2, F. Rapetti 1, and E. Zampieri 2 1 Laboratoire J.-A. Dieudonné, CNRS & Université de Nice et Sophia-Antipolis,

More information

A MULTIGRID ALGORITHM FOR. Richard E. Ewing and Jian Shen. Institute for Scientic Computation. Texas A&M University. College Station, Texas SUMMARY

A MULTIGRID ALGORITHM FOR. Richard E. Ewing and Jian Shen. Institute for Scientic Computation. Texas A&M University. College Station, Texas SUMMARY A MULTIGRID ALGORITHM FOR THE CELL-CENTERED FINITE DIFFERENCE SCHEME Richard E. Ewing and Jian Shen Institute for Scientic Computation Texas A&M University College Station, Texas SUMMARY In this article,

More information

Algebraic multilevel preconditioners for the graph Laplacian based on matching in graphs

Algebraic multilevel preconditioners for the graph Laplacian based on matching in graphs www.oeaw.ac.at Algebraic multilevel preconditioners for the graph Laplacian based on matching in graphs J. Brannic, Y. Chen, J. Kraus, L. Ziatanov RICAM-Report 202-28 www.ricam.oeaw.ac.at ALGEBRAIC MULTILEVEL

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf-Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2018/19 Part 4: Iterative Methods PD

More information

OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative methods ffl Krylov subspace methods ffl Preconditioning techniques: Iterative methods ILU

OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative methods ffl Krylov subspace methods ffl Preconditioning techniques: Iterative methods ILU Preconditioning Techniques for Solving Large Sparse Linear Systems Arnold Reusken Institut für Geometrie und Praktische Mathematik RWTH-Aachen OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative

More information

MATH 240 Spring, Chapter 1: Linear Equations and Matrices

MATH 240 Spring, Chapter 1: Linear Equations and Matrices MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear

More information

Algebraic multigrid for moderate order finite elements

Algebraic multigrid for moderate order finite elements Algebraic multigrid for moderate order finite elements Artem Napov and Yvan Notay Service de Métrologie Nucléaire Université Libre de Bruxelles (C.P. 165/84) 50, Av. F.D. Roosevelt, B-1050 Brussels, Belgium.

More information

Preliminaries and Complexity Theory

Preliminaries and Complexity Theory Preliminaries and Complexity Theory Oleksandr Romanko CAS 746 - Advanced Topics in Combinatorial Optimization McMaster University, January 16, 2006 Introduction Book structure: 2 Part I Linear Algebra

More information

Lecture 18 Classical Iterative Methods

Lecture 18 Classical Iterative Methods Lecture 18 Classical Iterative Methods MIT 18.335J / 6.337J Introduction to Numerical Methods Per-Olof Persson November 14, 2006 1 Iterative Methods for Linear Systems Direct methods for solving Ax = b,

More information

STRONG FORMS OF ORTHOGONALITY FOR SETS OF HYPERCUBES

STRONG FORMS OF ORTHOGONALITY FOR SETS OF HYPERCUBES The Pennsylvania State University The Graduate School Department of Mathematics STRONG FORMS OF ORTHOGONALITY FOR SETS OF HYPERCUBES A Dissertation in Mathematics by John T. Ethier c 008 John T. Ethier

More information

MULTIGRID PRECONDITIONING IN H(div) ON NON-CONVEX POLYGONS* Dedicated to Professor Jim Douglas, Jr. on the occasion of his seventieth birthday.

MULTIGRID PRECONDITIONING IN H(div) ON NON-CONVEX POLYGONS* Dedicated to Professor Jim Douglas, Jr. on the occasion of his seventieth birthday. MULTIGRID PRECONDITIONING IN H(div) ON NON-CONVEX POLYGONS* DOUGLAS N ARNOLD, RICHARD S FALK, and RAGNAR WINTHER Dedicated to Professor Jim Douglas, Jr on the occasion of his seventieth birthday Abstract

More information

ITERATIVE METHODS FOR NONLINEAR ELLIPTIC EQUATIONS

ITERATIVE METHODS FOR NONLINEAR ELLIPTIC EQUATIONS ITERATIVE METHODS FOR NONLINEAR ELLIPTIC EQUATIONS LONG CHEN In this chapter we discuss iterative methods for solving the finite element discretization of semi-linear elliptic equations of the form: find

More information

Linear algebra and applications to graphs Part 1

Linear algebra and applications to graphs Part 1 Linear algebra and applications to graphs Part 1 Written up by Mikhail Belkin and Moon Duchin Instructor: Laszlo Babai June 17, 2001 1 Basic Linear Algebra Exercise 1.1 Let V and W be linear subspaces

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 19: Computing the SVD; Sparse Linear Systems Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

CLASSICAL ITERATIVE METHODS

CLASSICAL ITERATIVE METHODS CLASSICAL ITERATIVE METHODS LONG CHEN In this notes we discuss classic iterative methods on solving the linear operator equation (1) Au = f, posed on a finite dimensional Hilbert space V = R N equipped

More information

U.C. Berkeley CS270: Algorithms Lecture 21 Professor Vazirani and Professor Rao Last revised. Lecture 21

U.C. Berkeley CS270: Algorithms Lecture 21 Professor Vazirani and Professor Rao Last revised. Lecture 21 U.C. Berkeley CS270: Algorithms Lecture 21 Professor Vazirani and Professor Rao Scribe: Anupam Last revised Lecture 21 1 Laplacian systems in nearly linear time Building upon the ideas introduced in the

More information

multigrid, algebraic multigrid, AMG, convergence analysis, preconditioning, ag- gregation Ax = b (1.1)

multigrid, algebraic multigrid, AMG, convergence analysis, preconditioning, ag- gregation Ax = b (1.1) ALGEBRAIC MULTIGRID FOR MODERATE ORDER FINITE ELEMENTS ARTEM NAPOV AND YVAN NOTAY Abstract. We investigate the use of algebraic multigrid (AMG) methods for the solution of large sparse linear systems arising

More information

Using an Auction Algorithm in AMG based on Maximum Weighted Matching in Matrix Graphs

Using an Auction Algorithm in AMG based on Maximum Weighted Matching in Matrix Graphs Using an Auction Algorithm in AMG based on Maximum Weighted Matching in Matrix Graphs Pasqua D Ambra Institute for Applied Computing (IAC) National Research Council of Italy (CNR) pasqua.dambra@cnr.it

More information

Aspects of Multigrid

Aspects of Multigrid Aspects of Multigrid Kees Oosterlee 1,2 1 Delft University of Technology, Delft. 2 CWI, Center for Mathematics and Computer Science, Amsterdam, SIAM Chapter Workshop Day, May 30th 2018 C.W.Oosterlee (CWI)

More information

Scientific Computing WS 2017/2018. Lecture 18. Jürgen Fuhrmann Lecture 18 Slide 1

Scientific Computing WS 2017/2018. Lecture 18. Jürgen Fuhrmann Lecture 18 Slide 1 Scientific Computing WS 2017/2018 Lecture 18 Jürgen Fuhrmann juergen.fuhrmann@wias-berlin.de Lecture 18 Slide 1 Lecture 18 Slide 2 Weak formulation of homogeneous Dirichlet problem Search u H0 1 (Ω) (here,

More information

CS168: The Modern Algorithmic Toolbox Lectures #11 and #12: Spectral Graph Theory

CS168: The Modern Algorithmic Toolbox Lectures #11 and #12: Spectral Graph Theory CS168: The Modern Algorithmic Toolbox Lectures #11 and #12: Spectral Graph Theory Tim Roughgarden & Gregory Valiant May 2, 2016 Spectral graph theory is the powerful and beautiful theory that arises from

More information

Robust solution of Poisson-like problems with aggregation-based AMG

Robust solution of Poisson-like problems with aggregation-based AMG Robust solution of Poisson-like problems with aggregation-based AMG Yvan Notay Université Libre de Bruxelles Service de Métrologie Nucléaire Paris, January 26, 215 Supported by the Belgian FNRS http://homepages.ulb.ac.be/

More information

Lecture 9: Numerical Linear Algebra Primer (February 11st)

Lecture 9: Numerical Linear Algebra Primer (February 11st) 10-725/36-725: Convex Optimization Spring 2015 Lecture 9: Numerical Linear Algebra Primer (February 11st) Lecturer: Ryan Tibshirani Scribes: Avinash Siravuru, Guofan Wu, Maosheng Liu Note: LaTeX template

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Geometric Modeling Summer Semester 2010 Mathematical Tools (1)

Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Recap: Linear Algebra Today... Topics: Mathematical Background Linear algebra Analysis & differential geometry Numerical techniques Geometric

More information

Computers and Mathematics with Applications

Computers and Mathematics with Applications Computers and Mathematics with Applications 68 (2014) 1151 1160 Contents lists available at ScienceDirect Computers and Mathematics with Applications journal homepage: www.elsevier.com/locate/camwa A GPU

More information

Numerical Methods I Non-Square and Sparse Linear Systems

Numerical Methods I Non-Square and Sparse Linear Systems Numerical Methods I Non-Square and Sparse Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 25th, 2014 A. Donev (Courant

More information

PARTITION OF UNITY FOR THE STOKES PROBLEM ON NONMATCHING GRIDS

PARTITION OF UNITY FOR THE STOKES PROBLEM ON NONMATCHING GRIDS PARTITION OF UNITY FOR THE STOES PROBLEM ON NONMATCHING GRIDS CONSTANTIN BACUTA AND JINCHAO XU Abstract. We consider the Stokes Problem on a plane polygonal domain Ω R 2. We propose a finite element method

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date April 29, 23 2 Contents Motivation for the course 5 2 Euclidean n dimensional Space 7 2. Definition of n Dimensional Euclidean Space...........

More information

Math 471 (Numerical methods) Chapter 3 (second half). System of equations

Math 471 (Numerical methods) Chapter 3 (second half). System of equations Math 47 (Numerical methods) Chapter 3 (second half). System of equations Overlap 3.5 3.8 of Bradie 3.5 LU factorization w/o pivoting. Motivation: ( ) A I Gaussian Elimination (U L ) where U is upper triangular

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Research Reports on Mathematical and Computing Sciences

Research Reports on Mathematical and Computing Sciences ISSN 1342-284 Research Reports on Mathematical and Computing Sciences Exploiting Sparsity in Linear and Nonlinear Matrix Inequalities via Positive Semidefinite Matrix Completion Sunyoung Kim, Masakazu

More information

SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS. Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA

SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS. Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA 1 SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA 2 OUTLINE Sparse matrix storage format Basic factorization

More information

On the Numerical Evaluation of Fractional Sobolev Norms. Carsten Burstedde. no. 268

On the Numerical Evaluation of Fractional Sobolev Norms. Carsten Burstedde. no. 268 On the Numerical Evaluation of Fractional Sobolev Norms Carsten Burstedde no. 268 Diese Arbeit ist mit Unterstützung des von der Deutschen Forschungsgemeinschaft getragenen Sonderforschungsbereiches 611

More information

In English, this means that if we travel on a straight line between any two points in C, then we never leave C.

In English, this means that if we travel on a straight line between any two points in C, then we never leave C. Convex sets In this section, we will be introduced to some of the mathematical fundamentals of convex sets. In order to motivate some of the definitions, we will look at the closest point problem from

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

Simple Examples on Rectangular Domains

Simple Examples on Rectangular Domains 84 Chapter 5 Simple Examples on Rectangular Domains In this chapter we consider simple elliptic boundary value problems in rectangular domains in R 2 or R 3 ; our prototype example is the Poisson equation

More information

Algebraic multigrid and multilevel methods A general introduction. Outline. Algebraic methods: field of application

Algebraic multigrid and multilevel methods A general introduction. Outline. Algebraic methods: field of application Algebraic multigrid and multilevel methods A general introduction Yvan Notay ynotay@ulbacbe Université Libre de Bruxelles Service de Métrologie Nucléaire May 2, 25, Leuven Supported by the Fonds National

More information

Solving Sparse Linear Systems: Iterative methods

Solving Sparse Linear Systems: Iterative methods Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccs Lecture Notes for Unit VII Sparse Matrix Computations Part 2: Iterative Methods Dianne P. O Leary c 2008,2010

More information

Solving Sparse Linear Systems: Iterative methods

Solving Sparse Linear Systems: Iterative methods Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccswebpage Lecture Notes for Unit VII Sparse Matrix Computations Part 2: Iterative Methods Dianne P. O Leary

More information

Iterative Methods for Linear Systems

Iterative Methods for Linear Systems Iterative Methods for Linear Systems 1. Introduction: Direct solvers versus iterative solvers In many applications we have to solve a linear system Ax = b with A R n n and b R n given. If n is large the

More information

10.6 ITERATIVE METHODS FOR DISCRETIZED LINEAR EQUATIONS

10.6 ITERATIVE METHODS FOR DISCRETIZED LINEAR EQUATIONS 10.6 ITERATIVE METHODS FOR DISCRETIZED LINEAR EQUATIONS 769 EXERCISES 10.5.1 Use Taylor expansion (Theorem 10.1.2) to give a proof of Theorem 10.5.3. 10.5.2 Give an alternative to Theorem 10.5.3 when F

More information

An efficient multigrid solver based on aggregation

An efficient multigrid solver based on aggregation An efficient multigrid solver based on aggregation Yvan Notay Université Libre de Bruxelles Service de Métrologie Nucléaire Graz, July 4, 2012 Co-worker: Artem Napov Supported by the Belgian FNRS http://homepages.ulb.ac.be/

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N.

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N. Math 410 Homework Problems In the following pages you will find all of the homework problems for the semester. Homework should be written out neatly and stapled and turned in at the beginning of class

More information

Boundary Value Problems and Iterative Methods for Linear Systems

Boundary Value Problems and Iterative Methods for Linear Systems Boundary Value Problems and Iterative Methods for Linear Systems 1. Equilibrium Problems 1.1. Abstract setting We want to find a displacement u V. Here V is a complete vector space with a norm v V. In

More information

Matrices and Vectors

Matrices and Vectors Matrices and Vectors James K. Peterson Department of Biological Sciences and Department of Mathematical Sciences Clemson University November 11, 2013 Outline 1 Matrices and Vectors 2 Vector Details 3 Matrix

More information

EXACT DE RHAM SEQUENCES OF SPACES DEFINED ON MACRO-ELEMENTS IN TWO AND THREE SPATIAL DIMENSIONS

EXACT DE RHAM SEQUENCES OF SPACES DEFINED ON MACRO-ELEMENTS IN TWO AND THREE SPATIAL DIMENSIONS EXACT DE RHAM SEQUENCES OF SPACES DEFINED ON MACRO-ELEMENTS IN TWO AND THREE SPATIAL DIMENSIONS JOSEPH E. PASCIAK AND PANAYOT S. VASSILEVSKI Abstract. This paper proposes new finite element spaces that

More information