Bootstrap AMG Kailai Xu Stanford University July 12, 2017

AMG Components. A general AMG algorithm consists of the following components: a hierarchy of levels, a smoother, a prolongation, a restriction, and coarse grid operators. (Diagram: fine grid and coarse grid connected by the restriction $P^T$ and the prolongation $P$.)

AMG Components Explained. Consider a two-level AMG method for solving $Ax = b$; the construction extends easily to more levels. One AMG iteration has the general structure
$x \leftarrow x + S(b - Ax)$ (relaxation)
$x \leftarrow x + P B_c P^T (b - Ax)$ (correction)
If we perform the relaxation and the correction once each, the error propagation (iteration) matrix is $(I - P B_c P^T A)(I - SA)$.
Smoother. The smoother $S$ is in effect a preconditioner for the relaxation step, for example the weighted (modified) Jacobi smoother $S = \omega D^{-1}$. The relaxation step may be performed several times.

AMG Components Explained, cont. Prolongation and restriction. Prolongation is also called interpolation in parts of the literature. We usually define the restriction as the adjoint of the prolongation, i.e., if the prolongation is $P$, the restriction is $P^T$. One way to view AMG is that, given the smoother $S$, and possibly the sparsity pattern of $P$, we want to find the best $P$ so that the convergence rate is optimal; $P$ should work together with $S$ and the grid structure to yield an optimal result. Coarse grid operators. The coarse grid operator is constructed from the prolongation operator: $A_c = P^T A P$. On the coarse level we aim to solve the projected linear system $A_c x_c = P^T b$ approximately, using a preconditioner (approximate inverse) $B_c \approx A_c^{-1}$; the coarse approximation $x_c \approx B_c P^T b$ is then lifted back to the fine space as $P x_c = P B_c P^T b$.
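
To make the pieces above concrete, here is a minimal NumPy sketch of one two-grid iteration. The 1D Poisson matrix, the piecewise-constant aggregation prolongation, the weighted Jacobi smoother and the exact coarse solve are illustrative assumptions, not choices made in the slides.

# Minimal two-grid AMG sketch: relaxation + coarse grid correction.
import numpy as np

n = 64                                     # fine grid size (assumed even)
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))        # assumed 1D Poisson fine grid operator
b = np.random.rand(n)

# Prolongation P: each coarse variable interpolates to a pair of fine variables.
nc = n // 2
P = np.zeros((n, nc))
for j in range(nc):
    P[2 * j, j] = 1.0
    P[2 * j + 1, j] = 1.0

A_c = P.T @ A @ P                          # Galerkin coarse grid operator A_c = P^T A P
B_c = np.linalg.inv(A_c)                   # "exact" coarse preconditioner B_c = A_c^{-1}

omega = 2.0 / 3.0
S = omega * np.diag(1.0 / np.diag(A))      # weighted Jacobi smoother S = omega * D^{-1}

x = np.zeros(n)
for it in range(50):
    x = x + S @ (b - A @ x)                # relaxation
    x = x + P @ (B_c @ (P.T @ (b - A @ x)))  # coarse grid correction
print("residual norm:", np.linalg.norm(b - A @ x))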

Efficiency of Prolongation. According to [1], we have the following remarks on the efficiency of the prolongation operator. The notion of strength of connection used in coarsening the variables and forming the interpolation can be defined from the entries of the system matrix. The eigenvectors with eigenvalues of small absolute value are locally smooth in the directions of such strong connections. These lowest eigenvectors of the system matrix provide a sufficiently accurate local representation of the other low eigenvectors that are not effectively treated by the MG relaxation scheme.
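
A quick numerical illustration of the second remark, under the assumption of a 1D Laplacian in which every neighbor connection counts as strong: the eigenvectors belonging to the smallest eigenvalues change very little between neighbors, while the highest mode does not.

# Lowest eigenvectors of an assumed 1D Laplacian are locally smooth.
import numpy as np

n = 32
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))

vals, vecs = np.linalg.eigh(A)
for k in (0, 1, n - 1):                       # two lowest modes vs. the highest mode
    v = vecs[:, k] / np.linalg.norm(vecs[:, k])
    local_variation = np.max(np.abs(np.diff(v)))   # change across neighbor connections
    print(f"eigenvalue {vals[k]:.4f}  max neighbor difference {local_variation:.4f}")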

Bootstrap AMG. Bootstrap: not necessarily smart, but tries very hard. Bootstrap (from Wikipedia): a technique of loading a program into a computer by means of a few initial instructions that enable the introduction of the rest of the program from an input device. Bootstrap AMG: we will introduce two bootstrap methods here.

Computing the Prolongation Operator: Least Squares. We are given the sparsity pattern of $P$; for example, every fine grid variable may be interpolated as a weighted sum of its coarse neighbors. (Diagram: a fine grid point $i$ connected by weights $p_{ij}$ to coarse grid points $j$ carrying values $v^{(\kappa)}_j$.) We follow a least squares strategy: for a given set of test vectors $\{v^{(1)}, v^{(2)}, \ldots, v^{(k)}\}$ whose residuals are small, or more specifically which are algebraically smooth, i.e. $A v^{(\kappa)} \approx 0$, we minimize
$L(p_i) = \sum_{\kappa=1}^{k} \omega_\kappa \Big( v^{(\kappa)}_i - \sum_{j \in C_i} p_{ij} v^{(\kappa)}_j \Big)^2$
for every row $p_i$ of $P$, where $C_i$ is the set of coarse neighbors of the fine point $i$.
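
A sketch of the row-wise weighted least squares fit; the function name, the test vector matrix V and the coarse neighbor set Ci are illustrative assumptions, not the slides' notation.

# Least-squares fit of one interpolation row p_i.
import numpy as np

def ls_interpolation_row(V, i, Ci, weights):
    """V: n x k matrix whose columns are test vectors v^{(kappa)};
    i: index of the fine point; Ci: list of coarse neighbor indices;
    weights: length-k array of omega_kappa."""
    # Minimize sum_kappa omega_kappa * (v_i - sum_{j in Ci} p_ij v_j)^2
    W = np.sqrt(weights)                   # scale each test vector by sqrt(omega_kappa)
    M = (V[Ci, :] * W).T                   # k x |Ci| design matrix
    rhs = V[i, :] * W                      # test vector values at point i
    p_i, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return p_i                             # interpolation weights p_ij, j in Ci

# toy usage with random "test vectors" (for illustration only)
rng = np.random.default_rng(0)
V = rng.standard_normal((10, 4))
print(ls_interpolation_row(V, i=3, Ci=[2, 4], weights=np.ones(4)))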

Hierarchy Structure of the Multigrid Method. The multigrid method has a set of prolongation operators $P^1_0, P^2_1, \ldots, P^{L-1}_{L-2}$. We define the composite prolongations $P_l = P^1_0 P^2_1 \cdots P^l_{l-1}$, $l = 1, 2, \ldots, L-1$. The coarse grid operators are then naturally defined by $A_l = P_l^T A P_l$, and we have $(x_l, x_l)_{A_l} = (P_l x_l, P_l x_l)_A$. Define $T_l = P_l^T P_l$. If $A_l \omega_l = \lambda_l T_l \omega_l$, then the Rayleigh quotient of $P_l \omega_l$ satisfies
$\mathrm{RQ}(P_l \omega_l) = \dfrac{(P_l \omega_l, P_l \omega_l)_A}{(P_l \omega_l, P_l \omega_l)_2} = \lambda_l.$
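
The Rayleigh quotient identity can be checked numerically. The sketch below builds a two-level hierarchy from an assumed 1D Poisson matrix and an aggregation prolongation, then compares the generalized eigenvalue with the Rayleigh quotient of the lifted eigenvector.

# Check RQ(P_1 w) = lambda for the generalized eigenproblem A_1 w = lambda T_1 w.
import numpy as np
from scipy.linalg import eigh

n = 16
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))

P = np.zeros((n, n // 2))                  # P_1 = P^1_0, piecewise constant (assumed)
for j in range(n // 2):
    P[2 * j:2 * j + 2, j] = 1.0

A1 = P.T @ A @ P                           # A_1 = P_1^T A P_1
T1 = P.T @ P                               # T_1 = P_1^T P_1
lam, W = eigh(A1, T1)                      # generalized eigenpairs A_1 w = lambda T_1 w

w = W[:, 0]                                # lowest generalized eigenvector
u = P @ w                                  # lift to the fine level
rq = (u @ (A @ u)) / (u @ u)               # Rayleigh quotient of P_1 w
print(lam[0], rq)                          # the two numbers agree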

Computing Generalized Eigenvalues. It is convenient to use the $\omega_l$'s as test vectors instead of the eigenvectors of $A_l$; this is in fact a generalized eigenvalue problem. Assume we already have $\omega_l, \lambda_l$ (there are $k_e$ eigenvectors and eigenvalues). Note that
$A_l \omega_l = \lambda_l T_l \omega_l \quad\Longleftrightarrow\quad (P^l_{l-1})^T A_{l-1} P^l_{l-1}\,\omega_l = \lambda_l (P^l_{l-1})^T T_{l-1} P^l_{l-1}\,\omega_l,$
which indicates that $P^l_{l-1} \omega_l$ is an approximation of a generalized eigenvector on level $l-1$. Thus we can relax on the system
$(A_{l-1} - \lambda_{l-1} T_{l-1})\,\omega_{l-1} = 0, \qquad \lambda_{l-1} = \lambda_l,$
for several sweeps and then update
$\lambda_{l-1} = \dfrac{(A_{l-1}\omega_{l-1}, \omega_{l-1})}{(T_{l-1}\omega_{l-1}, \omega_{l-1})}.$

Computing Generalized Eigenvalues, cont. The algorithm consists of the following steps. At the coarsest level, compute the generalized eigenpairs of $A_{L-1}$ directly:
$A_{L-1}\,\omega^{(\kappa)}_{L-1} = \lambda^{(\kappa)}_{L-1} T_{L-1}\,\omega^{(\kappa)}_{L-1}, \qquad \kappa = 1, 2, \ldots, k_e.$
Then, for $l = L-1, L-2, \ldots, 1$, relax the linear system
$(A_{l-1} - \lambda_{l-1} T_{l-1})\,\omega_{l-1} = 0, \qquad \lambda_{l-1} = \lambda_l,$
using the scheme
$\omega_{l-1} \leftarrow \omega_{l-1} - S_l (A_{l-1} - \lambda_l T_{l-1})\,\omega_{l-1},$
where $S_l$ is a preconditioner for $A_{l-1} - \lambda_l T_{l-1}$.
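
A sketch of this coarse-to-fine refinement for two levels, again with an assumed 1D Poisson fine operator and an aggregation prolongation; the Jacobi-type preconditioner, the damping factor 0.7 and the number of sweeps are arbitrary illustrative choices.

# Refine the coarsest-level generalized eigenpair on the finer level.
import numpy as np
from scipy.linalg import eigh

n = 32
A0 = (np.diag(2.0 * np.ones(n))
      - np.diag(np.ones(n - 1), 1)
      - np.diag(np.ones(n - 1), -1))       # level 0 (finest); T_0 = I
P = np.zeros((n, n // 2))
for j in range(n // 2):
    P[2 * j:2 * j + 2, j] = 1.0

A1, T1 = P.T @ A0 @ P, P.T @ P             # level 1 operators

lam1, W1 = eigh(A1, T1)                    # coarsest-level generalized eigenpairs
lam, w = lam1[0], P @ W1[:, 0]             # lift the lowest pair to level 0

D = np.diag(A0)
for sweep in range(10):
    w = w - 0.7 * (A0 @ w - lam * w) / D   # relax (A_0 - lam T_0) w = 0, Jacobi-style
    lam = (w @ (A0 @ w)) / (w @ w)         # Rayleigh quotient update (T_0 = I)

print("refined eigenvalue:", lam, " exact:", eigh(A0, eigvals_only=True)[0])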

Algorithm Overview. The accompanying diagram illustrates the idea of the algorithm. At first we do not have any information about $A_l$, so we use a random set of test vectors to compute $P^1_0$. We then restrict the test vectors onto the coarser grid, where, after relaxation, they form a new set of test vectors on the coarse grid. For example,
$v^{(\kappa)}_i \leftarrow v^{(\kappa)}_i - \frac{1}{a_{ii}} r^{(\kappa)}_i.$
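
A sketch of the test vector setup and smoothing described above, assuming a 1D Poisson matrix; the update is exactly the Jacobi step v_i <- v_i - r_i / a_ii applied to every test vector at once.

# Generate random test vectors and smooth them by Jacobi sweeps on A v = 0.
import numpy as np

n, k = 32, 4
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))
rng = np.random.default_rng(1)
V = rng.standard_normal((n, k))          # random initial test vectors, one per column

D = np.diag(A)
for sweep in range(5):
    R = A @ V                            # residuals of A v = 0 for every test vector
    V = V - R / D[:, None]               # v_i <- v_i - r_i / a_ii
print("residual norms after smoothing:", np.linalg.norm(A @ V, axis=0))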

Yet Another Bootstrap Algorithm. Consider an algebraically smooth error $e$, for which
$0 \approx a_{ii} e_i + \sum_{j \in N_i} a_{ij} e_j. \qquad (1)$
We therefore intend to compute prolongation weights $\omega_{ij}$ such that $e_i = \sum_{j \in P_i} \omega_{ij} e_j$, where $P_i$ is the set of C nodes around $i$. In consideration of (1), we have
$e_i \approx -\frac{1}{a_{ii}} \sum_{j \in N_i} a_{ij} e_j \approx -\frac{1}{a_{ii}} \frac{\sum_{j \in N_i} a_{ij}}{\sum_{j \in P_i} a_{ij}} \sum_{j \in P_i} a_{ij} e_j.$
Thus we let
$\omega_{ij} = -\frac{a_{ij}}{a_{ii}} \frac{\sum_{k \in N_i} a_{ik}}{\sum_{k \in P_i} a_{ik}}, \qquad i \in F,\ j \in C.$
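
A sketch of these weights (classical direct interpolation) for a single row; the F/C splitting and the neighbor sets passed in are illustrative values, not part of the slides.

# Direct interpolation weights omega_ij for one fine point i.
import numpy as np

def direct_interpolation_row(A, i, Ni, Pi):
    """Ni = all neighbors of i, Pi = coarse (C) neighbors of i."""
    scale = sum(A[i, k] for k in Ni) / sum(A[i, k] for k in Pi)
    return {j: -A[i, j] / A[i, i] * scale for j in Pi}

# toy usage on a 1D Poisson row: i = 2 is a fine point, its neighbors 1 and 3 are coarse
n = 5
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))
print(direct_interpolation_row(A, i=2, Ni=[1, 3], Pi=[1, 3]))   # {1: 0.5, 3: 0.5}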

References. [1] Achi Brandt, James Brannick, Karsten Kahl, and Irene Livshits. Bootstrap AMG. SIAM Journal on Scientific Computing, 33(2):612-632, 2011.