Scientific Computing with Case Studies, SIAM Press, 2009
http://www.cs.umd.edu/users/oleary/sccswebpage
Lecture Notes for Unit VII: Sparse Matrix Computations, Part 1: Direct Methods
Dianne P. O'Leary, © 2008

Solving Sparse Linear Systems

Assumed background from Chapter 5:
- Gauss elimination and the LU decomposition
- (Partial) pivoting for stability
- Cholesky decomposition of a symmetric positive definite matrix
- Forward and backward substitution for solving lower or upper triangular systems

The plan:
- How to store a sparse matrix
- Sparse direct methods
- The difficulty with fill-in
- Some strategies to reduce fill-in
- Iterative methods for solving linear systems:
  - Basic (slow) iterations: Jacobi, Gauss-Seidel, SOR
  - Krylov subspace methods
  - Preconditioning (where direct meets iterative)
  - A special purpose method: Multigrid

Note: There are sparse direct and Krylov subspace methods for eigenvalue problems, but we will just mention them in passing.

Contrasting direct and iterative methods

- Direct: Usually the method of choice for 2-d PDE problems.
  Iterative: Usually the method of choice for 3-d PDE problems.
- Direct: Except for round-off errors, we produce an exact solution to our problem.
  Iterative: We produce an approximate solution with a given tolerance.
- Direct: We usually require more storage (sometimes much more) than the original matrix.
  Iterative: We only require a few extra vectors of storage.
- Direct: The elements of the matrix A are modified by the algorithm.
  Iterative: We don't need to store the elements of A explicitly; we only need a function that can form Ax for any given vector x.
- Direct: If a new b is given to us later, solving the new system is fast.
  Iterative: It is not as easy to solve the new system.

- Direct: The technology is well developed and good software is standard.
  Iterative: Some software exists, but it is incomplete and rapidly evolving.

Reference for direct methods: Chapter 27. Reference for iterative methods: Chapter 28.

Storing a sparse matrix

There are many possible schemes. Matlab chooses a typical one: store the indices and values of the nonzero elements in column order, so

    2 0 0 0
    0 5 7 0
    1 0 6 0
    0 0 0 8

is stored as

    (1,1)  2
    (3,1)  1
    (2,2)  5
    (2,3)  7
    (3,3)  6
    (4,4)  8

Therefore, storing a sparse matrix takes about 3 nz storage locations, where nz is the number of nonzeros.
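For concreteness, here is a small Matlab sketch (ours, not part of the original notes) that builds the 4 x 4 example above in sparse format and recovers the stored (index, value) list; the variable names are our choices.

    % Build the 4-by-4 example as a Matlab sparse matrix (stored by columns).
    i = [1 3 2 2 3 4];            % row indices of the nonzeros
    j = [1 1 2 3 3 4];            % column indices
    v = [2 1 5 7 6 8];            % values
    A = sparse(i, j, v, 4, 4);
    [I, J, V] = find(A);          % returns the nonzeros in column order
    disp([I J V])                 % the six (row, column, value) triples above
    nnz(A)                        % 6 nonzeros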

Sparse direct methods

Recommended references: In Matlab, type demo, and then choose Matrices, Sparse Matrices. Also, in Matlab, type demo, and then choose Matrices, Orderings and Separators for a Finite Element Matrix (but don't worry about the trees that it produces).

The problem

We want to solve Ax = b. For simplicity, we'll assume for now that A is symmetric and positive definite. For a PDE, this is OK if the underlying operator A is self-adjoint and coercive.

The difficulty with fill-in: a motivating example

Suppose we want to solve a system involving an n x n arrowhead matrix; for n = 6 it looks like

    [ * * * * * * ] [ x1 ]   [ b1 ]
    [ * * 0 0 0 0 ] [ x2 ]   [ b2 ]
    [ * 0 * 0 0 0 ] [ x3 ] = [ b3 ]
    [ * 0 0 * 0 0 ] [ x4 ]   [ b4 ]
    [ * 0 0 0 * 0 ] [ x5 ]   [ b5 ]
    [ * 0 0 0 0 * ] [ x6 ]   [ b6 ]

where * denotes a nonzero value (we don't care what it is) and 0 denotes a zero. The number of nonzeros is 3n - 2.

Suppose we use Gauss elimination (or the LU factorization, or the Cholesky factorization; all of them have the same trouble). Then in the first step, we add some multiple of the first row to every other row. Disaster! The matrix is now fully dense, with n^2 nonzeros!

A fix

Let's rewrite our problem by moving the first column and the first row to the end:

    [ * 0 0 0 0 * ] [ x2 ]   [ b2 ]
    [ 0 * 0 0 0 * ] [ x3 ]   [ b3 ]
    [ 0 0 * 0 0 * ] [ x4 ] = [ b4 ]
    [ 0 0 0 * 0 * ] [ x5 ]   [ b5 ]
    [ 0 0 0 0 * * ] [ x6 ]   [ b6 ]
    [ * * * * * * ] [ x1 ]   [ b1 ]

Now when we use Gauss elimination (or the LU factorization, or the Cholesky factorization), no new nonzeros are produced in A. Reordering the variables and equations is a powerful tool for maintaining sparsity during factorization.
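A small Matlab experiment (ours, not from the notes) makes the contrast concrete; the diagonal value 2n is our choice, made only so that the arrowhead matrix is safely positive definite.

    n = 100;
    A = sparse(1:n, 1:n, 2*n);      % large diagonal keeps A symmetric positive definite
    A(1, :) = 1;  A(:, 1) = 1;      % full first row and column: the arrowhead
    A(1, 1) = 2*n;
    nnz(A)                          % 3n - 2 = 298 nonzeros
    nnz(chol(A))                    % factor is dense: n(n+1)/2 = 5050 nonzeros
    p = [2:n 1];                    % move the first row and column to the end
    nnz(chol(A(p, p)))              % no fill: 2n - 1 = 199 nonzeros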

Strategies for overcoming fill-in

We reorder the rows and columns with a permutation matrix P and solve

    P A P^T (P x) = P b

instead of Ax = b. Note that Ã = P A P^T is still symmetric and positive definite, so we can use the Cholesky factorization Ã = L D L^T, where L is lower triangular with ones on its diagonal and D is diagonal.
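As a sketch (ours; the notes do not prescribe a particular P), here is how reorder-then-factor looks in Matlab, using the symamd ordering in the role of P and chol in place of the L D L^T form, on a stand-in sparse SPD matrix:

    A = delsq(numgrid('S', 10));   % stand-in sparse SPD matrix (5-point Laplacian)
    b = ones(size(A, 1), 1);
    p = symamd(A);                 % fill-reducing permutation, playing the role of P
    R = chol(A(p, p));             % upper triangular factor: A(p,p) = R'*R
    x = zeros(size(b));
    x(p) = R \ (R' \ b(p));        % forward/back substitution, then undo the ordering
    norm(A*x - b)                  % residual should be near round-off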

How to reorder

Finding the optimal reordering is generally too expensive: it is, in general, an NP-hard problem. Therefore, we rely on heuristics that give us an inexpensive algorithm for finding a reordering. As a consequence, the heuristics usually do well, but sometimes they produce a very bad reordering.

Important insights

Insight 1: If A is a band matrix (e.g., tridiagonal, pentadiagonal, ...), then there is never any fill outside the band.

Insight 2: Also, there is never any fill outside the profile of a matrix, where the profile stretches from the first nonzero in each column down to the main diagonal element in that column, and from the first nonzero in each row across to the main diagonal element in that row.

[The slide shows an example matrix with the zeros inside its profile marked, alongside profile(A); the specific entries are not reproduced here.]

So some methods try to produce a reordered matrix with a small band or a small profile.

Insight 3: The sparsity of a matrix can be encoded in a graph. For example, the symmetric matrix

    [ * 0 * 0 0 0 ]
    [ 0 * 0 0 * * ]
    [ * 0 * 0 * 0 ]
    [ 0 0 0 * 0 0 ]
    [ 0 * * 0 * 0 ]
    [ 0 * 0 0 0 * ]

has upper-triangular nonzero off-diagonal elements a13, a25, a26, a35 and corresponds to a graph with 6 nodes, one per row/column, and edges connecting nodes (1,3), (2,5), (2,6), and (3,5).

Unquiz: Draw the graph corresponding to the finite difference matrix for Laplace's equation on a 5 x 5 grid.
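As a sketch (ours, not in the notes), the edge list of that graph can be read off the sparsity pattern in Matlab:

    ij = [1 3; 2 5; 2 6; 3 5];               % upper-triangular off-diagonal nonzeros
    A = sparse(ij(:,1), ij(:,2), 1, 6, 6);
    A = A + A' + speye(6);                   % symmetrize and add a nonzero diagonal
    [i, j] = find(triu(A, 1));               % strict upper triangle = one edge per pair
    disp([i j])                              % the four edges listed above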

Some reordering strategies

Strategy 1: Cuthill-McKee

One of the oldest strategies is Cuthill-McKee, which uses the graph to order the rows and columns:
- Find a starting node with minimum degree (degree = number of neighbors).
- Until all nodes are ordered: for each node that was ordered in the previous step, order all of the unordered nodes that are connected to it, in order of their degree.

Reverse Cuthill-McKee (doing a final reordering from last to first) often works even better; a Matlab sketch follows.
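In Matlab, reverse Cuthill-McKee is available as symrcm. A hedged sketch on a stand-in matrix (the demo matrix S from the following figures is not reproduced here; bucky + 4I is a classic substitute):

    B = bucky + 4*speye(60);         % stand-in sparse SPD matrix
    p = symrcm(B);                   % reverse Cuthill-McKee permutation vector
    [lo, up]   = bandwidth(B);       % bandwidths before reordering
    [lo2, up2] = bandwidth(B(p, p)); % and after: usually much narrower
    nnz(chol(B)), nnz(chol(B(p, p))) % compare fill in the Cholesky factor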

[Figure: spy(S), nz = 105, and the Cholesky factor of S, nz = 129.]

[Figure: S(p,p) after Cuthill-McKee ordering, nz = 105, and chol(S(p,p)), nz = 115.]

Strategy 2: Minimum Degree

Until all nodes are ordered, choose a node that has the smallest degree, and order that node next, removing it from the graph. (If there is a tie, choose any of the candidates.)

[Figure: S(r,r) after minimum degree ordering, nz = 105, and chol(S(r,r)), nz = 116.]

Strategy 3: Nested Dissection

Try to break the graph into two pieces plus a separator, with
- approximately the same number of nodes in the two pieces,
- no edges between the two pieces,
- a small number of nodes in the separator.

Do this recursively until all pieces have a small number of nodes. Then order the nodes piece by piece. Finally, order the nodes in the separators.

[Figure: S(r,r) after nested dissection ordering, nz = 105, and chol(S(r,r)), nz = 115.]

A comparison of the orderings

Demo: spar50.m
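The demo script spar50.m is not reproduced here; as a rough stand-in (ours), the following Matlab lines compare fill under several orderings on a model matrix:

    S = delsq(numgrid('S', 7));              % stand-in sparse SPD matrix, 25 x 25
    orders = {1:size(S,1), symrcm(S), symamd(S)};
    names  = {'natural', 'reverse Cuthill-McKee', 'minimum degree (symamd)'};
    for k = 1:numel(orders)
        p = orders{k};
        fprintf('%-26s  nnz(chol) = %d\n', names{k}, nnz(chol(S(p, p))));
    end
    % If your Matlab has dissect (R2017b or later), a nested dissection
    % ordering can be added with p = dissect(S).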

Summary of sparse direct methods

The idea is to reorder the rows and columns of the matrix in order to reduce fill-in during the factorization. We have given examples of strategies useful for symmetric positive definite matrices.

For nonsymmetric matrices, and for symmetric indefinite matrices, there is an additional critical complication: we need to reorder for stability as well as for sparsity preservation. For nonsymmetric matrices, the strategies are similar, but the row permutation is allowed to differ from the column permutation, since there is no symmetry to preserve.

The most recent reordering strategies are based on partitioning the matrix by spectral partitioning, using the elements of an eigenvector of a matrix related to the graph. These methods are becoming more popular.

Sparse direct methods for eigenproblems

The standard algorithm for finding eigenvalues and eigenvectors is the QR algorithm. In Matlab, [V,D] = eig(A) puts the eigenvectors of A in the columns of V and the eigenvalues along the main diagonal of D.

Strategy for sparse matrices:
- Use a reordering strategy to reduce the bandwidth of the matrix. We replace A by PAP^T, which has the same eigenvalues but eigenvectors equal to P times the eigenvectors of A.
- Use the QR algorithm to find the eigenvalues of the resulting banded matrix.

Actually, Matlab's eig will not even attempt to find the eigenvectors of a matrix stored in sparse format, because the matrix V is generally full, but it will compute the eigenvalues.

If we want (a few) eigenvalues and eigenvectors of a sparse matrix, we use Matlab's eigs function, which uses a Krylov subspace method based on the Arnoldi basis, to be discussed later.
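A hedged sketch of eigs usage (the test matrix and the number of eigenpairs requested are our choices, not from the notes):

    A = delsq(numgrid('S', 40));           % large sparse symmetric test matrix
    [V, D] = eigs(A, 6, 'smallestabs');    % 6 eigenpairs of smallest magnitude
                                           % ('sm' in older Matlab releases)
    diag(D)                                % the six computed eigenvalues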