Sparse Matrices and Iterative Methods


K., Department of Mathematics, 2018

Iterative Methods

Consider the problem of solving Ax = b, where A is n x n. Why would we use an iterative method?

- To avoid a direct decomposition (LU, QR, Cholesky), replacing it with repeated matrix-vector multiplication.
- LU costs O(n^3) flops, while a matrix-vector multiplication costs O(n^2). For n = 10^4, that is roughly 10^12 flops for LU versus 10^8 per product.
- So if we can get convergence in, say, O(log n) iterations, iteration might be faster.

Jacobi, GS, SOR

Some old methods:

- Jacobi is easily parallelized... but converges extremely slowly.
- Gauss-Seidel/SOR converge faster... but cannot be effectively parallelized.
- Only Jacobi really takes advantage of sparsity.
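As a sketch of what one Jacobi sweep involves, here is a minimal implementation (my own illustration, not from the slides; it assumes A has a nonzero diagonal and converges, e.g., for strictly diagonally dominant A):

    import numpy as np

    def jacobi(A, b, tol=1e-8, maxit=500):
        # One sweep: x_{k+1} = x_k + D^{-1} (b - A x_k), with D = diag(A).
        # The dominant cost per sweep is the matrix-vector product A x.
        D = A.diagonal()
        x = np.zeros_like(b, dtype=float)
        for k in range(maxit):
            r = b - A.dot(x)                       # residual
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            x = x + r / D                          # Jacobi update
        return x

Each update of a component depends only on the previous iterate, which is why the sweep parallelizes so easily.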

When a matrix is sparse (far more zero entries than nonzero), the number of nonzero entries is typically O(n), so a matrix-vector multiplication becomes an O(n) operation. This makes iterative methods very attractive. Sparsity does not help direct solvers as much, because of the problem of fill-in, though we note that there are specialized solvers designed to minimize fill-in.
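To illustrate the O(n) nonzero count (a small sketch of my own; the tridiagonal matrix here is the standard 1-D Laplacian):

    import scipy.sparse as sparse

    for n in [100, 1000, 10000]:
        # Tridiagonal matrix: nnz = 3n - 2, so matvec work grows linearly in n
        A = sparse.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
        print(n, A.nnz)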

Krylov Subspace Methods

A class of methods that converge in at most n iterations (in exact arithmetic). We hope that they arrive at a solution that is close enough in far fewer iterations. These often work much better than the classic methods; they are more readily parallelized, and they take full advantage of sparsity.
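For concreteness, here is a minimal sketch of one such method, the conjugate gradient iteration for symmetric positive definite A (function name and tolerances are my own; each step costs one matrix-vector product):

    import numpy as np

    def cg(A, b, tol=1e-8, maxit=None):
        # Minimal conjugate gradient; terminates in at most n steps
        # in exact arithmetic.
        x = np.zeros_like(b, dtype=float)
        r = b - A.dot(x)                  # initial residual
        p = r.copy()                      # first search direction
        rs = r.dot(r)
        maxit = maxit if maxit is not None else len(b)
        for k in range(maxit):
            Ap = A.dot(p)
            alpha = rs / p.dot(Ap)        # step length
            x = x + alpha * p
            r = r - alpha * Ap
            rs_new = r.dot(r)
            if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
                break
            p = r + (rs_new / rs) * p     # next A-conjugate direction
            rs = rs_new
        return x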

Possibilities

Sparse matrices are quite common in computation:

- Finite differences for PDEs
- Finite elements for PDEs
- Integral equations with localized kernels

Structures

[Figure: sparsity patterns, with nonzero elements shown in blue; nz denotes the number of nonzeros out of 17 million entries.]

Some formats

There are a few obvious ways to store sparse matrices:

- Diagonals: 1+ entry per nonzero
- Coordinates: 3 entries per nonzero
- Row- or column-oriented coordinates: 2+ entries per nonzero

Diagonal (DIA)

Matrix:

    1 0 0 0 2 0
    3 4 0 0 0 5
    0 6 7 0 0 0
    0 0 8 9 0 0
    1 0 0 2 3 0
    0 4 0 0 5 6

Sparse storage: one row of the data array per diagonal, aligned by column index and zero-padded:

    data:    [ 1 4 0 0 0 0 ]   (offset -4)
             [ 3 6 8 2 5 0 ]   (offset -1)
             [ 1 4 7 9 3 6 ]   (offset  0)
             [ 0 0 0 0 2 5 ]   (offset +4)
    Offsets: [ -4 -1 0 4 ]
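The storage above can be checked with scipy's dia_matrix (introduced below); this snippet is my own illustration:

    import numpy as np
    from scipy.sparse import dia_matrix

    data = np.array([[1, 4, 0, 0, 0, 0],    # offset -4
                     [3, 6, 8, 2, 5, 0],    # offset -1
                     [1, 4, 7, 9, 3, 6],    # offset  0
                     [0, 0, 0, 0, 2, 5]])   # offset +4
    offsets = np.array([-4, -1, 0, 4])
    A = dia_matrix((data, offsets), shape=(6, 6))
    print(A.toarray())   # reproduces the 6 x 6 matrix above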

Coordinate (COO)

Matrix:

    1 0 0 0 2 0
    3 4 0 0 0 5
    0 6 7 0 0 0
    0 0 8 9 0 0
    1 0 0 2 3 0
    0 4 0 0 5 6

Sparse storage: one (row, column, value) triple per nonzero:

    rows: [ 0 0 1 1 1 2 2 3 3 4 4 4 5 5 5 ]
    cols: [ 0 4 0 1 5 1 2 2 3 0 3 4 1 4 5 ]
    vals: [ 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 ]

Often used for conversions between formats.

Compressed Sparse Row (CSR)

For the same matrix, sparse storage keeps the column indices and values (in row order), plus one offset per row into those arrays:

    cols:        [ 0 4 0 1 5 1 2 2 3 0 3 4 1 4 5 ]
    vals:        [ 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 ]
    row offsets: [ 0 2 5 7 9 12 15 ]

There is also a Compressed Sparse Column (CSC) format, useful for multiplications.
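As a sketch, the matrix can be built from the COO triples and converted to CSR, recovering the arrays above (the attribute names indptr/indices/data are scipy's):

    import numpy as np
    from scipy.sparse import coo_matrix

    rows = np.array([0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5])
    cols = np.array([0, 4, 0, 1, 5, 1, 2, 2, 3, 0, 3, 4, 1, 4, 5])
    vals = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 2, 3, 4, 5, 6])
    A = coo_matrix((vals, (rows, cols)), shape=(6, 6)).tocsr()

    print(A.indptr)    # row offsets: [ 0  2  5  7  9 12 15]
    print(A.indices)   # column indices
    print(A.data)      # values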

Sparse Package

Scipy has a subpackage called sparse that implements many of these formats:

- Diagonal: dia_matrix()
- Coordinate: coo_matrix()
- CSR, CSC: csr_matrix(), csc_matrix()

... and others.

Using Sparse

    from numpy import array
    from scipy.sparse import csr_matrix

    A = csr_matrix([[-1, 1, 0, 0],
                    [ 0,-2, 0, 0],
                    [ 0,-3, 0, 5],
                    [ 0, 0, 1, 1]])
    x = array([1, 0, -1, 0])
    y = A.dot(x)
    print(y)
    print(A)

Results in y = [-1, 0, 0, -1]. A prints in COO format. Note that dot(A, x) does not work; dot must be the method of the sparse object. (In recent scipy versions, y = A @ x also works.)

Example

As a very simple example of the efficacy of the sparse matrix package in scipy, consider the Poisson problem

    -Δu = 1 on Ω,   u = 0 on ∂Ω,

where the region Ω is the unit square. We solve this numerically using finite differences.
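Filling in the standard intermediate step: with grid spacing h = 1/(N+1) and the 5-point stencil, the discrete equation at interior grid point (i, j) is

    4 u(i,j) - u(i-1,j) - u(i+1,j) - u(i,j-1) - u(i,j+1) = h^2,

which explains the 4 on the diagonal, the -1 entries on the four off-diagonals, and the right-hand side h*h in the code below.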

Matrix

There are many ways to assemble the matrix. Here is one.

    import numpy as np
    from scipy.sparse import dia_matrix

    N = 100
    Nsq = N * N
    h = 1.0 / float(N + 1)
    offsets = [-N, -1, 0, 1, N]
    subdiag1 = np.ones(Nsq)
    subdiag1[N-1:Nsq:N] = 0.   # break the coupling across grid-row boundaries
    supdiag1 = np.ones(Nsq)
    supdiag1[0:Nsq:N] = 0.
    A = dia_matrix(([-np.ones(Nsq), -subdiag1, 4.*np.ones(Nsq),
                     -supdiag1, -np.ones(Nsq)], offsets),
                   shape=(Nsq, Nsq))

Conversion

It is easy to convert to other formats, e.g. given our DIA-format matrix A:

    Acsr = A.tocsr()
    Afull = A.toarray()

Solve

We can solve the system using various methods. Given:

    from scipy.linalg import solve as lsolve
    import scipy.sparse.linalg as sp

Sparse conjugate gradient (cg returns the solution and a status flag):

    soln, info = sp.cg(A, h*h*np.ones(Nsq))

Sparse LU:

    solnsp = sp.spsolve(Acsr, h*h*np.ones(Nsq))

Full LU:

    solnfull = lsolve(Afull, h*h*np.ones(Nsq))

Results

... for a 100 x 100 grid of interior finite-difference points (i.e. a 10000 x 10000 matrix):

- Sparse CG: 0.037 seconds, 2.7e-7 difference from full
- Sparse LU: 0.056 seconds, 1.9e-13 difference from full
- Full LU: 15.2 seconds, 0 difference

Of course, for a more serious problem we would precondition, etc.
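One way such a comparison might be scripted (a sketch assuming A, h, and Nsq are the objects assembled above; timings vary by machine, and "difference from full" is measured here in the max norm):

    import time
    import numpy as np
    from scipy.linalg import solve as lsolve
    import scipy.sparse.linalg as sp

    b = h * h * np.ones(Nsq)
    Acsr, Afull = A.tocsr(), A.toarray()

    t0 = time.perf_counter()
    xcg, info = sp.cg(Acsr, b)
    t1 = time.perf_counter()
    xlu = sp.spsolve(Acsr, b)
    t2 = time.perf_counter()
    xfull = lsolve(Afull, b)
    t3 = time.perf_counter()

    print("Sparse CG: %.3f s, diff %.1e" % (t1 - t0, abs(xcg - xfull).max()))
    print("Sparse LU: %.3f s, diff %.1e" % (t2 - t1, abs(xlu - xfull).max()))
    print("Full LU:   %.3f s" % (t3 - t2))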