Chapter 7 Iterative Techniques in Matrix Algebra

1 Chapter 7: Iterative Techniques in Matrix Algebra

Per-Olof Persson, persson@berkeley.edu
Department of Mathematics, University of California, Berkeley
Math 128B Numerical Analysis

2 Vector Norms

Definition. A vector norm on R^n is a function ||·|| from R^n into R with the properties:
(i) ||x|| >= 0 for all x in R^n
(ii) ||x|| = 0 if and only if x = 0
(iii) ||αx|| = |α| ||x|| for all α in R and x in R^n
(iv) ||x + y|| <= ||x|| + ||y|| for all x, y in R^n

Definition. The Euclidean norm l_2 and the infinity norm l_inf for the vector x = (x_1, x_2, ..., x_n)^t are defined by

    ||x||_2 = { Σ_{i=1}^n x_i^2 }^{1/2}   and   ||x||_inf = max_{1<=i<=n} |x_i|
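Both norms are easy to evaluate numerically. A minimal NumPy sketch (the vector x below is just an illustrative choice, not from the slides):

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0])            # an arbitrary example vector

l2 = np.sqrt(np.sum(x**2))                # Euclidean norm ||x||_2
linf = np.max(np.abs(x))                  # infinity norm ||x||_inf

# agrees with NumPy's built-in norms
assert np.isclose(l2, np.linalg.norm(x, 2))
assert np.isclose(linf, np.linalg.norm(x, np.inf))
```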

3 Cauchy-Bunyakovsky-Schwarz Inequality for Sums

Theorem. For each x = (x_1, x_2, ..., x_n)^t and y = (y_1, y_2, ..., y_n)^t in R^n,

    x^t y = Σ_{i=1}^n x_i y_i <= { Σ_{i=1}^n x_i^2 }^{1/2} { Σ_{i=1}^n y_i^2 }^{1/2} = ||x||_2 ||y||_2

4 Distances

Definition. The distance between two vectors x = (x_1, ..., x_n)^t and y = (y_1, ..., y_n)^t is the norm of the difference of the vectors. The l_2 and l_inf distances are

    ||x - y||_2 = { Σ_{i=1}^n (x_i - y_i)^2 }^{1/2}   and   ||x - y||_inf = max_{1<=i<=n} |x_i - y_i|

5 Convergence

Definition. A sequence {x^(k)}_{k=1}^inf of vectors in R^n is said to converge to x with respect to the norm ||·|| if, given any ε > 0, there exists an integer N(ε) such that

    ||x^(k) - x|| < ε,   for all k >= N(ε)

Theorem. The sequence of vectors {x^(k)} converges to x in R^n with respect to ||·||_inf if and only if lim_{k→inf} x_i^(k) = x_i for each i = 1, 2, ..., n.

Theorem. For each x in R^n,

    ||x||_inf <= ||x||_2 <= sqrt(n) ||x||_inf

6 Matrix Norms

Definition. A matrix norm ||·|| on n x n matrices is a real-valued function satisfying
(i) ||A|| >= 0
(ii) ||A|| = 0 if and only if A = 0
(iii) ||αA|| = |α| ||A||
(iv) ||A + B|| <= ||A|| + ||B||
(v) ||AB|| <= ||A|| ||B||

7 Natural Matrix Norms

Theorem. If ||·|| is a vector norm, the natural (or induced) matrix norm is given by

    ||A|| = max_{||x||=1} ||Ax||

Corollary. For any vector z ≠ 0, matrix A, and natural norm ||·||,

    ||Az|| <= ||A|| ||z||

Theorem. If A = (a_ij) is an n x n matrix, then

    ||A||_inf = max_{1<=i<=n} Σ_{j=1}^n |a_ij|

8 Eigenvalues and Eigenvectors

Definition. The characteristic polynomial of a square matrix A is p(λ) = det(A - λI).

Definition. The zeros λ of the characteristic polynomial are the eigenvalues of A; any x ≠ 0 satisfying (A - λI)x = 0 is a corresponding eigenvector.

Definition. The spectral radius ρ(A) of a matrix A is

    ρ(A) = max |λ|,   over all eigenvalues λ of A

Theorem. If A is an n x n matrix, then
(i) ||A||_2 = [ρ(A^t A)]^{1/2}
(ii) ρ(A) <= ||A||, for any natural norm ||·||
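Both parts of the theorem can be checked numerically. A small sketch with an arbitrary example matrix (my choice, not from the slides):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])                               # arbitrary example matrix

rho = np.max(np.abs(np.linalg.eigvals(A)))               # spectral radius rho(A)
two_norm = np.sqrt(np.max(np.abs(np.linalg.eigvals(A.T @ A))))  # [rho(A^t A)]^(1/2)

assert np.isclose(two_norm, np.linalg.norm(A, 2))        # (i)  ||A||_2 = [rho(A^t A)]^(1/2)
assert rho <= np.linalg.norm(A, np.inf) + 1e-12          # (ii) rho(A) <= ||A||_inf (a natural norm)
```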

9 Convergent Matrices

Definition. An n x n matrix A is convergent if

    lim_{k→inf} (A^k)_ij = 0,   for each i = 1, 2, ..., n and j = 1, 2, ..., n

Theorem. The following statements are equivalent:
(i) A is a convergent matrix
(ii) lim_{n→inf} ||A^n|| = 0, for some natural norm
(iii) lim_{n→inf} ||A^n|| = 0, for all natural norms
(iv) ρ(A) < 1
(v) lim_{n→inf} A^n x = 0, for every x

10 Iterative Methods for Linear Systems

Direct methods for solving Ax = b, e.g. Gaussian elimination, compute an exact solution after a finite number of steps (in exact arithmetic).

Iterative algorithms produce a sequence of approximations x^(1), x^(2), ... which hopefully converges to the solution, and
- may require less memory than direct methods
- may be faster than direct methods
- may handle special structures (such as sparsity) in a simpler way

[Figure: residual r = b - Ax versus iteration count, comparing a direct and an iterative method]

11 Two Classes of Iterative Methods

Stationary methods (or classical iterative methods) find a splitting A = M - K and iterate

    x^(k) = M^{-1}(K x^(k-1) + b) = T x^(k-1) + c

Examples: Jacobi, Gauss-Seidel, Successive Overrelaxation (SOR)

Krylov subspace methods use only multiplication by A (and possibly by A^T) and find solutions in the Krylov subspace span{b, Ab, A^2 b, ..., A^{k-1} b}.

Examples: Conjugate Gradient (CG), Generalized Minimal Residual (GMRES), BiConjugate Gradient (BiCG), etc.

12 Jacobi's Method

An iterative technique to solve Ax = b starts with an initial approximation x^(0) and generates a sequence of vectors {x^(k)}_{k=0}^inf that converges to x.

Jacobi's Method: Solve for x_i in the ith equation of Ax = b:

    x_i = Σ_{j=1, j≠i}^n ( -a_ij x_j / a_ii ) + b_i / a_ii,   for i = 1, 2, ..., n

This leads to the iteration

    x_i^(k) = (1 / a_ii) [ Σ_{j=1, j≠i}^n ( -a_ij x_j^(k-1) ) + b_i ],   for i = 1, 2, ..., n

13 Matrix Form of Jacobi's Method

Convert Ax = b into an equivalent system x = Tx + c, select an initial vector x^(0), and iterate

    x^(k) = T x^(k-1) + c

For Jacobi's method, split A into its diagonal and off-diagonal parts, A = D - L - U, where D is the diagonal of A, -L is the strictly lower-triangular part of A, and -U is the strictly upper-triangular part of A.

This transforms Ax = (D - L - U)x = b into Dx = (L + U)x + b, and if D^{-1} exists, this leads to the Jacobi iteration:

    x^(k) = D^{-1}(L + U) x^(k-1) + D^{-1} b = T_j x^(k-1) + c_j

where T_j = D^{-1}(L + U) and c_j = D^{-1} b.
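A minimal NumPy sketch of the matrix form of Jacobi's method; the function and parameter names are my own, and the example system is an arbitrary strictly diagonally dominant matrix:

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, maxiter=500):
    """Jacobi iteration x^(k) = D^{-1}((L+U) x^(k-1) + b) for Ax = b."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    D = np.diag(A)                        # diagonal entries of A
    R = A - np.diagflat(D)                # off-diagonal part, R = -(L + U)
    for _ in range(maxiter):
        x_new = (b - R @ x) / D           # componentwise division by a_ii
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

# Strictly diagonally dominant example, so the iteration converges
A = np.array([[10., -1., 2.], [-1., 11., -1.], [2., -1., 10.]])
b = np.array([6., 25., -11.])
print(jacobi(A, b))                        # should approximate np.linalg.solve(A, b)
```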

14 The Gauss-Seidel Method

The Gauss-Seidel Method: Improve Jacobi's method by, for i > 1, using the already updated components x_1^(k), ..., x_{i-1}^(k) when computing x_i^(k):

    x_i^(k) = (1 / a_ii) [ -Σ_{j=1}^{i-1} a_ij x_j^(k) - Σ_{j=i+1}^n a_ij x_j^(k-1) + b_i ]

In matrix form, the method can be written

    (D - L) x^(k) = U x^(k-1) + b

and if (D - L)^{-1} exists, this leads to the Gauss-Seidel iteration

    x^(k) = (D - L)^{-1} U x^(k-1) + (D - L)^{-1} b = T_g x^(k-1) + c_g

where T_g = (D - L)^{-1} U and c_g = (D - L)^{-1} b.
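A corresponding componentwise Gauss-Seidel sketch (again an illustrative implementation with my own names, not code from the slides); the vector x is overwritten in place so that already updated entries are used:

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, maxiter=500):
    """Gauss-Seidel sweep: use updated x_1..x_{i-1} when computing x_i."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(maxiter):
        x_old = x.copy()
        for i in range(n):
            s1 = A[i, :i] @ x[:i]            # already updated components
            s2 = A[i, i+1:] @ x_old[i+1:]    # components from the previous sweep
            x[i] = (b[i] - s1 - s2) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x
```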

15 General Iteration Methods

Lemma. If the spectral radius satisfies ρ(T) < 1, then (I - T)^{-1} exists, and

    (I - T)^{-1} = I + T + T^2 + ... = Σ_{j=0}^inf T^j

Theorem. For any x^(0) in R^n, the sequence

    x^(k) = T x^(k-1) + c

converges to the unique solution of x = Tx + c if and only if ρ(T) < 1.

16 General Iteration Methods

Corollary. If ||T|| < 1 for any natural matrix norm, then x^(k) = T x^(k-1) + c converges for any x^(0) in R^n to a vector x in R^n such that x = Tx + c. The following error estimates hold:

    1. ||x - x^(k)|| <= ||T||^k ||x^(0) - x||
    2. ||x - x^(k)|| <= ( ||T||^k / (1 - ||T||) ) ||x^(1) - x^(0)||

Theorem. If A is strictly diagonally dominant, then Jacobi and Gauss-Seidel converge for any x^(0).

Theorem (Stein-Rosenberg). If a_ij <= 0 for each i ≠ j and a_ii > 0 for each i, then one and only one of the following holds:
(i) 0 <= ρ(T_g) < ρ(T_j) < 1
(ii) 1 < ρ(T_j) < ρ(T_g)
(iii) ρ(T_j) = ρ(T_g) = 0
(iv) ρ(T_j) = ρ(T_g) = 1
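The criterion ρ(T) < 1 can be checked directly from the splitting. A small sketch, using the same arbitrary strictly diagonally dominant example matrix as above, that builds the Jacobi and Gauss-Seidel iteration matrices and prints their spectral radii:

```python
import numpy as np

A = np.array([[10., -1., 2.],
              [-1., 11., -1.],
              [ 2., -1., 10.]])            # strictly diagonally dominant example

D = np.diag(np.diag(A))
L = -np.tril(A, -1)                        # with the sign convention A = D - L - U
U = -np.triu(A, 1)

Tj = np.linalg.solve(D, L + U)             # Jacobi iteration matrix T_j = D^{-1}(L+U)
Tg = np.linalg.solve(D - L, U)             # Gauss-Seidel iteration matrix T_g = (D-L)^{-1}U

spectral_radius = lambda T: np.max(np.abs(np.linalg.eigvals(T)))
print(spectral_radius(Tj), spectral_radius(Tg))   # both < 1, so both iterations converge
```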

17 The Residual Vector

Definition. The residual vector for x̃ in R^n with respect to the linear system Ax = b is r = b - A x̃.

Consider the approximate solution vector used in Gauss-Seidel when computing x_i^(k):

    x_i^(k) = (x_1^(k), x_2^(k), ..., x_{i-1}^(k), x_i^(k-1), ..., x_n^(k-1))^t

with residual vector r_i^(k) = (r_{1i}^(k), r_{2i}^(k), ..., r_{ni}^(k))^t. The Gauss-Seidel method

    x_i^(k) = (1 / a_ii) [ b_i - Σ_{j=1}^{i-1} a_ij x_j^(k) - Σ_{j=i+1}^n a_ij x_j^(k-1) ]

can then be written as

    x_i^(k) = x_i^(k-1) + r_{ii}^(k) / a_ii

18 Successive Over-Relaxation

Relaxation methods use an iteration of the form

    x_i^(k) = x_i^(k-1) + ω r_{ii}^(k) / a_ii

for some positive ω. With ω > 1, they can accelerate the convergence of the Gauss-Seidel method, and are called successive over-relaxation (SOR) methods.

Write the SOR method as

    x_i^(k) = (1 - ω) x_i^(k-1) + (ω / a_ii) [ b_i - Σ_{j=1}^{i-1} a_ij x_j^(k) - Σ_{j=i+1}^n a_ij x_j^(k-1) ]

which can be written in the matrix form

    x^(k) = T_ω x^(k-1) + c_ω

where T_ω = (D - ωL)^{-1} [(1 - ω)D + ωU] and c_ω = ω (D - ωL)^{-1} b.
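A componentwise SOR sketch, mirroring the Gauss-Seidel code above; the relaxation parameter ω is a user-supplied value in (0, 2), and the names are my own:

```python
import numpy as np

def sor(A, b, omega, x0=None, tol=1e-10, maxiter=500):
    """SOR sweep: x_i <- (1-omega)*x_i(old) + omega * (Gauss-Seidel update)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(maxiter):
        x_old = x.copy()
        for i in range(n):
            s1 = A[i, :i] @ x[:i]            # already updated components
            s2 = A[i, i+1:] @ x_old[i+1:]    # components from the previous sweep
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - s1 - s2) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x
```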

19 Convergence of the SOR Method

Theorem (Kahan). If a_ii ≠ 0 for all i, then ρ(T_ω) >= |ω - 1|, so the SOR method can converge only if 0 < ω < 2.

Theorem (Ostrowski-Reich). If A is positive definite and 0 < ω < 2, then SOR converges for any x^(0).

Theorem. If A is positive definite and tridiagonal, then ρ(T_g) = [ρ(T_j)]^2 < 1, and the optimal ω for SOR is

    ω = 2 / ( 1 + sqrt(1 - [ρ(T_j)]^2) )

which gives ρ(T_ω) = ω - 1.
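For a positive definite tridiagonal matrix, the optimal ω can be computed from ρ(T_j). A small sketch, using the standard 1-D Laplacian matrix as an example (my choice, not from the slides):

```python
import numpy as np

n = 20
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # PD tridiagonal example (1-D Laplacian)

D = np.diag(np.diag(A))
Tj = np.linalg.solve(D, D - A)                        # T_j = D^{-1}(L + U), since L + U = D - A
rho_j = np.max(np.abs(np.linalg.eigvals(Tj)))         # spectral radius of the Jacobi matrix

omega = 2.0 / (1.0 + np.sqrt(1.0 - rho_j**2))         # optimal relaxation parameter
print(omega, omega - 1.0)                             # rho(T_omega) should equal omega - 1
```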

20 Error Bounds

Theorem. Suppose Ax = b, A is nonsingular, x̃ ≈ x, and r = b - A x̃. Then for any natural norm,

    ||x - x̃|| <= ||r|| ||A^{-1}||

and if x ≠ 0 and b ≠ 0,

    ||x - x̃|| / ||x|| <= ||A|| ||A^{-1}|| ||r|| / ||b||

Definition. The condition number of a nonsingular matrix A in the norm ||·|| is

    K(A) = ||A|| ||A^{-1}||

In terms of K(A), the error bounds can be written:

    ||x - x̃|| <= K(A) ||r|| / ||A||,   ||x - x̃|| / ||x|| <= K(A) ||r|| / ||b||
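The bounds are easy to illustrate numerically. A sketch with a nearly singular 2x2 example and an arbitrary perturbation of the exact solution (all numbers below are my own illustrative choices):

```python
import numpy as np

A = np.array([[1.0, 2.0], [1.0001, 2.0]])     # nearly singular example matrix
b = np.array([3.0, 3.0001])
x = np.linalg.solve(A, b)                     # "exact" solution (x = [1, 1])

x_approx = x + np.array([1e-4, -1e-4])        # a perturbed approximate solution
r = b - A @ x_approx                          # its residual

K = np.linalg.norm(A, np.inf) * np.linalg.norm(np.linalg.inv(A), np.inf)   # K(A)
rel_err = np.linalg.norm(x - x_approx, np.inf) / np.linalg.norm(x, np.inf)
bound = K * np.linalg.norm(r, np.inf) / np.linalg.norm(b, np.inf)
print(rel_err <= bound, K)                    # the bound holds; K(A) is large here
```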

21 Iterative Refinement

Algorithm: Iterative Refinement

    Solve A x^(1) = b
    for k = 1, 2, 3, ...
        r^(k) = b - A x^(k)          (residual -- compute accurately!)
        Solve A y^(k) = r^(k)        (solve for correction)
        x^(k+1) = x^(k) + y^(k)      (improve solution)

Allows for errors in the solution of the linear systems, provided the residual r is computed accurately.
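A minimal sketch of the algorithm; here the residual is formed in extended precision (np.longdouble) to mimic "compute accurately", which is one possible choice, not the only one:

```python
import numpy as np

def iterative_refinement(A, b, steps=3):
    x = np.linalg.solve(A, b)                          # initial solve; in practice reuse an LU factorization
    A_hi = A.astype(np.longdouble)                     # higher-precision copies for the residual
    b_hi = b.astype(np.longdouble)
    for _ in range(steps):
        r = b_hi - A_hi @ x.astype(np.longdouble)      # residual, computed accurately
        y = np.linalg.solve(A, r.astype(np.float64))   # solve for the correction
        x = x + y                                      # improve the solution
    return x
```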

22 Errors in Both Matrix and Right-Hand Side

Theorem. Suppose A is nonsingular and

    ||δA|| < 1 / ||A^{-1}||

The solution x̃ to (A + δA) x̃ = b + δb approximates the solution x of Ax = b with the error estimate

    ||x - x̃|| / ||x|| <= ( K(A) ||A|| / ( ||A|| - K(A) ||δA|| ) ) ( ||δb|| / ||b|| + ||δA|| / ||A|| )

23 Inner Products

Definition. The inner product for n-dimensional vectors x, y is

    ⟨x, y⟩ = x^t y

Theorem. For any vectors x, y, z and real number α:
(a) ⟨x, y⟩ = ⟨y, x⟩
(b) ⟨αx, y⟩ = ⟨x, αy⟩ = α ⟨x, y⟩
(c) ⟨x + z, y⟩ = ⟨x, y⟩ + ⟨z, y⟩
(d) ⟨x, x⟩ >= 0
(e) ⟨x, x⟩ = 0 if and only if x = 0

24 Krylov Subspace Algorithms

Create a sequence of Krylov subspaces for Ax = b:

    K_k = span{b, Ab, ..., A^{k-1} b}

and find approximate solutions x_k in K_k.
- Only matrix-vector products involved
- For SPD matrices, the most popular algorithm is the Conjugate Gradients method [Hestenes/Stiefel, 1952]
- Finds the best solution x_k in K_k in the norm ||x||_A = sqrt(x^t A x)
- Only requires storage of 4 vectors (not all the k vectors in K_k)
- Remarkably simple and excellent convergence properties
- Originally invented as a direct algorithm! (converges after n steps in exact arithmetic)

25 The Conjugate Gradients Method

Algorithm: Conjugate Gradients Method

    x_0 = 0, r_0 = b, p_0 = r_0
    for k = 1, 2, 3, ...
        α_k = (r_{k-1}^t r_{k-1}) / (p_{k-1}^t A p_{k-1})    (step length)
        x_k = x_{k-1} + α_k p_{k-1}                          (approximate solution)
        r_k = r_{k-1} - α_k A p_{k-1}                        (residual)
        β_k = (r_k^t r_k) / (r_{k-1}^t r_{k-1})              (improvement this step)
        p_k = r_k + β_k p_{k-1}                              (search direction)

- Only one matrix-vector product A p_{k-1} per iteration
- Operation count O(n) per iteration (excluding the matrix-vector product)
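A direct transcription of the algorithm into NumPy; a minimal sketch assuming A is symmetric positive definite, with my own function and variable names and a simple residual-based stopping test:

```python
import numpy as np

def conjugate_gradients(A, b, tol=1e-10, maxiter=None):
    n = len(b)
    maxiter = n if maxiter is None else maxiter
    x = np.zeros(n)
    r = b.copy()                          # r_0 = b (since x_0 = 0)
    p = r.copy()                          # p_0 = r_0
    rs_old = r @ r
    for _ in range(maxiter):
        Ap = A @ p                        # the only matrix-vector product per iteration
        alpha = rs_old / (p @ Ap)         # step length
        x += alpha * p                    # approximate solution
        r -= alpha * Ap                   # residual
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p     # new search direction
        rs_old = rs_new
    return x
```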

26 Properties of Conjugate Gradients Vectors

Theorem. The spaces spanned by the solutions, the search directions, and the residuals are all equal to the Krylov subspaces:

    K_k = span{x_1, x_2, ..., x_k} = span{p_0, p_1, ..., p_{k-1}}
        = span{r_0, r_1, ..., r_{k-1}} = span{b, Ab, ..., A^{k-1} b}

The residuals are orthogonal: r_k^t r_j = 0 (j < k)
The search directions are A-conjugate: p_k^t A p_j = 0 (j < k)

27 Optimality of Conjugate Gradients

Theorem. The errors e_k = x - x_k are minimized in the A-norm.

Proof. For any other point x̃ = x_k - Δx in K_k, the error is

    ||e_k + Δx||_A^2 = (e_k + Δx)^t A (e_k + Δx) = e_k^t A e_k + (Δx)^t A (Δx) + 2 e_k^t A (Δx)

But e_k^t A (Δx) = r_k^t (Δx) = 0, since r_k is orthogonal to K_k, so Δx = 0 minimizes ||e||_A.

Theorem. The convergence is monotonic, ||e_k||_A <= ||e_{k-1}||_A, and e_k = 0 is reached in at most n steps.

Proof. Follows from K_k ⊆ K_{k+1}, and that K_k ≠ R^n unless the iteration has converged.

28 Optimization in CG

CG can be interpreted as a minimization algorithm. We know it minimizes ||e||_A, but this cannot be evaluated. CG also minimizes the quadratic function ϕ(x) = (1/2) x^t A x - x^t b:

    ||e_k||_A^2 = e_k^t A e_k = (x - x_k)^t A (x - x_k)
                = x_k^t A x_k - 2 x_k^t A x + x^t A x
                = x_k^t A x_k - 2 x_k^t b + x^t b = 2 ϕ(x_k) + constant

At each step, α_k is chosen to minimize ϕ over x_k = x_{k-1} + α_k p_{k-1}. The conjugated search directions p_k give minimization over all of K_k.

29 Optimization by Conjugate Gradients

We know that solving Ax = b is equivalent to minimizing the quadratic function ϕ(x) = (1/2) x^t A x - x^t b. The minimization can be done by line searches, where ϕ is minimized along a search direction p_k starting from x_k.

Theorem. The α_{k+1} that minimizes ϕ(x_k + α_{k+1} p_k) is

    α_{k+1} = (p_k^t r_k) / (p_k^t A p_k)

with the residual r_k = b - A x_k.

The residual is also minus the gradient of ϕ at x_k:

    ∇ϕ(x_k) = A x_k - b = -r_k

30 The Method of Steepest Descent

Very simple approach: set the search direction to the residual r_{k-1}, which is the negative gradient of ϕ. This corresponds to moving in the direction in which ϕ(x) changes the most.

Algorithm: Steepest Descent

    x_0 = 0, r_0 = b
    for k = 1, 2, 3, ...
        α_k = (r_{k-1}^t r_{k-1}) / (r_{k-1}^t A r_{k-1})    (step length)
        x_k = x_{k-1} + α_k r_{k-1}                          (approximate solution)
        r_k = r_{k-1} - α_k A r_{k-1}                        (residual)

Poor convergence; tends to move along previous search directions.
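The corresponding steepest-descent sketch, again assuming A is symmetric positive definite and using my own names; it differs from CG only in that the residual itself is used as the search direction:

```python
import numpy as np

def steepest_descent(A, b, tol=1e-10, maxiter=1000):
    x = np.zeros(len(b))
    r = b.copy()                          # r_0 = b - A x_0 = b
    for _ in range(maxiter):
        Ar = A @ r
        alpha = (r @ r) / (r @ Ar)        # exact line search along r
        x += alpha * r                    # move in the negative-gradient direction
        r -= alpha * Ar                   # update the residual
        if np.linalg.norm(r) < tol:
            break
    return x
```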

31 The Method of Conjugate Directions

The optimization can be improved by better search directions. Let the search directions be A-conjugate, that is, p_i^t A p_k = 0 for i ≠ k. Then the algorithm will converge in at most n steps, since the initial error can be decomposed along the p's:

    e_0 = Σ_{k=0}^{n-1} δ_k p_k,   with δ_k = (p_k^t A e_0) / (p_k^t A p_k)

But this is exactly the α we choose at step k:

    α_{k+1} = (p_k^t r_k) / (p_k^t A p_k) = (p_k^t A e_k) / (p_k^t A p_k) = (p_k^t A e_0) / (p_k^t A p_k)

since the error e_k is the initial e_0 plus a combination of p_0, ..., p_{k-1}, which are all A-conjugate to p_k. Each component δ_k is then subtracted out at step k, and the method converges after n steps.

32 Choosing A-conjugate Search Directions

One way to choose p_k so that it is A-conjugate to the previous search vectors is by Gram-Schmidt:

    p_k = p_k^0 - Σ_{j=0}^{k-1} β_kj p_j,   with β_kj = ((p_k^0)^t A p_j) / (p_j^t A p_j)

The initial vectors p_k^0 should be linearly independent, for example column k+1 of the identity matrix.
Drawback: must store all previous search vectors p_j.

Conjugate Gradients is simply Conjugate Directions with a particular initial vector in Gram-Schmidt: p_k^0 = r_k. This gives orthogonal residuals, r_k^t r_j = 0 for j ≠ k, and β_kj = 0 for j < k - 1, so only the previous search direction needs to be stored.

33 Preconditioners for Linear Systems

Main idea: Instead of solving Ax = b, solve, using a nonsingular n x n preconditioner M,

    M^{-1} A x = M^{-1} b

which has the same solution x. The convergence properties are then based on M^{-1}A instead of A.

There is a trade-off between the cost of applying M^{-1} and the improvement of the convergence properties. Extreme cases:
- M = A: perfect conditioning, M^{-1}A = I, but applying M^{-1} is expensive
- M = I: applying M^{-1} = I does nothing, but there is no improvement, M^{-1}A = A

34 Preconditioned Conjugate Gradients

To keep symmetry, solve (C^{-1} A C^{-t})(C^t x) = C^{-1} b with C C^t = M. This can be written in terms of M^{-1} only, without reference to C:

Algorithm: Preconditioned Conjugate Gradients Method

    x_0 = 0, r_0 = b, p_0 = M^{-1} r_0, z_0 = p_0
    for k = 1, 2, 3, ...
        α_k = (r_{k-1}^t z_{k-1}) / (p_{k-1}^t A p_{k-1})    (step length)
        x_k = x_{k-1} + α_k p_{k-1}                          (approximate solution)
        r_k = r_{k-1} - α_k A p_{k-1}                        (residual)
        z_k = M^{-1} r_k                                     (preconditioning)
        β_k = (r_k^t z_k) / (r_{k-1}^t z_{k-1})              (improvement this step)
        p_k = z_k + β_k p_{k-1}                              (search direction)
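The preconditioned algorithm only needs the action of M^{-1} on a vector. A sketch where that action is passed in as a function; Minv_apply and the other names are my own, and A is assumed symmetric positive definite:

```python
import numpy as np

def preconditioned_cg(A, b, Minv_apply, tol=1e-10, maxiter=None):
    n = len(b)
    maxiter = n if maxiter is None else maxiter
    x = np.zeros(n)
    r = b.copy()
    z = Minv_apply(r)                      # z_0 = M^{-1} r_0
    p = z.copy()
    rz_old = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz_old / (p @ Ap)          # step length
        x += alpha * p                     # approximate solution
        r -= alpha * Ap                    # residual
        if np.linalg.norm(r) < tol:
            break
        z = Minv_apply(r)                  # apply the preconditioner
        rz_new = r @ z
        p = z + (rz_new / rz_old) * p      # new search direction
        rz_old = rz_new
    return x

# Example: Jacobi (diagonal) preconditioning, M = diag(A)
# d = np.diag(A); x = preconditioned_cg(A, b, lambda r: r / d)
```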

35 Commonly Used Preconditioners

A preconditioner should approximately solve the problem Ax = b.
- Jacobi preconditioning: M = diag(A), very simple and cheap, might improve certain problems but usually insufficient
- Block-Jacobi preconditioning: use a block-diagonal M instead of a diagonal one. Another variant uses several diagonals (e.g. tridiagonal)
- Classical iterative methods: precondition by applying one step of Jacobi, Gauss-Seidel, SOR, or SSOR
- Incomplete factorizations: perform Gaussian elimination but ignore fill; this results in approximate factors A ≈ LU or A ≈ R^t R (more later)
- Coarse-grid approximations: for a PDE discretized on a grid, a preconditioner can be formed by transferring the solution to a coarser grid, solving a smaller problem, then transferring back (multigrid)
