Lecture 16: Methods for Systems of Linear Equations (Linear Systems). Songting Luo, Department of Mathematics, Iowa State University

Lecture 16: Methods for Systems of Linear Equations (Linear Systems)
Songting Luo
Department of Mathematics, Iowa State University
MATH 481: Numerical Methods for Differential Equations

Outline
1. Direct Methods
2. Iterative Methods
3. Stationary Methods
4. Convergence of Jacobi, Gauss-Seidel, SOR

Solving the Linear System Ax = b

General Properties
Two classes of numerical methods: direct and iterative.

Direct methods:
terminate after finitely many steps
give the exact solution, except for roundoff errors
accuracy governed by the condition number
examples: Gaussian elimination, LU, QR, SVD, Cholesky, etc.

Iterative methods:
produce a sequence of approximate solutions
never produce the exact solution, except by accident
accuracy governed by the approximation error
examples: fixed point iteration, stationary methods, non-stationary methods.

Brief Review of Direct Methods We review direct methods briefly. Using Gaussian elimination, LU factorization as examples: Gaussian elimination: how it works? LU factorization; forward substitution; backward substitution. Songting Luo ( Department of Mathematics Iowa State University[0.5in] MATH481 MATH 481 Numerical Metho 4 / 19

Overview of Iterative Methods

General Schemes
x^(0): initial guess
iterative process: x^(n+1) = g(x^(n)) (stationary methods, equivalent to fixed point iterations), or x^(n+1) = g^(n)(x^(n)) (non-stationary methods).

Either way:
error: e^(n) = x^(n) - x̄ (with x̄ the true solution); e_n = ‖e^(n)‖
usual behavior: e_{n+1} ≈ c e_n^p for some constant c: convergence of order p
if g is continuous, the limit x̄ satisfies x̄ = g(x̄), i.e. x̄ is a fixed point of g.

Example: Root Finding (One-dimensional)

Solve f(x) = 0 by rewriting it as x = g(x), then iterate x^(n+1) = g(x^(n)).

Assume g is differentiable to all necessary orders (for simplicity). Then
e^(n+1) = x^(n+1) - x̄ = g(x^(n)) - g(x̄) = g'(x̄) e^(n) + (1/2) g''(x̄) (e^(n))² + ...
so that
e_{n+1} ≈ |g'(x̄)| e_n   (linear convergence)
e_n ≈ |g'(x̄)|^n e_0   (converges if |g'(x̄)| < 1).

What if g'(x̄) = 0? What about the high-dimensional case?
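
As a quick illustration (the choice g(x) = cos x is my own, not from the slides), the following sketch runs the scalar fixed point iteration; since |g'(x̄)| = |sin x̄| < 1 at the fixed point, it converges linearly.

```python
import math

def fixed_point(g, x0, tol=1e-12, maxit=100):
    """Iterate x_{n+1} = g(x_n) until successive iterates agree to within tol."""
    x = x0
    for _ in range(maxit):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Solves x = cos(x), i.e. the root of f(x) = x - cos(x).
root = fixed_point(math.cos, 1.0)
print(root, math.cos(root))   # the two numbers agree at the fixed point
```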

Stationary Methods

Stationary methods for linear systems include:
Jacobi iteration
Gauss-Seidel iteration
SOR (Successive Over-Relaxation) iteration

Basic approach: to solve Ax = b, split A = B + C with B non-singular and easy to invert. Then
(B + C)x = b
Bx = -Cx + b
x = -B⁻¹Cx + B⁻¹b,
and the resulting iteration converges if ρ(B⁻¹C) < 1.
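
A minimal sketch of this generic splitting iteration (the function name and tolerances are my own) is shown below; Jacobi, Gauss-Seidel, and SOR differ only in how A is split into B + C.

```python
import numpy as np

def splitting_iteration(B, C, b, x0, tol=1e-10, maxit=10000):
    """Generic stationary iteration for A = B + C: solve B x^(n+1) = b - C x^(n) each step."""
    x = x0.astype(float).copy()
    for _ in range(maxit):
        x_new = np.linalg.solve(B, b - C @ x)   # x^(n+1) = -B^{-1} C x^(n) + B^{-1} b
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```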

Jacobi Iteration

Write A = L + D + U, where L is strictly lower triangular, D is diagonal, and U is strictly upper triangular. Then
(L + D + U)x = b
Dx = -(L + U)x + b
x = -D⁻¹(L + U)x + D⁻¹b,
and the Jacobi iteration is given by
x^(n+1) = -D⁻¹(L + U)x^(n) + D⁻¹b.
Clearly, ρ(D⁻¹(L + U)) is relevant for convergence.
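
A minimal NumPy sketch of the Jacobi iteration (my own illustration, assuming nonzero diagonal entries): since D is diagonal, applying D⁻¹ is just an elementwise division.

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-10, maxit=10000):
    """Jacobi iteration x^(n+1) = -D^{-1}(L+U) x^(n) + D^{-1} b."""
    d = np.diag(A)                  # diagonal of A
    R = A - np.diag(d)              # L + U: everything off the diagonal
    x = x0.astype(float).copy()
    for _ in range(maxit):
        x_new = (b - R @ x) / d     # all components updated from the old iterate
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```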

Gauss-Seidel Iteration

(L + D + U)x = b
(D + L)x = -Ux + b
x = -(D + L)⁻¹Ux + (D + L)⁻¹b
Then the Gauss-Seidel iteration is given by
x^(n+1) = -(D + L)⁻¹Ux^(n) + (D + L)⁻¹b.
Clearly, ρ((D + L)⁻¹U) is relevant for convergence.
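
In practice (D + L)⁻¹ is never formed explicitly: one sweeps through the equations, reusing the components that have already been updated. A minimal sketch (my own illustration):

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-10, maxit=10000):
    """Gauss-Seidel: sweep through the equations, using the newest values of x."""
    n = len(b)
    x = x0.astype(float).copy()
    for _ in range(maxit):
        x_old = x.copy()
        for i in range(n):
            s_new = A[i, :i] @ x[:i]           # already-updated components
            s_old = A[i, i + 1:] @ x[i + 1:]   # components from the previous sweep
            x[i] = (b[i] - s_new - s_old) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            return x
    return x
```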

Jacobi vs. Gauss-Seidel

Both methods take O(n²) operations per iteration for a full matrix. They usually take O(n) iterations, giving O(n³) operations in total (no gain compared to direct methods). Therefore, such methods are usually reserved for sparse matrices.

What is the difference between Jacobi and Gauss-Seidel?

SOR Iteration

SOR is a generalization of Gauss-Seidel. Given x^(n), let x^(n+1)_GS denote the Gauss-Seidel update, so that
x^(n+1)_GS = x^(n) + (x^(n+1)_GS - x^(n)).
Define the SOR update x^(n+1)_SOR by
x^(n+1)_SOR = x^(n) + ω(x^(n+1)_GS - x^(n)) = (1 - ω)x^(n) + ω x^(n+1)_GS,
where ω is the relaxation parameter:
ω < 1: under-relaxation
ω = 1: SOR = Gauss-Seidel
ω > 1: over-relaxation

SOR: Matrix Formulation

Look at the i-th equation: Σ_{j=1}^{n} a_ij x_j = b_i. SOR updates
x_i^(n+1) = (1 - ω) x_i^(n) + (ω / a_ii) [ b_i - Σ_{j=1}^{i-1} a_ij x_j^(n+1) - Σ_{j=i+1}^{n} a_ij x_j^(n) ].
Rearranging terms gives the matrix form
x^(n+1) = (D + ωL)⁻¹[(1 - ω)D - ωU] x^(n) + ω(D + ωL)⁻¹ b.
Clearly, ρ((D + ωL)⁻¹[(1 - ω)D - ωU]) is relevant for convergence.
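
A minimal sketch of the componentwise SOR update (my own illustration): the Gauss-Seidel value is computed first and then blended with the old value using ω.

```python
import numpy as np

def sor(A, b, omega, x0, tol=1e-10, maxit=10000):
    """Componentwise SOR: blend the Gauss-Seidel update with the old value by omega."""
    n = len(b)
    x = x0.astype(float).copy()
    for _ in range(maxit):
        x_old = x.copy()
        for i in range(n):
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
            x[i] = (1.0 - omega) * x[i] + omega * gs
        if np.linalg.norm(x - x_old) < tol:
            return x
    return x
```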

Convergence of Jacobi, Gauss-Seidel, SOR

Notation
For A = L + D + U, let
J = -D⁻¹(L + U)
G = -(D + L)⁻¹U
S(ω) = (D + ωL)⁻¹[(1 - ω)D - ωU].

Goals
explicitly estimate ρ, if possible
if not possible, at least prove ρ < 1
for SOR, determine the optimal ω with minimum ρ.
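
For small test matrices these iteration matrices can be formed explicitly and their spectral radii computed numerically; a minimal sketch (my own illustration) is:

```python
import numpy as np

def iteration_matrices(A, omega):
    """Form J, G, S(omega) explicitly (only sensible for small test matrices)."""
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    J = -np.linalg.solve(D, L + U)
    G = -np.linalg.solve(D + L, U)
    S = np.linalg.solve(D + omega * L, (1.0 - omega) * D - omega * U)
    return J, G, S

def spectral_radius(M):
    """Largest eigenvalue magnitude of M."""
    return np.max(np.abs(np.linalg.eigvals(M)))
```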

Convergence

Theorem
If J is non-negative (i.e., all entries ≥ 0), then exactly one of the following occurs:
ρ(J) = ρ(G) = 0
0 < ρ(G) < ρ(J) < 1
ρ(J) = ρ(G) = 1
1 < ρ(J) < ρ(G)

Special Case
If all diagonal entries of A are positive and all off-diagonal entries are negative (so that J ≥ 0), then Jacobi and Gauss-Seidel either both work or both do not work. If they work, Gauss-Seidel is faster.

Convergence (cont'd)

Definition
A is strictly diagonally row dominant if |a_kk| > Σ_{j ≠ k} |a_kj| for all k.
A is strictly diagonally column dominant if |a_kk| > Σ_{j ≠ k} |a_jk| for all k.

Theorem
If A is strictly diagonally dominant, both Jacobi and Gauss-Seidel converge.

Proof. Use the row-dominant case as an example.
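
Strict row dominance is easy to test numerically; a minimal sketch (my own illustration):

```python
import numpy as np

def strictly_row_dominant(A):
    """Check |a_kk| > sum_{j != k} |a_kj| for every row k."""
    d = np.abs(np.diag(A))
    off = np.abs(A).sum(axis=1) - d   # sum of |a_kj| over j != k in each row
    return bool(np.all(d > off))
```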

Convergence (cont'd)

Theorem
Gauss-Seidel converges if A is real, symmetric, and positive definite.

Proof. Show that ρ < 1.

Convergence (cont'd)

Definition
Let J = -D⁻¹(L + U) be the Jacobi iteration matrix, and define
J(α) = -D⁻¹(αL + (1/α)U),  α ≠ 0.
A is consistently ordered if the eigenvalues of J(α) are independent of α.

Corollary
If A is consistently ordered, the eigenvalues of J come in pairs (λ, -λ).
Proof. Observe that J(1) = J and J(-1) = -J.

Theorem
If A is block tridiagonal, then A is consistently ordered.
Proof.
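
The definition can be checked numerically on a small example. Below is a sketch (the tridiagonal test matrix is my own choice, not from the slides) that forms J(α) for several α and prints its eigenvalues, which do not change with α.

```python
import numpy as np

def jacobi_alpha(A, alpha):
    """J(alpha) = -D^{-1}(alpha*L + U/alpha)."""
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    return -np.linalg.solve(D, alpha * L + U / alpha)

# Tridiagonal (hence consistently ordered) test matrix:
A = np.diag(2.0 * np.ones(5)) + np.diag(-np.ones(4), -1) + np.diag(-np.ones(4), 1)
for alpha in (0.5, 1.0, 2.0):
    eigs = np.sort(np.linalg.eigvals(jacobi_alpha(A, alpha)).real)
    print(alpha, eigs)   # the same eigenvalues for every alpha
```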

Convergence (cont'd)

Theorem
ρ(S(ω)) ≥ |ω - 1|. Hence SOR can only possibly work if ω ∈ (0, 2).
Proof.

Theorem
If A is consistently ordered, the eigenvalues μ of J and the eigenvalues λ ≠ 0 of S(ω) are related by
(λ + ω - 1)² = λ ω² μ².

Convergence (cont'd)

Theorem
If A is consistently ordered, the eigenvalues of J are real, and ρ(J) < 1, then
ρ(S(ω)) = 1 - ω + (1/2) ω² [ρ(J)]² + ω ρ(J) √(1 - ω + (1/4) ω² [ρ(J)]²),   if 0 ≤ ω ≤ ω_b,
ρ(S(ω)) = ω - 1,   if ω_b ≤ ω ≤ 2,
where
ω_b = 2 / (1 + √(1 - [ρ(J)]²)).
In particular, ρ(S(ω_b)) = ω_b - 1.

It is better to take ω a little larger than ω_b rather than a little smaller.
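
As a check, one can compute ω_b for a small test problem and compare ρ(S(ω_b)) with ω_b - 1; the 1-D Poisson-style tridiagonal matrix below is my own illustrative choice, not from the slides.

```python
import numpy as np

# 1-D Poisson-style tridiagonal test matrix (consistently ordered, real Jacobi eigenvalues):
n = 10
A = np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), -1) + np.diag(-np.ones(n - 1), 1)
D, L, U = np.diag(np.diag(A)), np.tril(A, -1), np.triu(A, 1)

rho_J = np.max(np.abs(np.linalg.eigvals(-np.linalg.solve(D, L + U))))   # spectral radius of J
omega_b = 2.0 / (1.0 + np.sqrt(1.0 - rho_J**2))                         # optimal relaxation parameter

S = np.linalg.solve(D + omega_b * L, (1.0 - omega_b) * D - omega_b * U)
rho_S = np.max(np.abs(np.linalg.eigvals(S)))
print(omega_b, rho_S, omega_b - 1.0)   # rho(S(omega_b)) should match omega_b - 1
```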