Numerical Analysis: Solutions of Systems of Linear Equations. Natasha S. Sharma, PhD


Mathematical question we are interested in answering numerically: how do we solve the linear system Ax = b for x, where A is an n x n invertible matrix and b is a vector of length n? Notation: let x denote the true solution to Ax = b. Traditional approaches: 1. Gaussian elimination with backward substitution / row-echelon form. 2. LU decomposition of A, followed by solving two smaller (triangular) linear systems. Goal: iteratively approximate x by a sequence {x_n}_{n >= 1}, built from an initial guess x_0, such that x_n converges to x as n tends to infinity.

Why do we need to approximate the solution? 1. Gaussian elimination and LU decomposition provide the exact solution only under the assumption of infinite precision, an impractical assumption when solving large systems. We need a method that takes this issue into consideration (the residual correction method). 2. It is computationally demanding to use direct methods on systems with n on the order of 10^6; iterative methods need less memory for each solve. 3. Ill-conditioned systems, that is, sensitivity of the solution of Ax = b to a change in b.

Algorithms for solving linear systems 1. Direct methods: Gaussian elimination, LU decomposition. They involve one large computational step. 2. Iterative methods: residual correction method, Jacobi method, Gauss-Seidel method (the scope of this course). Given an error tolerance ε and an initial guess vector x_0, these methods approach the solution gradually. The big advantage of iterative methods is their memory usage, which is significantly lower than that of a direct solver for the same sized problem.

Iterative Methods: Residual Correction Method Motivation: to overcome the unavoidable round-off errors in solving linear systems. Consider:
x - (800/801) y = 10
-x + y = 50.
Recall, the exact solution is x = [48010, 48060]^T. Rounding 800/801 ≈ 0.998751560549313 to three significant digits, the computed solution x^(0) is badly inaccurate. Goal: predict the error in the computed solution and correct it.
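This sensitivity is easy to see numerically. The following sketch (assuming the reconstructed system x - (800/801)y = 10, -x + y = 50) compares the exact solution with the one obtained after rounding the coefficient 800/801 to three significant digits:

```python
import numpy as np

# Exact coefficient matrix and right-hand side.
A = np.array([[1.0, -800.0 / 801.0],
              [-1.0, 1.0]])
b = np.array([10.0, 50.0])

x_exact = np.linalg.solve(A, b)  # ≈ [48010, 48060]

# Same system with 800/801 rounded to three significant digits.
A_rounded = np.array([[1.0, -0.999],
                      [-1.0, 1.0]])
x_rounded = np.linalg.solve(A_rounded, b)

# A change in the fourth digit of one coefficient moves the
# solution by thousands: the system is ill-conditioned.
print(x_exact, x_rounded)
```

The determinant of A is 1/801, so a tiny perturbation of the off-diagonal entry produces a very different solution.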

Algorithms for solving linear systems Definition (Residual) The residual is r = b - Ax̂, where x̂ is the imprecise computed solution. Definition (Error) The error is ê = x - x̂, where x is the exact solution and x̂ is the imprecise computed solution.

Residual Correction Method Relation between residual and error:
r = b - Ax̂ = Ax - Ax̂ = A(x - x̂) = Aê.
This motivates the definition of the residual correction method, where the corrected solution, say x_c, is given by
x_c = x̂ + ê = x.

Residual Correction Method
Input: x_0 = x̂, obtained by using Gaussian elimination to solve Ax = b; tolerance ε > 0.
Let r_0 = b - Ax_0.
Solve for e_0 satisfying Ae_0 = r_0.
while ||e_n|| > ε do
    x_{n+1} = x_n + e_n
    Let r_{n+1} = b - Ax_{n+1}.
    Solve for e_{n+1} satisfying Ae_{n+1} = r_{n+1}.
end
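As a minimal NumPy sketch of this loop, with single precision standing in for the slides' low-precision arithmetic (in practice one would reuse the LU factorization of A for every solve rather than refactor):

```python
import numpy as np

def residual_correction(A, b, eps=1e-12, max_iter=20):
    """Iteratively refine an imprecise solution of Ax = b."""
    # Imprecise initial solve: single precision plays the role of
    # the four-digit arithmetic on the slides.
    x = np.linalg.solve(A.astype(np.float32),
                        b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        r = b - A @ x              # residual r_n = b - A x_n
        e = np.linalg.solve(A, r)  # correction from A e_n = r_n
        x = x + e                  # x_{n+1} = x_n + e_n
        if np.max(np.abs(e)) < eps:
            break
    return x

# The rounded Hilbert-type system from the example that follows.
A = np.array([[1.0, 0.5, 0.3333],
              [0.5, 0.3333, 0.25],
              [0.3333, 0.25, 0.2]])
b = np.array([1.0, 0.0, 0.0])
x = residual_correction(A, b)
```

Each pass drives the residual toward zero, recovering the accuracy lost in the initial low-precision solve.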

Example Using a computer with four-digit precision, solving the system
x_1 + 0.5x_2 + 0.3333x_3 = 1
0.5x_1 + 0.3333x_2 + 0.25x_3 = 0
0.3333x_1 + 0.25x_2 + 0.2x_3 = 0
by Gaussian elimination yields the solution x_0 = [8.968, -35.77, 29.77]^T.

Following the residual correction algorithm, taking ε = 10^-4 and given x_0 = [8.968, -35.77, 29.77]^T,
r_0 = [-0.005341, -0.004359, -0.0005344]^T
e_0 = [0.09216, -0.5442, 0.5239]^T
x_1 = [9.060, -36.31, 30.29]^T
r_1 = [-0.000657, -0.000377, -0.0000198]^T
e_1 = [0.001707, -0.013, 0.0124]^T
x_2 = [9.062, -36.32, 30.30]^T

Jacobi Method / Method of Simultaneous Replacements Consider the following system:
9x_1 + x_2 + x_3 = 10 (1)
2x_1 + 10x_2 + 3x_3 = 19 (2)
3x_1 + 4x_2 + 11x_3 = 0 (3)
Given an initial guess x^(0) = [x_1^(0), x_2^(0), x_3^(0)]^T, we construct a sequence based on the formula:
x_1^(k+1) = (10 - x_2^(k) - x_3^(k)) / 9
x_2^(k+1) = (19 - 2x_1^(k) - 3x_3^(k)) / 10
x_3^(k+1) = -(3x_1^(k) + 4x_2^(k)) / 11.

Performance of Jacobi Iteration

 k    x_1^(k)   x_2^(k)   x_3^(k)   Error      Ratio
 0    0         0         0         2e+0
 1    1.1111    1.9       0         1e+0       0.5
 2    0.9       1.6778    -0.9939   3.22e-1    0.322
 3    1.0351    2.0182    -0.8556   1.44e-1    0.448
 4    0.9819    1.9496    -1.0162   -5.04e-2   0.349
 5    1.0074    2.0085    -0.9768   2.32e-2    0.462
 ...
 10   0.9999    1.9997    -1.0003   2.8e-4     0.382
 ...
 30   1         2         -1        3.011e-11  0.447
 31   1         2         -1        1.35e-11   0.447
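The iteration in the table is straightforward to reproduce. A minimal NumPy sketch in component form, matching the formulas above rather than optimized matrix code:

```python
import numpy as np

A = np.array([[9.0, 1.0, 1.0],
              [2.0, 10.0, 3.0],
              [3.0, 4.0, 11.0]])
b = np.array([10.0, 19.0, 0.0])

x = np.zeros(3)  # initial guess x^(0) = 0
for k in range(31):
    x_new = np.empty(3)
    # Every component uses only values from the previous iterate
    # ("simultaneous replacements").
    x_new[0] = (b[0] - A[0, 1] * x[1] - A[0, 2] * x[2]) / A[0, 0]
    x_new[1] = (b[1] - A[1, 0] * x[0] - A[1, 2] * x[2]) / A[1, 1]
    x_new[2] = (b[2] - A[2, 0] * x[0] - A[2, 1] * x[1]) / A[2, 2]
    x = x_new

print(x)  # approaches [1, 2, -1], as in the table
```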

Gauss-Seidel Method / Method of Successive Replacements For the following system,
9x_1 + x_2 + x_3 = 10
2x_1 + 10x_2 + 3x_3 = 19
3x_1 + 4x_2 + 11x_3 = 0,
given an initial guess x^(0) = [x_1^(0), x_2^(0), x_3^(0)]^T, we construct a sequence based on the Gauss-Seidel formula:
x_1^(k+1) = (10 - x_2^(k) - x_3^(k)) / 9
x_2^(k+1) = (19 - 2x_1^(k+1) - 3x_3^(k)) / 10
x_3^(k+1) = -(3x_1^(k+1) + 4x_2^(k+1)) / 11.

Performance of Gauss-Seidel Iteration

 k    x_1^(k)   x_2^(k)   x_3^(k)   Error     Ratio
 0    0         0         0         2e+0
 1    1.1111    1.6778    -0.9131   3.22e-1   0.161
 2    1.0262    1.9687    -0.9958   3.13e-2   0.097
 3    1.003     1.9981    -1.0001   3.00e-3   0.096
 4    1.0002    2.000     -1.00001  -2.24e-4  0.074
 5    1         2         -1        1.65e-5   0.074
 6    1         2         -1        2.58e-6   0.155
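A sketch of the same loop with Gauss-Seidel updates, where each component immediately uses the freshly computed values ("successive replacements"); note how much faster the error drops than for Jacobi:

```python
import numpy as np

A = np.array([[9.0, 1.0, 1.0],
              [2.0, 10.0, 3.0],
              [3.0, 4.0, 11.0]])
b = np.array([10.0, 19.0, 0.0])

x = np.zeros(3)  # initial guess x^(0) = 0
for k in range(10):
    # Update in place: x[0] and x[1] below are already (k+1)-values
    # by the time x[1] and x[2] are computed.
    x[0] = (b[0] - A[0, 1] * x[1] - A[0, 2] * x[2]) / A[0, 0]
    x[1] = (b[1] - A[1, 0] * x[0] - A[1, 2] * x[2]) / A[1, 1]
    x[2] = (b[2] - A[2, 0] * x[0] - A[2, 1] * x[1]) / A[2, 2]

print(x)  # approaches [1, 2, -1] in far fewer iterations than Jacobi
```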

General Schema: Towards Error Analysis In order to carry out the error analysis, we need to express the two iterative methods in a more compact form. This can be achieved by writing the formulas for Jacobi and Gauss-Seidel in vector-matrix format. Theorem (Vector-Matrix Format) Every linear system Ax = b can be expressed in the form Nx = b + Px, with A = N - P, where N is a nonsingular (invertible) matrix. Furthermore, the corresponding iteration method can be described as:
Nx^(k+1) = b + Px^(k), k = 0, 1, ... (4)

Work Out Example Example Write the Jacobi method applied to the linear system (1)-(3) in the vector-matrix format (4): Nx^(k+1) = b + Px^(k), b = [10, 19, 0]^T. Recall,
x_1^(k+1) = (10 - x_2^(k) - x_3^(k)) / 9
x_2^(k+1) = (19 - 2x_1^(k) - 3x_3^(k)) / 10
x_3^(k+1) = -(3x_1^(k) + 4x_2^(k)) / 11.

Solution 1. Multiply the first equation by 9, the second by 10, and the third by 11:
9x_1^(k+1) = 10 - x_2^(k) - x_3^(k)
10x_2^(k+1) = 19 - 2x_1^(k) - 3x_3^(k)
11x_3^(k+1) = -(3x_1^(k) + 4x_2^(k)).
2. Recall we want to express the formula in the form Nx^(k+1) = b + Px^(k), b = [10, 19, 0]^T. We already have b; we now need to derive the matrices N and P.
3. Obtain N first:
Nx^(k+1) = [ 9 0 0 ; 0 10 0 ; 0 0 11 ] [ x_1^(k+1) ; x_2^(k+1) ; x_3^(k+1) ] = [ 9x_1^(k+1) ; 10x_2^(k+1) ; 11x_3^(k+1) ],
the left-hand side of the linear system above!

Solution Observations
P = [ 0 -1 -1 ; -2 0 -3 ; -3 -4 0 ],  N = [ 9 0 0 ; 0 10 0 ; 0 0 11 ].
For the Jacobi iterative method, the matrices N_J and P_J stay unchanged from iteration to iteration! Notice the zero diagonal entries of P. The diagonal entries of N and A are the same! An easy way to obtain P is P = N - A.

To summarize... Since our next task is to extract the N and P characterizing the Gauss-Seidel method, we let N_J and P_J denote the matrices characterizing the vector-matrix format (4) for the Jacobi iteration. That is, for the Jacobi method solving Ax = b, N_J x^(k+1) = b + P_J x^(k), k = 0, 1, 2, ...:
1. extract the diagonal of A and denote it by N_J;
2. obtain P_J from P_J = N_J - A.
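This recipe translates directly to code. A sketch that extracts N_J and P_J for the example system and runs the matrix-form iteration (4):

```python
import numpy as np

A = np.array([[9.0, 1.0, 1.0],
              [2.0, 10.0, 3.0],
              [3.0, 4.0, 11.0]])
b = np.array([10.0, 19.0, 0.0])

N_J = np.diag(np.diag(A))  # diagonal of A
P_J = N_J - A              # P_J = N_J - A, zero diagonal

# Matrix form of the Jacobi iteration: N x^(k+1) = b + P x^(k).
x = np.zeros(3)
for k in range(50):
    x = np.linalg.solve(N_J, b + P_J @ x)
```

Since N_J is diagonal, each solve is just a componentwise division, which is why Jacobi is so cheap per iteration.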

Example (Harder than the Jacobi matrices!) Example Write the Gauss-Seidel method applied to the linear system (1)-(3) in the vector-matrix format (4): N_GS x^(k+1) = b + P_GS x^(k), b = [10, 19, 0]^T. Recall,
x_1^(k+1) = (10 - x_2^(k) - x_3^(k)) / 9
x_2^(k+1) = (19 - 2x_1^(k+1) - 3x_3^(k)) / 10
x_3^(k+1) = -(3x_1^(k+1) + 4x_2^(k+1)) / 11.

Solution We start in the usual way, multiplying the first equation by 9, the second by 10, and the third by 11 to obtain
9x_1^(k+1) = 10 - x_2^(k) - x_3^(k)
10x_2^(k+1) = 19 - 2x_1^(k+1) - 3x_3^(k)
11x_3^(k+1) = -(3x_1^(k+1) + 4x_2^(k+1)).
Remember that all terms with superscript k + 1 belong on the LHS; thus, rearranging gives us
9x_1^(k+1) = 10 - x_2^(k) - x_3^(k)
2x_1^(k+1) + 10x_2^(k+1) = 19 - 3x_3^(k)
3x_1^(k+1) + 4x_2^(k+1) + 11x_3^(k+1) = 0.
We already have b = [10, 19, 0]^T. What are N_GS and P_GS?

9x_1^(k+1) = 10 - x_2^(k) - x_3^(k)
2x_1^(k+1) + 10x_2^(k+1) = 19 - 3x_3^(k)
3x_1^(k+1) + 4x_2^(k+1) + 11x_3^(k+1) = 0.
It is easier to obtain P_GS in this case:
P_GS = [ 0 -1 -1 ; 0 0 -3 ; 0 0 0 ],
since P_GS x^(k) = [ -x_2^(k) - x_3^(k) ; -3x_3^(k) ; 0 ] (verify!).

N_GS = P_GS + A = [ 0 -1 -1 ; 0 0 -3 ; 0 0 0 ] + [ 9 1 1 ; 2 10 3 ; 3 4 11 ] = [ 9 0 0 ; 2 10 0 ; 3 4 11 ],
the lower triangular part of A!
Observations For the Gauss-Seidel iterative method too, the matrices N_GS and P_GS stay unchanged from iteration to iteration! Notice the zero diagonal entries of P_GS, a feature shared with the Jacobi matrices! The diagonal entries of N_GS and A are the same, also in common with the Jacobi matrices!
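In code, N_GS is just the lower triangle of A. A sketch using NumPy's `tril`, again running the matrix-form iteration (4):

```python
import numpy as np

A = np.array([[9.0, 1.0, 1.0],
              [2.0, 10.0, 3.0],
              [3.0, 4.0, 11.0]])
b = np.array([10.0, 19.0, 0.0])

N_GS = np.tril(A)  # lower triangular part of A, diagonal included
P_GS = N_GS - A    # strictly upper triangular part of A, signs flipped

# Gauss-Seidel in matrix form: N x^(k+1) = b + P x^(k).
x = np.zeros(3)
for k in range(25):
    x = np.linalg.solve(N_GS, b + P_GS @ x)
```

Because N_GS is lower triangular, each solve is a cheap forward substitution; this is the matrix-level reason Gauss-Seidel costs about the same per iteration as Jacobi.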

Convergence Analysis Theorem (Convergence Condition) For any iterative method Nx^(k+1) = b + Px^(k) used to solve Ax = b, the condition for convergence is ||N^-1 P|| < 1, for all choices of initial guess x_0 and right-hand side b!
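For the example system, this condition is easy to check numerically. A sketch computing ||N^-1 P|| in the infinity norm for both splittings:

```python
import numpy as np

A = np.array([[9.0, 1.0, 1.0],
              [2.0, 10.0, 3.0],
              [3.0, 4.0, 11.0]])

# Jacobi splitting: N = diag(A), P = N - A.
N_J = np.diag(np.diag(A))
M_J = np.linalg.solve(N_J, N_J - A)     # iteration matrix N^-1 P

# Gauss-Seidel splitting: N = lower triangle of A.
N_GS = np.tril(A)
M_GS = np.linalg.solve(N_GS, N_GS - A)  # iteration matrix N^-1 P

# Both infinity norms are below 1, so both iterations converge
# for this A, with Gauss-Seidel contracting faster.
print(np.linalg.norm(M_J, np.inf), np.linalg.norm(M_GS, np.inf))
```

For this A the Jacobi norm is 7/11 ≈ 0.64 while the Gauss-Seidel norm is 0.3, consistent with the faster error decay observed in the Gauss-Seidel table.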

Observations For the Jacobi method, this condition is equivalent to requiring
sum over j = 1, ..., n with j != i of |a_ij| < |a_ii|, for each i = 1, ..., n.
A matrix A = (a_ij), i, j = 1, ..., n, satisfying the above condition is called diagonally dominant.
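A sketch of this row-sum check; the example matrix from the Jacobi slides is strictly diagonally dominant, which is why both iterations converged:

```python
import numpy as np

def is_diagonally_dominant(A):
    """Strict row diagonal dominance: sum_{j != i} |a_ij| < |a_ii|."""
    diag = np.abs(np.diag(A))
    off_diag = np.sum(np.abs(A), axis=1) - diag  # row sums minus diagonal
    return bool(np.all(off_diag < diag))

A = np.array([[9.0, 1.0, 1.0],
              [2.0, 10.0, 3.0],
              [3.0, 4.0, 11.0]])
print(is_diagonally_dominant(A))  # True: 1+1 < 9, 2+3 < 10, 3+4 < 11
```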