Direct Methods for Solving Linear Equation Systems

REVIEW Lecture 5: Systems of Linear Equations

Spring 2015, Lecture 6: Direct Methods for Solving Linear Equation Systems
- Determinants and Cramer's Rule
- Gauss Elimination Algorithm
  - Forward Elimination / Reduction to Upper Triangular System: $O(n^3/3)$ operations
  - Back-Substitution: $O(n^2/2)$ operations
  - Numerical implementation and stability: Partial Pivoting, Equilibration, Full Pivoting
  - Well suited for dense matrices; issues: round-off, cost, does not vectorize/parallelize well
  - Special cases, multiple RHSs, operation count: $O(n^3/3 + pn^2)$ overall for $p$ right-hand sides, of which $O(pn^2)$ for the substitution steps

PFJL Lecture 6, 1

TODAY's Lecture: Systems of Linear Equations II

Direct Methods
- Cramer's Rule
- Gauss Elimination Algorithm
  - Numerical implementation and stability: Partial Pivoting, Equilibration, Full Pivoting
  - Well suited for dense matrices; issues: round-off, cost, does not vectorize/parallelize well
  - Special cases, multiple right-hand sides, operation count
- LU decomposition/factorization
- Error Analysis for Linear Systems
  - Condition Number
- Special Matrices: Tri-diagonal systems

Iterative Methods
- Jacobi's method
- Gauss-Seidel iteration
- Convergence

PFJL Lecture 6, 2

Reading Assignment

Chapters 9 and 10 of Chapra and Canale, Numerical Methods for Engineers, 2006/2010/2014. Any chapter on solving linear systems of equations in the CFD references we provided; for example, Chapter 5 of J. H. Ferziger and M. Peric, Computational Methods for Fluid Dynamics, Springer, NY, 3rd edition, 2002.

PFJL Lecture 6, 3

LU Decomposition/Factorization

LU decomposition separates the time-consuming elimination of $A$ from the manipulations of the right-hand side $b$ (or $B$, for multiple right-hand sides). The coefficient matrix $A$ is decomposed as

$$ A = LU $$

where $L$ is a lower triangular matrix and $U$ is an upper triangular matrix. The solution of $Ax = LUx = b$ is then performed in two simple steps:

1. Forward substitution: solve $Ly = b$ for $y$
2. Back substitution: solve $Ux = y$ for $x$

How to determine $L$ and $U$?

PFJL Lecture 6, 4
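A minimal sketch of the two-step solve, assuming the factors $L$ (unit lower triangular) and $U$ are already known (how to obtain them is the subject of the next slides); the name lu_solve and its signature are illustrative, not from the slides:

    % Solve A x = b given A = L*U (no pivoting), L with unit diagonal.
    function x = lu_solve(L, U, b)
        n = length(b);
        y = zeros(n,1);
        for i = 1:n                       % forward substitution: L y = b
            y(i) = b(i) - L(i,1:i-1)*y(1:i-1);
        end
        x = zeros(n,1);
        for i = n:-1:1                    % back substitution: U x = y
            x(i) = (y(i) - U(i,i+1:n)*x(i+1:n)) / U(i,i);
        end
    end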

LU Decomposition/Factorization via Gauss Elimination, assuming no pivoting is needed

Gauss Elimination (GE): the iteration equations for the reduction at step $k$ are

$$ m_{ik} = \frac{a_{ik}^{(k)}}{a_{kk}^{(k)}}, \qquad a_{ij}^{(k+1)} = a_{ij}^{(k)} - m_{ik}\, a_{kj}^{(k)}, \qquad i, j > k $$

This gives the final changes occurring in reduction steps $k = 1$ to $k = i-1$. After reduction step $i-1$:

- Above and on the diagonal ($j \ge i$): entries are unchanged after step $i-1$, i.e., $a_{ij}^{(i)} = a_{ij}^{(i+1)} = \dots$
- Below the diagonal ($i > j$): entries become, and remain, 0 in step $j$: $a_{ij}^{(j+1)} = 0$

PFJL Lecture 6, 5

LU Decomposition/Factorization via Gauss Elimination, assuming no pivoting is needed (continued)

Now, to evaluate the changes accumulated since the start of the elimination, sum this iteration equation over the reduction steps:

- from $k = 1$ to $i-1$ for entries above and on the diagonal;
- from $k = 1$ to $j$ for entries below the diagonal.

As done in class, you can also sum up to an arbitrary step $r$ and see which terms remain.

PFJL Lecture 6, 6

LU Decomposition/Factorization via Gauss Elimination, assuming no pivoting is needed (continued)

Summing the step-$k$ equations from $k = 1$ to $i-1$ gives the total change above and on the diagonal:

$$ a_{ij}^{(i)} = a_{ij} - \sum_{k=1}^{i-1} m_{ik}\, a_{kj}^{(k)}, \qquad j \ge i $$

Summing the step-$k$ equations from $k = 1$ to $j$, and using $a_{ij}^{(j+1)} = 0$, gives the total change below the diagonal:

$$ 0 = a_{ij} - \sum_{k=1}^{j} m_{ik}\, a_{kj}^{(k)}, \qquad i > j $$

PFJL Lecture 6, 7

LU Decomposition/Factorization via Gauss Elimination, assuming no pivoting is needed (continued)

Summary: summing the changes in reduction steps $k = 1$ to $k = i-1$, we obtained the total changes

$$ a_{ij} = \sum_{k=1}^{i-1} m_{ik}\, a_{kj}^{(k)} + a_{ij}^{(i)}, \qquad j \ge i \qquad (1) $$

$$ a_{ij} = \sum_{k=1}^{j} m_{ik}\, a_{kj}^{(k)}, \qquad i > j \qquad (2) $$

Now, if we define

$$ u_{kj} = a_{kj}^{(k)} \;\; (j \ge k), \qquad \ell_{ik} = m_{ik} \;\; (i > k), \qquad \ell_{ii} = 1, $$

and use them in equations (1) and (2) =>

PFJL Lecture 6, 8

LU Decomposition/Factorization via Gauss Elimination, assuming no pivoting is needed (continued)

The result is a matrix product, with the sum stopping at the diagonal:

$$ a_{ij} = \sum_{k=1}^{i} \ell_{ik}\, u_{kj} \;\; (j \ge i, \text{ above and on the diagonal}), \qquad a_{ij} = \sum_{k=1}^{j} \ell_{ik}\, u_{kj} \;\; (i > j, \text{ below the diagonal}) $$

In both cases $a_{ij} = \sum_{k=1}^{\min(i,j)} \ell_{ik}\, u_{kj}$: exactly the product of a lower triangular matrix ($\ell_{ik} = 0$ for $k > i$) with an upper triangular matrix ($u_{kj} = 0$ for $k > j$), i.e., $A = LU$.

PFJL Lecture 6, 9

LU Decomposition/Factorization via Gauss Elimination, assuming no pivoting is needed

GE reduction directly yields the LU factorization. Compact storage: no additional memory is needed (the unit diagonal of $L$ does not need to be stored). The multipliers $m_{ij}$ fill the below-diagonal positions they zeroed out, and $U$ occupies the diagonal and above:

- Lower triangular: $\ell_{ij} = m_{ij}$ for $i > j$, with the unit diagonal implied (referred to as the Doolittle decomposition)
- Upper triangular: $u_{ij} = a_{ij}^{(i)}$ for $j \ge i$

Number of operations for LU?

PFJL Lecture 6, 10

Number of operations for LU? The same as for Gauss elimination overall: fewer in the elimination phase (no right-hand-side operations), but more in the two-substitution (forward + back) phase.

PFJL Lecture 6, 11
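A minimal sketch of this compact, in-place Doolittle factorization (the name lu_compact is illustrative):

    % In-place Doolittle LU via Gauss elimination, no pivoting.
    % On return, U sits on/above the diagonal of A and the multipliers
    % (L minus its implied unit diagonal) sit below it.
    function A = lu_compact(A)
        n = size(A,1);
        for k = 1:n-1
            for i = k+1:n
                A(i,k) = A(i,k) / A(k,k);                      % multiplier m_ik, stored in place
                A(i,k+1:n) = A(i,k+1:n) - A(i,k)*A(k,k+1:n);   % eliminate a_ik
            end
        end
    end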

Pivoting in LU Decomposition/Factorization

Before the reduction at step $k$: if the pivot $a_{kk}^{(k)}$ is zero or too small, interchange rows $i$ and $k$ (choosing the row $i$ with the largest available pivot element). To do this interchange of rows $i$ and $k$ without physically moving data, use a pivot-element vector $p$: initialize $p_i = i$, then swap the entries $p_k$ and $p_i$ (via a dummy variable) instead of swapping the rows themselves. In the forward-substitution step $k$, the code then uses b(p(k)), which amounts to applying the same row interchanges to the right-hand side.

PFJL Lecture 6, 12
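A minimal sketch of this bookkeeping, assuming partial pivoting by largest magnitude; the name lu_pivot and the details are illustrative, not the course's code:

    % LU with partial pivoting recorded in a pivot vector p;
    % the rows of A are never physically swapped.
    function [A, p] = lu_pivot(A)
        n = size(A,1);
        p = (1:n)';
        for k = 1:n-1
            [~, r] = max(abs(A(p(k:n), k)));       % best pivot among remaining rows
            r = r + k - 1;
            tmp = p(k); p(k) = p(r); p(r) = tmp;   % swap pivot entries, not rows
            for i = k+1:n
                A(p(i),k) = A(p(i),k) / A(p(k),k);
                A(p(i),k+1:n) = A(p(i),k+1:n) - A(p(i),k)*A(p(k),k+1:n);
            end
        end
    end
    % The substitution phase then accesses b(p(k)) instead of b(k).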

LU Decomposition/Factorization: Variations

- Doolittle decomposition: $\ell_{ii} = 1$ (implied, but could be stored in $L$)
- Crout decomposition: directly imposes a diagonal of 1's on $U$ (instead of $L$); sweeps both by columns and rows (columns for $L$, rows for $U$); reduces storage needs since each element of $A$ is employed only once
- Matrix inverse: $AX = I$ => $(LU)X = I$

Number of operations: $\frac{2n^3}{3}$ for the LU decomposition, plus $pn^2$ for the forward substitutions and $pn^2$ for the backward substitutions with $p$ right-hand sides; for the inverse, $p = n$, giving $O\!\left(\frac{8n^3}{3}\right)$.

PFJL Lecture 6, 13
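A minimal sketch of the inverse computed this way, reusing the lu_compact and lu_solve sketches above (both illustrative helpers, not course code):

    % Matrix inverse via LU: solve (L U) X = I column by column.
    function X = inv_via_lu(A)
        n = size(A,1);
        LU = lu_compact(A);                  % one O(n^3) factorization...
        L = tril(LU,-1) + eye(n);
        U = triu(LU);
        X = zeros(n);
        I = eye(n);
        for j = 1:n
            X(:,j) = lu_solve(L, U, I(:,j)); % ...then n cheap triangular solves
        end
    end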

Recall Lecture 3: The Condition Number

The condition of a mathematical problem relates to its sensitivity to changes in its input values. A computation is numerically unstable if the uncertainties of the input values are magnified by the numerical method.

Considering $x$ and $f(x)$, the condition number is the ratio of the relative error in $f(x)$ to that in $x$. Using a first-order Taylor series, $f(\tilde{x}) \approx f(x) + f'(x)(\tilde{x} - x)$:

Relative error in $f(x)$: $\dfrac{f(\tilde{x}) - f(x)}{f(x)} \approx \dfrac{f'(x)(\tilde{x} - x)}{f(x)}$

Relative error in $x$: $\dfrac{\tilde{x} - x}{x}$

Condition number = ratio of relative errors: $K_p = \left| \dfrac{x\, f'(x)}{f(x)} \right|$

PFJL Lecture 6, 14
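A small numerical illustration of $K_p$; the choice $f(x) = \tan(x)$ is an assumption for illustration, not from the slides:

    % Scalar condition number K_p = |x f'(x) / f(x)| for f(x) = tan(x),
    % which is ill-conditioned near x = pi/2.
    f  = @(x) tan(x);
    fp = @(x) sec(x).^2;                 % f'(x)
    Kp = @(x) abs(x .* fp(x) ./ f(x));
    Kp(1.57)    % huge: tiny input errors are strongly amplified
    Kp(0.5)     % moderate: well-conditioned region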

Linear Systems of Equations: Error Analysis

For a function of one variable, the condition number $K$ is a measure of the amplification of the relative error by the function $f(x)$. For linear systems $Ax = b$, the analogous question is: how does the relative error of $x$ depend on errors in $b$? This leads to the condition number of a matrix.

Example: using MATLAB with different $b$'s (see tbt8.m), small changes in $b$ give large changes in $x$. The system is ill-conditioned.

PFJL Lecture 6, 15
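A minimal sketch in the spirit of tbt8.m (whose exact contents are not shown here), using the 2-by-2 matrix from the example a few slides below:

    % Perturb b slightly; the solution of the ill-conditioned system moves a lot.
    A  = [1.0 1.0; 1.0 1.0001];
    b  = [1; 2];
    db = [0; 1e-4];                % tiny perturbation of the right-hand side
    x1 = A \ b;
    x2 = A \ (b + db);
    disp([x1 x2])                  % solutions differ by O(1), not O(1e-4)
    cond(A, inf)                   % ~4e4, which explains the amplification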

Linear Systems of Equations: Norms

The evaluation of condition numbers requires the use of vector and matrix norms. Properties:

- $\|A\| \ge 0$, and $\|A\| = 0$ only if $A = 0$
- $\|\alpha A\| = |\alpha|\,\|A\|$
- $\|A + B\| \le \|A\| + \|B\|$ (triangle inequality)
- Sub-multiplicative/associative: $\|AB\| \le \|A\|\,\|B\|$

($n$-by-$n$ matrices with such norms form a Banach algebra/space.)

PFJL Lecture 6, 16

Examples of Matrix Norms

Maximum column sum: $\|A\|_1 = \displaystyle\max_{1 \le j \le n} \sum_{i=1}^{n} |a_{ij}|$

Maximum row sum: $\|A\|_\infty = \displaystyle\max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}|$

The Frobenius norm (also called Euclidean norm), $\|A\|_F = \sqrt{\sum_i \sum_j a_{ij}^2}$, which for matrices differs from:

The $l$-2 norm (also called spectral norm), $\|A\|_2 = \sqrt{\lambda_{\max}(A^T A)} = \sigma_{\max}(A)$

PFJL Lecture 6, 17
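All four norms are built into MATLAB's norm; a quick check on a small matrix:

    A = [1 -2; 3 4];
    norm(A, 1)       % max column sum: |-2|+|4| = 6
    norm(A, inf)     % max row sum:    |3|+|4|  = 7
    norm(A, 'fro')   % Frobenius norm: sqrt(1+4+9+16) = sqrt(30)
    norm(A, 2)       % spectral norm: largest singular value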

Linear Systems of Equations, Error Analysis: Perturbed Right-Hand Side

A perturbed right-hand side, $A(x + \delta x) = b + \delta b$, implies, after subtracting the original equation $Ax = b$:

$$ A\,\delta x = \delta b \quad \Rightarrow \quad \delta x = A^{-1}\,\delta b $$

The norm properties give $\|\delta x\| \le \|A^{-1}\|\,\|\delta b\|$ and $\|b\| \le \|A\|\,\|x\|$, hence the relative error magnification

$$ \frac{\|\delta x\|}{\|x\|} \le \|A\|\,\|A^{-1}\|\, \frac{\|\delta b\|}{\|b\|}, $$

with condition number $\kappa(A) = \|A\|\,\|A^{-1}\|$.

PFJL Lecture 6, 18

Linear Systems of Equations, Error Analysis: Perturbed Coefficient Matrix

A perturbed coefficient matrix, $(A + \delta A)(x + \delta x) = b$, implies, after subtracting the unperturbed equation and neglecting the 2nd-order term $\delta A\,\delta x$:

$$ A\,\delta x + \delta A\, x \approx 0 \quad \Rightarrow \quad \delta x \approx -A^{-1}\,\delta A\, x $$

The norm properties then give the relative error magnification

$$ \frac{\|\delta x\|}{\|x\|} \le \|A^{-1}\|\,\|\delta A\| = \|A\|\,\|A^{-1}\|\, \frac{\|\delta A\|}{\|A\|} = \kappa(A)\, \frac{\|\delta A\|}{\|A\|} $$

PFJL Lecture 6, 19
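A quick numerical check of this bound on the ill-conditioned example below; the random perturbation is an illustrative choice, and the bound holds to first order:

    % Check ||dx||/||x|| <= cond(A) * ||dA||/||A||.
    A  = [1.0 1.0; 1.0 1.0001];
    b  = [1; 2];
    dA = 1e-6 * randn(2);                % small random matrix perturbation
    x  = A \ b;
    dx = (A + dA) \ b - x;
    lhs = norm(dx) / norm(x)
    rhs = cond(A) * norm(dA) / norm(A)   % observe lhs <= rhs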

Example: Ill-Conditioned System, 4-Digit Arithmetic (tbt6.m)

Here n = 4, a = [[1.0 1.0]' [1.0 1.0001]'], b = [1 2]'. Using Cramer's rule, the exact solution is $x_1 = -9999$, $x_2 = 10000$. The script estimates the condition number from the maximum row sums of $A$ and $A^{-1}$, then eliminates with 4-digit rounded arithmetic:

    n = 4;
    a = [ [1.0 1.0]' [1.0 1.0001]' ];
    b = [1 2]';

    ai = inv(a);
    a_nrm  = max( abs(a(1,1))  + abs(a(1,2)),  abs(a(2,1))  + abs(a(2,2)) )
    ai_nrm = max( abs(ai(1,1)) + abs(ai(1,2)), abs(ai(2,1)) + abs(ai(2,2)) )
    k = a_nrm * ai_nrm
    r = ai * b

    x = [0 0];
    m21 = a(2,1)/a(1,1);
    a(2,1) = 0;
    a(2,2) = radd(a(2,2), -m21*a(1,2), n);
    b(2)   = radd(b(2),   -m21*b(1),   n);
    x(2) = b(2)/a(2,2);
    x(1) = (radd(b(1), -a(1,2)*x(2), n))/a(1,1);
    x'

Ill-conditioned system.

PFJL Lecture 6, 20
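The helper radd is not shown on the slides; a minimal sketch of what it presumably does, rounding the sum to n significant decimal digits (an assumption, not the course's actual code):

    % Rounded addition: add x and y, keep n significant decimal digits.
    function s = radd(x, y, n)
        s = x + y;
        if s ~= 0
            e = floor(log10(abs(s)));            % exponent of leading digit
            s = round(s / 10^(e-n+1)) * 10^(e-n+1);
        end
    end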

Example: Better-Conditioned System, 4-Digit Arithmetic (tbt7.m)

Here n = 4, a = [[0.0001 1.0]' [1.0 1.0]'], b = [1 2]'. The same script as above (Cramer's rule for the exact solution, norm estimates of the condition number, elimination with 4-digit rounded arithmetic), with the new matrix:

    n = 4;
    a = [ [0.0001 1.0]' [1.0 1.0]' ];
    b = [1 2]';

    ai = inv(a);
    a_nrm  = max( abs(a(1,1))  + abs(a(1,2)),  abs(a(2,1))  + abs(a(2,2)) )
    ai_nrm = max( abs(ai(1,1)) + abs(ai(1,2)), abs(ai(2,1)) + abs(ai(2,2)) )
    k = a_nrm * ai_nrm
    r = ai * b

    x = [0 0];
    m21 = a(2,1)/a(1,1);
    a(2,1) = 0;
    a(2,2) = radd(a(2,2), -m21*a(1,2), n);
    b(2)   = radd(b(2),   -m21*b(1),   n);
    x(2) = b(2)/a(2,2);
    x(1) = (radd(b(1), -a(1,2)*x(2), n))/a(1,1);
    x'

Relatively well-conditioned system.

PFJL Lecture 6, 21


MIT OpenCourseWare
http://ocw.mit.edu

2.29 Numerical Fluid Mechanics
Spring 2015

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.