CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6

GENE H. GOLUB

Date: October 8, 2005, version 1.0. Notes originally due to James Lambers. Minor editing by Lek-Heng Lim.

1. Issues with Floating-Point Arithmetic

We conclude our discussion of floating-point arithmetic by highlighting two issues that frequently arise in practice.

First of all, relationships among numbers that are known to be true in exact arithmetic do not necessarily hold when using floating-point arithmetic. For example, suppose that $x > y > 0$, and that $a > 0$. Then, in exact arithmetic, $ax > ay > 0$, but in floating-point arithmetic, we can only assume that $ax \geq ay \geq 0$.

Second, the order in which floating-point arithmetic operations are performed can drastically affect the result. For example, suppose that we want to compute $e^{-x}$, where $x > 0$. Using the Taylor series for $e^{-x}$, we know that
$$e^{-x} = 1 - x + \frac{x^2}{2} - \frac{x^3}{3!} + \cdots,$$
but for sufficiently large $x$, this means obtaining small numbers by subtracting larger ones, and therefore the computation is susceptible to a phenomenon known as catastrophic cancellation, in which subtracting numbers that are nearly equal causes the loss of significant digits (since such digits in the result of the subtraction are equal to zero). An alternative approach is to compute $e^{-x} = 1/e^x$, where
$$e^x = 1 + x + \frac{x^2}{2} + \frac{x^3}{3!} + \cdots,$$
which avoids this problem. As a rule, it is best to try to avoid subtractions altogether, instead trying to add numbers that are guaranteed to have the same sign. For example, given the quadratic equation $x^2 + bx + c = 0$, we can compute the roots by applying the quadratic formula as follows:
$$x_+ = \frac{-b + \operatorname{sgn}(-b)\sqrt{b^2 - 4c}}{2}, \qquad x_- = \frac{c}{x_+},$$
where the second root follows from the fact that $x_+ x_- = c$.
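To make this concrete, here is a minimal Python sketch of both remedies described above (an illustration added here, not part of the original notes; the function names and the test value $x = 30$ are arbitrary choices).

    import math

    def exp_neg_taylor(x, terms=200):
        # Sum 1 - x + x^2/2! - x^3/3! + ... directly; for large x the
        # alternating terms grow large before they shrink, so leading
        # digits cancel catastrophically.
        s, term = 0.0, 1.0
        for k in range(terms):
            s += term
            term *= -x / (k + 1)
        return s

    def exp_neg_stable(x, terms=200):
        # Sum the series for e^x (all terms of the same sign), then invert.
        s, term = 0.0, 1.0
        for k in range(terms):
            s += term
            term *= x / (k + 1)
        return 1.0 / s

    def quadratic_roots(b, c):
        # Roots of x^2 + b x + c = 0, assuming real roots (b^2 >= 4c) and
        # x_plus != 0: the sgn(-b) factor makes the addition same-signed,
        # and the second root uses x_plus * x_minus = c.
        d = math.sqrt(b * b - 4.0 * c)
        x_plus = (-b + math.copysign(d, -b)) / 2.0
        return x_plus, c / x_plus

    print(exp_neg_taylor(30.0))   # ruined by catastrophic cancellation
    print(exp_neg_stable(30.0))   # agrees with math.exp(-30.0)
    print(math.exp(-30.0))        # reference value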

2. Rank-1 Updating and the Inverse

Suppose that we have solved the problem $Ax = b$ and we wish to solve the perturbed problem $(A + uv^T)y = b$. Such a perturbation is called a rank-one update of $A$, since the matrix $uv^T$ has rank 1. As an example, we might find that there was an error in the element $a_{11}$ and we update it with the value $\bar{a}_{11}$. We can accomplish this update by setting
$$\bar{A} = A + (\bar{a}_{11} - a_{11})e_1e_1^T, \qquad e_1 = \begin{pmatrix} 1\\ 0\\ \vdots\\ 0 \end{pmatrix}.$$

For a general rank-one update, we can use the Sherman-Morrison formula, which we will derive here. Multiplying through the equation $(A + uv^T)y = b$ by $A^{-1}$ yields
$$(I + A^{-1}uv^T)y = A^{-1}b = x.$$
We therefore need to find $(I + wv^T)^{-1}$, where $w = A^{-1}u$. We assume that $(I + wv^T)^{-1}$ is a matrix of the form $I + \sigma wv^T$, where $\sigma$ is some constant. From the relationship
$$(I + wv^T)(I + \sigma wv^T) = I,$$
we obtain
$$\sigma wv^T + wv^T + \sigma wv^Twv^T = 0.$$
However, the quantity $v^Tw$ is a scalar, so this simplifies to
$$(\sigma + 1 + \sigma v^Tw)wv^T = 0,$$
which yields
$$\sigma = -\frac{1}{1 + v^Tw}.$$
It follows that the solution $y$ to the perturbed problem is given by
$$y = (I + \sigma wv^T)x = x + \sigma(v^Tx)w,$$
and the perturbed inverse is given by
$$(A + uv^T)^{-1} = (I + A^{-1}uv^T)^{-1}A^{-1} = \left(I - \frac{1}{1 + v^Tw}\,wv^T\right)A^{-1} = A^{-1} - \frac{A^{-1}uv^TA^{-1}}{1 + v^TA^{-1}u}.$$
An efficient algorithm for solving the perturbed problem $(A + uv^T)y = b$ can therefore proceed as follows:
(1) Solve $Ax = b$.
(2) Solve $Aw = u$.
(3) Compute $\sigma = -1/(1 + v^Tw)$.
(4) Compute $y = x + \sigma(v^Tx)w$.
An alternative approach is to note that
$$(A + uv^T)^{-1} = \left[A(I + A^{-1}uv^T)\right]^{-1} = (I + \sigma A^{-1}uv^T)A^{-1} = A^{-1} + \sigma A^{-1}uv^TA^{-1},$$
which yields
$$(A + uv^T)^{-1}b = A^{-1}(I + \sigma uv^TA^{-1})b = A^{-1}\bigl(b + \sigma(v^TA^{-1}b)u\bigr),$$
and therefore we can solve $(A + uv^T)y = b$ by solving a problem of the form $Ax = b$ in which the right-hand side $b$ is perturbed.
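The four-step algorithm translates directly into code. Below is a minimal NumPy sketch, assuming $1 + v^Tw \neq 0$ (the function name and the random test data are illustrative only); in practice one would factor $A$ once, e.g. by the LU factorization of the next section, and reuse the factorization for the two solves, making each solve an $O(n^2)$ operation.

    import numpy as np

    def rank_one_update_solve(A, u, v, b):
        # Solve (A + u v^T) y = b via the Sherman-Morrison formula.
        x = np.linalg.solve(A, b)        # step (1): A x = b
        w = np.linalg.solve(A, u)        # step (2): A w = u
        sigma = -1.0 / (1.0 + v @ w)     # step (3): sigma = -1/(1 + v^T w)
        return x + sigma * (v @ x) * w   # step (4): y = x + sigma (v^T x) w

    # Quick check against forming A + u v^T explicitly.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)   # well away from singular
    u, v, b = (rng.standard_normal(4) for _ in range(3))
    y = rank_one_update_solve(A, u, v, b)
    print(np.allclose((A + np.outer(u, v)) @ y, b))     # True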

3. Gaussian Elimination

We often wish to solve
$$Ax = b,$$
where $A$ is an $m \times n$ matrix and $b$ is an $m$-vector. For now, we assume that $m = n$ and that $A$ has rank $n$. If we can write $A = PQ$, then we can solve the system $Ax = PQx = b$ by solving
$$Py = b, \qquad Qx = y.$$
Therefore we would like to find such a decomposition in which the above systems are simple to solve. We now discuss a few scenarios where this is the case.

(1) If the matrix $A$ is diagonal, then the system $Ax = b$ has the solution $x_i = b_i/a_{ii}$ for $i = 1, \ldots, n$. The solution can be computed in only $n$ divisions. Furthermore, each component of $x$ can be computed independently, and therefore the algorithm can be parallelized.

(2) If $AA^T = I$, then $Ax = b$ can be solved simply by computing the matrix-vector product $x = A^Tb$. This requires only $O(n^2)$ multiplications and additions, and can also be parallelized.

(3) If $A$ is a lower triangular matrix, i.e. if $a_{ij} = 0$ for $i < j$, then the system of equations $Ax = b$ takes the form
$$\begin{aligned}
a_{11}x_1 &= b_1\\
a_{21}x_1 + a_{22}x_2 &= b_2\\
&\;\;\vdots\\
a_{n1}x_1 + \cdots + a_{nn}x_n &= b_n,
\end{aligned}$$
which can be solved by the process of forward substitution:
$$x_1 = b_1/a_{11}, \qquad x_2 = (b_2 - a_{21}x_1)/a_{22}, \qquad \ldots, \qquad x_n = \Bigl(b_n - \sum_{j=1}^{n-1} a_{nj}x_j\Bigr)\Big/a_{nn}.$$
This algorithm cannot be parallelized, since each component $x_i$ depends on $x_j$ for $j < i$, but it is still efficient, as it requires only $O(n^2)$ multiplications and additions. In the case where $A$ is an upper triangular matrix, i.e. $a_{ij} = 0$ whenever $i > j$, a similar process known as back substitution can be used.
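A minimal NumPy sketch of forward substitution (an illustration added here, assuming a square lower triangular $L$ with nonzero diagonal), following the recurrence above one component at a time; back substitution is the mirror image, running from $x_n$ up to $x_1$.

    import numpy as np

    def forward_substitution(L, b):
        # Solve L x = b for lower triangular L with nonzero diagonal.
        n = len(b)
        x = np.zeros(n)
        for i in range(n):
            # x_i = (b_i - sum_{j < i} l_ij x_j) / l_ii
            x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
        return x

    L = np.array([[2., 0., 0.], [1., 3., 0.], [4., 5., 6.]])
    b = np.array([2., 5., 32.])
    print(np.allclose(L @ forward_substitution(L, b), b))   # True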

Note that the solution method for the problem $Ax = b$ depends on the structure of $A$: $A$ may be a sparse or dense matrix, or it may have one of many well-known structures, such as being a banded matrix or a Hankel matrix. We will consider the general case of a dense, unstructured matrix $A$, and obtain a decomposition $A = LU$, where $L$ is lower triangular and $U$ is upper triangular. This decomposition is achieved using Gaussian elimination.

We write out the system $Ax = b$ as
$$\begin{aligned}
a_{11}x_1 + \cdots + a_{1n}x_n &= b_1\\
a_{21}x_1 + \cdots + a_{2n}x_n &= b_2\\
&\;\;\vdots\\
a_{n1}x_1 + \cdots + a_{nn}x_n &= b_n.
\end{aligned}$$
We proceed by multiplying the first equation by $-a_{21}/a_{11}$ and adding it to the second equation, and in general multiplying the first equation by $-a_{i1}/a_{11}$ and adding it to equation $i$. We obtain the following equivalent system:
$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1\\
0x_1 + a^{(2)}_{22}x_2 + \cdots + a^{(2)}_{2n}x_n &= b^{(2)}_2\\
&\;\;\vdots\\
0x_1 + a^{(2)}_{n2}x_2 + \cdots + a^{(2)}_{nn}x_n &= b^{(2)}_n.
\end{aligned}$$
Continuing in this fashion, adding multiples of the second equation to each subsequent equation to make all elements below the diagonal equal to zero, we obtain an upper triangular system. This process of transforming $A$ to an upper triangular matrix $U$ is equivalent to multiplying $A$ by a sequence of matrices to obtain $U$. Specifically, we have $M_1A_1 = A_2$, where $A_1 = A$,
$$A_2 = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ 0 & a^{(2)}_{22} & \cdots & a^{(2)}_{2n}\\ \vdots & \vdots & & \vdots\\ 0 & a^{(2)}_{n2} & \cdots & a^{(2)}_{nn} \end{pmatrix}$$
and
$$M_1 = \begin{pmatrix} 1 & & & \\ -l_{21} & 1 & & \\ \vdots & & \ddots & \\ -l_{n1} & & & 1 \end{pmatrix}, \qquad l_{i1} = \frac{a_{i1}}{a_{11}}.$$
Similarly, if we define $M_2$ by
$$M_2 = \begin{pmatrix} 1 & & & \\ & 1 & & \\ & -l_{32} & \ddots & \\ & \vdots & & \\ & -l_{n2} & & 1 \end{pmatrix}, \qquad l_{i2} = \frac{a^{(2)}_{i2}}{a^{(2)}_{22}},$$
then
$$M_2A_2 = A_3 = \begin{pmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n}\\ 0 & a^{(2)}_{22} & a^{(2)}_{23} & \cdots & a^{(2)}_{2n}\\ 0 & 0 & a^{(3)}_{33} & \cdots & a^{(3)}_{3n}\\ \vdots & \vdots & \vdots & & \vdots\\ 0 & 0 & a^{(3)}_{n3} & \cdots & a^{(3)}_{nn} \end{pmatrix}.$$
In general, we have
$$M_k = \begin{pmatrix} 1 & & & & \\ & \ddots & & & \\ & & 1 & & \\ & & -l_{k+1,k} & \ddots & \\ & & \vdots & & \\ & & -l_{nk} & & 1 \end{pmatrix}$$
and
$$M_{n-1}M_{n-2}\cdots M_1A = A_n = U = \begin{pmatrix} u_{11} & \cdots & u_{1n}\\ & u_{22} & \cdots & u_{2n}\\ & & \ddots & \vdots\\ & & & u_{nn} \end{pmatrix},$$
or, equivalently,
$$A = M_1^{-1}M_2^{-1}\cdots M_{n-1}^{-1}U.$$
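The elimination matrices are easy to form explicitly. A small NumPy sketch (an illustration added here; the $3 \times 3$ matrix is an arbitrary example with nonzero pivots):

    import numpy as np

    def elimination_matrix(A, k):
        # Build M_k: the identity with -l_{ik} = -a_ik / a_kk below the
        # diagonal of column k (0-based indexing here, so k = 0 gives M_1).
        n = A.shape[0]
        M = np.eye(n)
        M[k+1:, k] = -A[k+1:, k] / A[k, k]
        return M

    A = np.array([[2., 1., 1.],
                  [4., 3., 3.],
                  [8., 7., 9.]])
    A2 = elimination_matrix(A, 0) @ A     # M_1 A = A_2: column 1 zeroed below
    A3 = elimination_matrix(A2, 1) @ A2   # M_2 A_2 = A_3 = U, upper triangular
    print(A3)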

It turns out that $M_j^{-1}$ is very easy to compute. We claim that
$$M_1^{-1} = \begin{pmatrix} 1 & & & \\ l_{21} & 1 & & \\ \vdots & & \ddots & \\ l_{n1} & & & 1 \end{pmatrix}.$$
To see this, consider the product
$$M_1M_1^{-1} = \begin{pmatrix} 1 & & & \\ -l_{21} & 1 & & \\ \vdots & & \ddots & \\ -l_{n1} & & & 1 \end{pmatrix}\begin{pmatrix} 1 & & & \\ l_{21} & 1 & & \\ \vdots & & \ddots & \\ l_{n1} & & & 1 \end{pmatrix},$$
which can easily be verified to be equal to the identity matrix. In general, we have
$$M_k^{-1} = \begin{pmatrix} 1 & & & & \\ & \ddots & & & \\ & & 1 & & \\ & & l_{k+1,k} & \ddots & \\ & & \vdots & & \\ & & l_{nk} & & 1 \end{pmatrix}.$$
Now, consider the product
$$M_1^{-1}M_2^{-1} = \begin{pmatrix} 1 & & & \\ l_{21} & 1 & & \\ \vdots & & \ddots & \\ l_{n1} & & & 1 \end{pmatrix}\begin{pmatrix} 1 & & & \\ & 1 & & \\ & l_{32} & \ddots & \\ & \vdots & & \\ & l_{n2} & & 1 \end{pmatrix} = \begin{pmatrix} 1 & & & \\ l_{21} & 1 & & \\ l_{31} & l_{32} & \ddots & \\ \vdots & \vdots & & \\ l_{n1} & l_{n2} & & 1 \end{pmatrix}.$$
It can be shown that
$$M_1^{-1}M_2^{-1}\cdots M_{n-1}^{-1} = \begin{pmatrix} 1 & & & & \\ l_{21} & 1 & & & \\ l_{31} & l_{32} & 1 & & \\ \vdots & \vdots & & \ddots & \\ l_{n1} & l_{n2} & \cdots & l_{n,n-1} & 1 \end{pmatrix}.$$
It follows that under proper circumstances, we can write $A = LU$, where
$$L = \begin{pmatrix} 1 & & & & \\ l_{21} & 1 & & & \\ l_{31} & l_{32} & 1 & & \\ \vdots & \vdots & & \ddots & \\ l_{n1} & l_{n2} & \cdots & l_{n,n-1} & 1 \end{pmatrix}, \qquad U = \begin{pmatrix} u_{11} & u_{12} & \cdots & u_{1n}\\ & u_{22} & \cdots & u_{2n}\\ & & \ddots & \vdots\\ & & & u_{nn} \end{pmatrix}.$$
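Putting the pieces together gives a bare-bones LU factorization. The sketch below (an illustration added here) stores the multipliers $l_{ik}$ directly into $L$ and assumes every pivot $a^{(k)}_{kk}$ is nonzero, i.e. the "proper circumstances" discussed next.

    import numpy as np

    def lu_no_pivot(A):
        # Factor A = L U, with L unit lower triangular and U upper
        # triangular; assumes all pivots are nonzero.
        U = A.astype(float).copy()
        n = U.shape[0]
        L = np.eye(n)
        for k in range(n - 1):
            L[k+1:, k] = U[k+1:, k] / U[k, k]               # multipliers l_ik
            U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])   # eliminate column k
        return L, np.triu(U)

    A = np.array([[2., 1., 1.], [4., 3., 3.], [8., 7., 9.]])
    L, U = lu_no_pivot(A)
    print(np.allclose(L @ U, A))   # True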

Given this decomposition, we can easily compute the determinant of $A$:
$$\det A = \det LU = \det L \det U = \prod_{i=1}^n u_{ii},$$
since $\det L = 1$.

What exactly are "proper circumstances"? We must have $a^{(k)}_{kk} \neq 0$, or we cannot proceed with the decomposition. For example, if
$$A = \begin{pmatrix} 0 & 3\\ 7 & 2 \end{pmatrix} \quad\text{or}\quad A = \begin{pmatrix} 1 & 3 & 4\\ 2 & 6 & 4\\ 2 & 9 & 3 \end{pmatrix},$$
Gaussian elimination will fail. In the first case, it fails immediately; in the second case, it fails after the subdiagonal entries in the first column are zeroed, and we find that $a^{(2)}_{22} = 0$. In general, we must have $\det A_{ii} \neq 0$ for $i = 1, \ldots, n$, where
$$A_{ii} = \begin{pmatrix} a_{11} & \cdots & a_{1i}\\ \vdots & & \vdots\\ a_{i1} & \cdots & a_{ii} \end{pmatrix},$$
for the LU factorization to exist.

How can we obtain the LU factorization for a general non-singular matrix? If $A$ is nonsingular, then some element of the first column must be nonzero. If $a_{i1} \neq 0$, then we can interchange row $i$ with row 1 and proceed. This is equivalent to multiplying $A$ by a permutation matrix $\Pi_1$ that interchanges row 1 and row $i$, that is, the identity matrix with rows 1 and $i$ interchanged:
$$\Pi_1 = \begin{pmatrix} 0 & \cdots & 1 & \cdots & 0\\ \vdots & \ddots & & & \vdots\\ 1 & & 0 & & \vdots\\ \vdots & & & \ddots & \vdots\\ 0 & \cdots & \cdots & \cdots & 1 \end{pmatrix}.$$
Thus $M_1\Pi_1A = A_2$. Then, since $A_2$ is nonsingular, some element of column 2 of $A_2$ on or below the diagonal must be nonzero. Proceeding as before, we compute $M_2\Pi_2A_2 = A_3$, where $\Pi_2$ is another permutation matrix. Continuing, we obtain
$$A = (M_{n-1}\Pi_{n-1}\cdots M_1\Pi_1)^{-1}U.$$
It can easily be shown that $\Pi A = LU$, where $\Pi$ is a permutation matrix.

Most often, $\Pi_i$ is chosen so that row $i$ is interchanged with row $j$, where
$$|a^{(i)}_{ji}| = \max_{i \leq k \leq n} |a^{(i)}_{ki}|.$$
This guarantees that $|l_{ij}| \leq 1$. This strategy is known as partial pivoting; a short sketch of it in code follows these notes. Another common strategy, complete pivoting, uses both row and column interchanges to ensure that at step $i$ of the algorithm, the element $a_{ii}$ is the largest element in absolute value of the entire submatrix obtained by deleting the first $i - 1$ rows and columns. Often, however, other criteria are used to guide pivoting, due to considerations such as preserving sparsity.

Department of Computer Science, Gates Building 2B, Room 280, Stanford, CA 94305-9025
E-mail address: golub@stanford.edu
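As referenced above, here is a minimal NumPy sketch of LU factorization with partial pivoting (an illustration added here, assuming $A$ is square and nonsingular; the row interchanges are tracked in a permutation vector $p$ rather than an explicit matrix $\Pi$). Note that the first breakdown example above now factors without difficulty.

    import numpy as np

    def lu_partial_pivot(A):
        # Factor A[p] = L U: at step k, pick the row j >= k maximizing
        # |a_jk| as the pivot row, so every multiplier satisfies |l| <= 1.
        A = A.astype(float).copy()
        n = A.shape[0]
        p = np.arange(n)
        for k in range(n - 1):
            j = k + np.argmax(np.abs(A[k:, k]))
            A[[k, j]] = A[[j, k]]   # swap rows (including stored multipliers)
            p[[k, j]] = p[[j, k]]
            A[k+1:, k] /= A[k, k]   # multipliers, stored in the zeroed column
            A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
        L = np.tril(A, -1) + np.eye(n)
        return p, L, np.triu(A)

    A = np.array([[0., 3.], [7., 2.]])   # elimination without pivoting fails here
    p, L, U = lu_partial_pivot(A)
    print(np.allclose(A[p], L @ U))      # True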