The Behavior of Algorithms in Practice 2/21/2002. Lecture 4.


18.409 The Behavior of Algorithms in Practice, 2/21/2002. Lecture 4. Lecturer: Dan Spielman. Scribe: Matthew Lepinski.

A Gaussian Elimination Example

To solve:
$$\begin{bmatrix} \epsilon & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$
First factor the matrix to get:
$$\begin{bmatrix} \epsilon & 1 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 1/\epsilon & 1 \end{bmatrix} \begin{bmatrix} \epsilon & 1 \\ 0 & 1 - 1/\epsilon \end{bmatrix}$$
Next solve:
$$\begin{bmatrix} 1 & 0 \\ 1/\epsilon & 1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$
To get: $y_1 = 1$, $y_2 = 1 - 1/\epsilon$. Finally solve:
$$\begin{bmatrix} \epsilon & 1 \\ 0 & 1 - 1/\epsilon \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}$$
To get: $x_1 = 0$, $x_2 = 1$, which is the solution to the original system.

When viewed this way, Gaussian elimination is just LU factorization of a matrix followed by some simple substitutions.
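The following short Python sketch (my illustration, not part of the original notes) carries out exactly this recipe in exact rational arithmetic: factor once, then do a forward and a backward substitution.

    from fractions import Fraction

    def lu_no_pivot(A):
        """LU factorization without pivoting: returns (L, U) with A = LU."""
        n = len(A)
        L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
        U = [row[:] for row in A]
        for j in range(n):
            for i in range(j + 1, n):
                L[i][j] = U[i][j] / U[j][j]       # the multiplier l_{ij}
                for k in range(j, n):
                    U[i][k] -= L[i][j] * U[j][k]  # zero out entry (i, j)
        return L, U

    def forward_sub(L, b):
        """Solve L y = b for unit lower-triangular L."""
        y = []
        for i in range(len(b)):
            y.append(b[i] - sum(L[i][j] * y[j] for j in range(i)))
        return y

    def back_sub(U, y):
        """Solve U x = y for upper-triangular U."""
        n = len(y)
        x = [Fraction(0)] * n
        for i in reversed(range(n)):
            x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
        return x

    eps = Fraction(1, 10**20)
    A = [[eps, Fraction(1)], [Fraction(1), Fraction(1)]]
    b = [Fraction(1), Fraction(1)]
    L, U = lu_no_pivot(A)
    print(back_sub(U, forward_sub(L, b)))  # x = (0, 1), as computed above

In exact arithmetic this recipe works whenever no zero pivot appears; the trouble described next only arises once the arithmetic is rounded.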

Floating Point

We consider a model of floating point computation where it is possible to represent numbers of the form
$$m \cdot 2^{b - t}$$
where $m$ is an integer such that $-2^t \le m \le 2^t$. In this model, $t$ is the precision of the floating point representation. In what follows, we will ignore bounds on $b$.

Let $\epsilon_{mach}$ be defined so that $1 + \epsilon_{mach}$ is the smallest number greater than 1 which the machine can represent. The goal of a floating point computation is to provide an answer which is correct to within a factor of $(1 \pm \epsilon_{mach})$.

Consider the example in the previous section where $\epsilon < \epsilon_{mach}$. In this case, we will compute the LU factorization as
$$\begin{bmatrix} 1 & 0 \\ 1/\epsilon & 1 \end{bmatrix} \begin{bmatrix} \epsilon & 1 \\ 0 & -1/\epsilon \end{bmatrix}$$
since $1 - 1/\epsilon$ rounds to $-1/\epsilon$. This means that using Gaussian Elimination (with no pivoting) we will actually be solving the system
$$\begin{bmatrix} \epsilon & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$
and so will get the solution $x_1 = 1$, $x_2 = 1 - \epsilon$, which is nowhere near the correct solution to the original system.

Note: The matrix in the previous example is well conditioned, having a condition number of about 2.618, but we still fail miserably when doing Gaussian Elimination on this matrix.

Exercise: Do the same thing for the system
$$\begin{bmatrix} 1 & 1 \\ \epsilon & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$
You should observe that permuting the rows/columns of the matrix (pivoting) allows you to solve the system with Gaussian Elimination even when $\epsilon < \epsilon_{mach}$.
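This loss is easy to reproduce in IEEE double precision, which plays the role of the model above with $\epsilon_{mach} \approx 2.2 \times 10^{-16}$. The sketch below (mine, not from the notes) forms the rounded factors and multiplies them back together:

    import numpy as np

    eps = 1e-20                 # smaller than eps_mach, about 2.2e-16
    A = np.array([[eps, 1.0],
                  [1.0, 1.0]])

    # The factors of A, rounded to double precision.
    L = np.array([[1.0,       0.0],
                  [1.0 / eps, 1.0]])
    U = np.array([[eps, 1.0],
                  [0.0, 1.0 - 1.0 / eps]])  # 1 - 1/eps rounds to -1/eps

    # Multiplying back reveals which system we are really solving.
    print(L @ U)   # [[1e-20, 1], [1, 0]]: the (2,2) entry of A is gone

The product has a 0 where A has a 1, so the rounded factorization is the exact factorization of a very different matrix, just as the argument above predicts.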

Theorem 1 (Wilkinson) If you solve $Ax = b$ computing $\hat{L}$, $\hat{U}$ and $\hat{x}$, then there exists a $\delta A$ such that $(A + \delta A)\hat{x} = b$ and
$$\|\delta A\| \le n \, \epsilon_{mach} \left( 3 \|A\| + 5 \|\hat{L}\| \|\hat{U}\| \right)$$

The problem with the previous example is that although $A$ had small entries, $U$ had a very large entry. When doing Gaussian Elimination, we say that the growth factor is
$$\frac{\|U\|}{\|A\|}$$

Partial Pivoting

Idea: Permute the rows but not the columns such that the pivot is the largest entry in its column. Note: This is the technique used by Matlab.

At step $j$ in the Gaussian Elimination, permute the rows so that $|a_{j,j}| \ge |a_{i,j}|$ for all $i > j$. This guarantees that $\|L\| \le 1$. However, in the worst case, partial pivoting yields a growth factor of $2^{n-1}$ for an $n$ by $n$ matrix.

Complete Pivoting

Idea: Permute the rows and the columns such that the pivot is the largest entry in the matrix. Wilkinson proved that Complete Pivoting guarantees that
$$\frac{\|U\|}{\|A\|} \le n^{\frac{1}{2}\log(n)}$$
However, it is conjectured that the growth factor can be upper bounded by something closer to $n$.
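The $2^{n-1}$ worst case for partial pivoting is attained by a classical family of matrices due to Wilkinson: ones on the diagonal and in the last column, $-1$ everywhere below the diagonal. A small sketch (my own; the helper name is made up) measures the growth directly:

    import numpy as np

    def growth_partial_pivoting(A):
        """Run GE with partial pivoting; return max|U| / max|A|."""
        U = A.astype(float).copy()
        n = U.shape[0]
        max_A = np.abs(U).max()
        for j in range(n - 1):
            p = j + np.abs(U[j:, j]).argmax()  # largest entry in column j
            U[[j, p]] = U[[p, j]]              # swap it into the pivot position
            for i in range(j + 1, n):
                U[i, j:] -= (U[i, j] / U[j, j]) * U[j, j:]
        return np.abs(U).max() / max_A

    n = 20
    W = np.eye(n) - np.tril(np.ones((n, n)), -1)  # 1 on diagonal, -1 below
    W[:, -1] = 1.0                                # last column all ones
    print(growth_partial_pivoting(W))             # 524288.0 = 2^(n-1)

Each elimination step doubles the entries in the last column, so the final pivot is $2^{n-1}$. Matrices like this are essentially never met in practice, which is the puzzle the smoothed analysis at the end of the lecture addresses.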

Unfortunately, using complete pivoting requires about twice as many floating point operations as partial pivoting. Therefore, since partial pivoting works well in practice, complete pivoting is hardly ever used.

Sometimes You Don't Need to Pivot

1. If $A$ is diagonally dominant then it is possible to bound the size of the entries in $L$.
2. If $A$ is positive definite then it is possible to bound the size of the entries in $U$.

Both of these conditions are very nice to have, and in practice both of them show up quite often.

Definition 2 A matrix $A$ is (column-wise) diagonally dominant if for all $j$,
$$|a_{j,j}| \ge \sum_{i \ne j} |a_{i,j}|$$

Theorem 3 If $A$ is (column-wise) diagonally dominant, then $|l_{i,j}| \le 1$. Equivalently, if $A$ is diagonally dominant then one does not permute when using partial pivoting.

Proof After the $k$-th round of Gaussian Elimination, we refer to the $(n-k)$ by $(n-k)$ matrix in the lower right corner as $A^{(k)}$. It suffices to prove that all of the $A^{(k)}$ are diagonally dominant. We will show that $A^{(1)}$ is diagonally dominant; a straightforward inductive argument can then be used to show that all of the $A^{(k)}$ are diagonally dominant.

Claim 4 $A^{(1)}$ is diagonally dominant.

Let
$$A = \begin{bmatrix} a_{1,1} & w^T \\ v & B \end{bmatrix}$$
Then one step of Gaussian Elimination yields
$$\begin{bmatrix} 1 & 0 \\ \frac{v}{a_{1,1}} & I \end{bmatrix} \begin{bmatrix} a_{1,1} & w^T \\ 0 & B - \frac{v w^T}{a_{1,1}} \end{bmatrix}$$

Therefore, $A^{(1)} = B - \frac{v w^T}{a_{1,1}}$. Let $a^{(1)}_{i,j}$ be the $i,j$ entry of $A^{(1)}$. It suffices to show that
$$|a^{(1)}_{j,j}| \ge \sum_{i \ne j} |a^{(1)}_{i,j}|$$
We know that
$$\sum_{i \ne j} |a^{(1)}_{i,j}| = \sum_{i \ne j} \left| b_{i,j} - \frac{v_i w_j}{a_{1,1}} \right| \le \sum_{i \ne j} |b_{i,j}| + \frac{|w_j|}{|a_{1,1}|} \sum_{i \ne j} |v_i|$$
Since $A$ is diagonally dominant, $\sum_{i \ne j} |b_{i,j}| \le |b_{j,j}| - |w_j|$ and $\sum_{i \ne j} |v_i| \le |a_{1,1}| - |v_j|$. It follows that
$$\sum_{i \ne j} |a^{(1)}_{i,j}| \le \left( |b_{j,j}| - |w_j| \right) + \frac{|w_j|}{|a_{1,1}|} \left( |a_{1,1}| - |v_j| \right) = |b_{j,j}| - \frac{|v_j| |w_j|}{|a_{1,1}|} \le \left| b_{j,j} - \frac{v_j w_j}{a_{1,1}} \right| = |a^{(1)}_{j,j}|$$

Definition 5 $A$ is positive definite if $A$ is symmetric and for all $x \ne 0$, $x A x^T > 0$.

Exercise: The above definition is equivalent to each of the following:
1. All eigenvalues of $A$ are positive.
2. All principal submatrices of $A$ are positive definite.

Exercise: The eigenvalues of $A_{1..n-1,1..n-1}$ interlace the eigenvalues of $A$.

Additionally, the following two facts are implied by Item 2 above: the diagonal entries of $A$ are positive, and the entry with the largest absolute value lies on the diagonal.

Theorem 6 If $A$ is positive definite, then $\|A^{(k)}\| \le \|A\|$. Note: This implies that $\|U\| \le \|A\|$.

Proof We first prove that the $A^{(k)}$ are positive definite. We will show that $A^{(1)}$ is positive definite; a straightforward inductive argument can then be used to show that all of the $A^{(k)}$ are positive definite.

It is easy to see that $A^{(1)}$ is symmetric, and so it suffices to show that $x A^{(1)} x^T > 0$ for all $x \ne 0$. Let
$$A = \begin{bmatrix} a_{1,1} & v^T \\ v & B \end{bmatrix}$$
and recall that $A^{(1)} = B - \frac{v v^T}{a_{1,1}}$. Therefore,
$$x A x^T - (x_2 \dots x_n) A^{(1)} (x_2 \dots x_n)^T = a_{1,1} x_1^2 + 2 x_1 \sum_{i=2}^{n} v_i x_i + \sum_{i,j=2}^{n} b_{i,j} x_i x_j - \sum_{i,j=2}^{n} \left( b_{i,j} - \frac{v_i v_j}{a_{1,1}} \right) x_i x_j$$
Therefore, by cancellation,
$$x A x^T - (x_2 \dots x_n) A^{(1)} (x_2 \dots x_n)^T = a_{1,1} x_1^2 + 2 x_1 \sum_{i=2}^{n} v_i x_i + \frac{1}{a_{1,1}} \left( \sum_{i=2}^{n} v_i x_i \right)^2 = a_{1,1} \left( x_1 + \frac{1}{a_{1,1}} \sum_{i=2}^{n} v_i x_i \right)^2$$
This means that for any $x_2 \dots x_n$, setting
$$x_1 = -\frac{1}{a_{1,1}} \sum_{i=2}^{n} v_i x_i$$
yields $x A x^T = (x_2 \dots x_n) A^{(1)} (x_2 \dots x_n)^T$. Therefore, if $A^{(1)}$ is not positive definite, then neither is $A$.

Now all that remains to be shown is that $\|A^{(1)}\| \le \|A\|$. This will follow from the two facts that were previously observed about positive definite matrices. (We repeat them here for convenience: the diagonal entries are positive, and the entry with the largest absolute value lies on the diagonal.) Since $A^{(1)}$ is positive definite, we know that the largest entry of $A^{(1)}$ is $a^{(1)}_{j,j}$ for some $j$, and
$$0 < a^{(1)}_{j,j} = b_{j,j} - \frac{v_j^2}{a_{1,1}} \le b_{j,j} = a_{j,j} \le \|A\|$$
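Both theorems (and the note after Theorem 6) are easy to sanity-check numerically. In the sketch below (my own construction, not from the notes), the diagonally dominant matrix is built by putting each column's absolute sum on the diagonal, and the positive definite one is $C C^T + I$:

    import numpy as np

    def lu_track_growth(A):
        """GE with no pivoting; returns (L, U, largest entry of any A^(k))."""
        n = A.shape[0]
        U = A.astype(float).copy()
        L = np.eye(n)
        peak = np.abs(U).max()
        for j in range(n - 1):
            L[j+1:, j] = U[j+1:, j] / U[j, j]
            U[j+1:, j:] -= np.outer(L[j+1:, j], U[j, j:])
            peak = max(peak, np.abs(U[j+1:, j+1:]).max())
        return L, U, peak

    rng = np.random.default_rng(0)
    n = 50

    # Column-wise diagonally dominant matrix.
    M = rng.standard_normal((n, n))
    np.fill_diagonal(M, np.abs(M).sum(axis=0))
    L, U, _ = lu_track_growth(M)
    print(np.abs(L).max())          # Theorem 3: multipliers stay <= 1

    # Positive definite matrix.
    C = rng.standard_normal((n, n))
    B = C @ C.T + np.eye(n)
    L, U, peak = lu_track_growth(B)
    print(peak <= np.abs(B).max())  # Theorem 6: entries of A^(k) never grow
    print(np.abs(U).max() <= np.abs(B).max())  # hence ||U|| <= ||A||

The first print gives 1.0 and the other two give True, matching the claim that no pivoting is needed for either class of matrices.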

A Smoothed Analysis Theorem

Theorem 7 Let $\bar{A}$ be any matrix with $\|\bar{A}\| \le 1$ and let $A = \bar{A} + G$, where $G$ is a Gaussian random matrix with variance $\sigma^2$. Write $A = LU$. Then, with high probability,
$$\|U\| \le \frac{4 n^7 \log(n)}{\sigma} \quad \text{and} \quad \|L\| \le \frac{4 n^7 \log(n)}{\sigma}$$

We will prove this theorem during the next lecture.
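The phenomenon behind the theorem is easy to observe experimentally. The sketch below (my own; the test matrix, sizes, and the use of scipy.linalg.lu for the partial-pivoting factorization are illustrative choices, not from the lecture) perturbs the worst-case matrix for partial pivoting and measures the growth factor:

    import numpy as np
    from scipy.linalg import lu

    rng = np.random.default_rng(1)
    n = 60

    # Wilkinson's worst case for partial pivoting: growth 2^(n-1).
    W = np.eye(n) - np.tril(np.ones((n, n)), -1)
    W[:, -1] = 1.0

    for sigma in [0.0, 1e-6, 1e-3]:
        A = W + sigma * rng.standard_normal((n, n))
        _, L, U = lu(A)  # LU with partial pivoting
        print(sigma, np.abs(U).max() / np.abs(A).max())

With sigma = 0 the growth is about 5.8e17, that is, $2^{59}$; even tiny perturbations collapse it by many orders of magnitude, which is exactly the behavior the theorem quantifies.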