CHAPTER 6. Direct Methods for Solving Linear Systems


Introduction

A direct method for approximating the solution of a system of n linear equations in n unknowns is one that gives the exact solution to the system, if it is assumed that all calculations can be performed without round-off error effects. We consider the role of finite-digit arithmetic error in the approximation to the solution of the system, and how to arrange the calculations to minimize its effect. Note that we assume that we are working on a system with a unique solution.

Gaussian Elimination

Two systems of equations are equivalent systems if they have the same solution set. The following three operations on a system of equations result in an equivalent system.

Operations on Systems of Equations Yielding an Equivalent System
(1) Equation E_i can be multiplied by any nonzero constant λ, with the resulting equation used in place of E_i. This operation is denoted (λE_i) → (E_i).
(2) Equation E_j can be multiplied by any constant λ and added to equation E_i, with the resulting equation used in place of E_i. This operation is denoted (E_i + λE_j) → (E_i).
(3) Equations E_i and E_j can be transposed in order. This operation is denoted (E_i) ↔ (E_j).
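The three operations can be sketched on an augmented matrix stored as a list of rows; a minimal sketch, with function names of my own (not from the text):

```python
# Elementary operations on a system represented as an augmented matrix:
# a list of rows, each row a list of coefficients plus the right-hand side.

def scale(rows, i, lam):
    """(lam * E_i) -> (E_i), with lam nonzero."""
    assert lam != 0
    rows[i] = [lam * v for v in rows[i]]

def add_multiple(rows, i, j, lam):
    """(E_i + lam * E_j) -> (E_i)."""
    rows[i] = [a + lam * b for a, b in zip(rows[i], rows[j])]

def swap(rows, i, j):
    """(E_i) <-> (E_j)."""
    rows[i], rows[j] = rows[j], rows[i]
```

Each operation is invertible (undo a scale by 1/λ, an addition by −λ, a swap by itself), which is why the resulting system is equivalent to the original.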

Example (Gaussian Elimination with Backward Substitution). Notice how the row operations on the augmented matrices mirror the corresponding operations on the equations. Notice also that the coding can apply to either the equations or the matrix. Starting from a system of three equations in three unknowns, interchanges (E_i) ↔ (E_j) bring a usable pivot into place, and operations of the form (E_i − λE_j) → (E_i) then eliminate the entries below each pivot, one column at a time. The final system of equations is (upper) triangular or reduced, and can be solved by backward substitution. The solution found thus applies to all five equivalent systems.
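A small system worked in the same notation may be helpful; the numbers below are illustrative (my own, not the text's):

```latex
\begin{aligned}
E_1:&& x_1 + 2x_2 + x_3 &= 8\\
E_2:&& 2x_1 + x_2 - x_3 &= 1\\
E_3:&& -x_1 + 2x_2 + 2x_3 &= 9
\end{aligned}
\qquad
(E_2 - 2E_1)\to(E_2),\ (E_3 + E_1)\to(E_3):
\qquad
\begin{aligned}
x_1 + 2x_2 + x_3 &= 8\\
-3x_2 - 3x_3 &= -15\\
4x_2 + 3x_3 &= 17
\end{aligned}
```

Then (E_3 + (4/3)E_2) → (E_3) gives −x_3 = −3, and backward substitution yields x_3 = 3, x_2 = 2, x_1 = 1.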

Backward substitution works from the bottom equation up: the last equation gives x_3 directly, substituting x_3 into the second equation gives x_2, and substituting both values into the first equation gives x_1.

Definition. A matrix is a rectangular array of numbers arranged in rows and columns; the numbers are the entries of the matrix. The plural of matrix is matrices. The augmented matrix of the example above, with 3 rows and 4 columns, is a 3 × 4 matrix. An n × n matrix (same number of rows as columns) is a square matrix; the 3 × 3 coefficient matrix of the example is square.

An n × m matrix A may be represented as

    A = [a_ij] = [ a_11  a_12  ...  a_1m ]
                 [ a_21  a_22  ...  a_2m ]
                 [  .     .          .   ]
                 [ a_n1  a_n2  ...  a_nm ]

where a_kl means the entry in row k, column l.
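The backward-substitution step described above can be sketched as follows (a minimal version, assuming nonzero diagonal entries):

```python
def back_substitute(aug):
    """Solve an upper-triangular augmented system [U | b].

    aug has n rows, each with n coefficients followed by the right-hand
    side; the diagonal entries a_ii are assumed nonzero.
    """
    n = len(aug)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # bottom equation first
        s = sum(aug[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (aug[i][n] - s) / aug[i][i]  # x_i = (a_{i,n+1} - sum)/a_ii
    return x
```

Applied to the reduced triangular system of an example like the one above, it returns the solution vector in one pass.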

The n × 1 matrix

    A = [ a_1 ]
        [ a_2 ]
        [  .  ]
        [ a_n ]

is an n-dimensional column vector, and the 1 × m matrix A = [a_1  a_2  ...  a_m] is an m-dimensional row vector. Thus, ignoring the unneeded subscript,

    x = [ x_1 ]     is a column vector and     y = [ y_1  y_2  ...  y_m ]     is a row vector.
        [ x_2 ]
        [  .  ]
        [ x_n ]

If

    A = [ a_11  a_12  ...  a_1m ]     and     b = [ b_1 ]
        [ a_21  a_22  ...  a_2m ]                 [ b_2 ]
        [  .     .          .   ]                 [  .  ]
        [ a_n1  a_n2  ...  a_nm ]                 [ b_n ]

then

    [A, b] = [ a_11  a_12  ...  a_1m | b_1 ]
             [ a_21  a_22  ...  a_2m | b_2 ]
             [  .     .          .   |  .  ]
             [ a_n1  a_n2  ...  a_nm | b_n ]

is an augmented matrix, used to represent a system of n equations in m variables. In the future, we will often dismiss the practice of using the separators to denote the right-hand-side column matrix.

Note. (1) This method of Gaussian elimination with backward substitution can fail to give a unique solution, assuming as many equations as variables, if any a_ii = 0 in the augmented matrix after it has been reduced. (2) The number of computations affects computation time and round-off error in a machine with finite-digit arithmetic. The number of additions/subtractions

in our method for n equations in n variables is

    n³/3 + n²/2 − 5n/6,

while the number of multiplications/divisions (which take more time) is

    n³/3 + n² − n/3.

Both are O(n³). See the operation-count table in the text.

Maple. See linsys.mw and/or lynsys.pdf.
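These counts can be checked empirically by instrumenting a straightforward elimination; a sketch with counters of my own:

```python
def gauss_op_counts(n):
    """Count arithmetic operations in Gaussian elimination with
    backward substitution on n equations in n unknowns."""
    mul_div = add_sub = 0
    # Elimination: for each pivot row i, each row k below it costs one
    # division (the multiplier) plus n - i + 1 multiplications and
    # subtractions across the remaining columns and the right-hand side.
    for i in range(1, n):                # pivot rows 1 .. n-1 (1-based)
        for k in range(i + 1, n + 1):    # rows below the pivot
            mul_div += 1                 # multiplier a_ki / a_ii
            mul_div += n - i + 1         # scale row i entries and rhs
            add_sub += n - i + 1         # subtract from row k
    # Backward substitution.
    mul_div += 1                         # x_n = a_{n,n+1} / a_nn
    for i in range(n - 1, 0, -1):
        mul_div += n - i                 # products a_ij * x_j
        add_sub += n - i - 1             # sum those products
        add_sub += 1                     # subtract the sum from a_{i,n+1}
        mul_div += 1                     # divide by a_ii
    return mul_div, add_sub
```

The returned totals agree with the closed forms n³/3 + n² − n/3 and n³/3 + n²/2 − 5n/6.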

Pivoting Strategies

In systems with unique solutions, the diagonal entries a_ii of the reduced matrix are the pivot elements. If a 0 appears in a pivot position during the reduction, it cannot remain there, since the reduced matrix can have no a_ii = 0; we remove it by swapping that row with a row below it.

It is often necessary to swap rows to reduce round-off error even when the pivot elements are not 0. Consider the step at pivot a_ii with an entry a_ki, k > i, to be eliminated below it. To get a_ki = 0, we multiply row i by

    m_i = −a_ki / a_ii

and then add the product to row k. If |a_ii| is much smaller than |a_ki|, then |m_i| is large, and multiplying row i by m_i magnifies the round-off error in

    ã_il      =      a_il      +      ε_il
    (approximate value = exact value + error)

and in the other values in row i.

Also, for back substitution,

    x_i = ( a_{i,n+1} − Σ_{j=i+1..n} a_ij x_j ) / a_ii,

so if a_ii is small, any round-off error in the numerator is magnified.
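The effect of a tiny pivot is easy to demonstrate on a 2 × 2 system in double-precision arithmetic; a sketch (the system and helper name are my own):

```python
def solve2(a11, a12, b1, a21, a22, b2, pivot):
    """Solve a 2x2 system by elimination in floating point.
    If pivot is True, first swap rows so the larger |a_i1| is the pivot."""
    if pivot and abs(a21) > abs(a11):
        a11, a12, b1, a21, a22, b2 = a21, a22, b2, a11, a12, b1
    m = a21 / a11                        # multiplier; huge if a11 is tiny
    a22, b2 = a22 - m * a12, b2 - m * b1
    y = b2 / a22
    x = (b1 - a12 * y) / a11
    return x, y

# The system 1e-20*x + y = 1,  x + y = 2 has solution x ~ 1, y ~ 1.
# Without pivoting, the multiplier 1/1e-20 swamps the second row and
# the computed x is lost entirely; with the row swap both are correct.
```

This is the classic failure mode: the exact answer survives in y, but the back-substituted x is destroyed by the magnified round-off.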

Partial Pivoting or Maximal Column Pivoting

When a_ii is the pivot, swap rows so that the element of largest magnitude from a_ii through a_ni is moved to a_ii. But this is still sometimes inadequate.

Scaled Partial Pivoting

This method places in the pivot position a_ii the element from a_ii through a_ni that is largest relative to the entries in its row. The effect of scaling is to ensure that the largest element in each row has a relative magnitude of 1 before the comparison for row interchange is performed.

(1) For each row k, define the scale factor s_k = max_{1≤j≤n} |a_kj|. If some s_k = 0, then there is no unique solution.
(2) For a_11, find the least integer p such that |a_p1|/s_p = max_{1≤k≤n} |a_k1|/s_k, and then swap (E_1) ↔ (E_p) (and also swap s_1 ↔ s_p) if p ≠ 1.
(3) For a_ii, i > 1, find the least integer p ≥ i such that |a_pi|/s_p = max_{i≤k≤n} |a_ki|/s_k, and then swap (E_i) ↔ (E_p) (and also swap s_i ↔ s_p) if p ≠ i.

Note. The scale factors s_1, s_2, ..., s_n are computed only once, at the start of the procedure, and must be swapped along with the row swaps.
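The pivot-selection rule can be sketched directly from the definitions (helper names are mine; rows are 0-based here):

```python
def scale_factors(aug):
    """s_k = max_j |a_kj| over the coefficient part of each row,
    computed once at the start of the procedure."""
    n = len(aug)
    return [max(abs(v) for v in row[:n]) for row in aug]

def scaled_pivot_row(aug, s, i):
    """Return the least p >= i maximizing |a_pi| / s_p."""
    n = len(aug)
    best, p = -1.0, i
    for k in range(i, n):
        r = abs(aug[k][i]) / s[k]
        if r > best:                  # strict '>' keeps the least index
            best, p = r, k
    return p
```

Once p is found, the caller performs (E_i) ↔ (E_p) and swaps s_i ↔ s_p before eliminating below the pivot.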

Maximal (or Total or Full) Pivoting

Do row and column swapping to bring the element of largest magnitude below and to the right of a_ii to a_ii, provided its magnitude is larger than that of a_ii.

Gaussian elimination requires O(n³/3) multiplications/divisions and O(n³/3) additions/subtractions. Partial pivoting is about the same. Scaled partial pivoting adds n(n+1)/2 divisions and n(n−1)/2 comparisons, and so does not significantly add to computation time. Complete pivoting adds n(n−1)(2n+5)/6 comparisons, which approximately doubles the amount of addition/subtraction time over Gaussian elimination.

Maple. See pivoting.mw and/or pivoting.pdf.

Linear Algebra and Matrix Inversion

Matrices A and B are equal if both are n × m and a_ij = b_ij for i = 1, ..., n and j = 1, ..., m. The sum A + B of n × m matrices A = [a_ij] and B = [b_ij] is the n × m matrix obtained by adding corresponding entries: A + B = [a_ij + b_ij] for i = 1, ..., n and j = 1, ..., m. The scalar multiplication of the real (or complex) scalar λ and the n × m matrix A = [a_ij] is the n × m matrix λA = [λa_ij]. 0_{n×m} is the n × m zero matrix, all of whose entries are 0. If A = [a_ij] is an n × m matrix, then −A = [−a_ij] is also an n × m matrix.

Theorem. The set of all n × m matrices with real entries is a vector space over the field of real numbers. If A, B, C are n × m matrices and λ, µ ∈ R:
(a) A + B = B + A
(b) (A + B) + C = A + (B + C)
(c) A + 0 = 0 + A = A
(d) A + (−A) = −A + A = 0
(e) λ(A + B) = λA + λB
(f) (λ + µ)A = λA + µA
(g) λ(µA) = (λµ)A
(h) 1A = A

The matrix product AB, where A is an n × m matrix and B is an m × p matrix, is the n × p matrix C = [c_ij] with

    c_ij = Σ_{k=1..m} a_ik b_kj,

i.e., to get the entry in row i, column j of the product, just multiply each of the numbers in row i of A with the corresponding number in column j of B, and then add the m products.

Note. A (n × m) times B (m × p) gives C (n × p). The m's must match, and the outside numbers give the new dimensions.
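The entrywise definition translates directly into code; a minimal sketch over lists of rows:

```python
def mat_mul(A, B):
    """C = AB for A (n x m) and B (m x p), stored as lists of rows.
    c_ij = sum_k a_ik * b_kj; the inner dimensions must match."""
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]
```

Comparing mat_mul(A, B) with mat_mul(B, A) for almost any pair of square matrices confirms that matrix multiplication is not commutative.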

If A has rows r_1, ..., r_n (m-dimensional row vectors) and B has columns c_1, ..., c_p (m-dimensional column vectors), then the product C = AB can be written entrywise as

    C = [ r_1 c_1  r_1 c_2  ...  r_1 c_p ]
        [ r_2 c_1  r_2 c_2  ...  r_2 c_p ]
        [   .        .             .     ]
        [ r_n c_1  r_n c_2  ...  r_n c_p ]

Example. For 2 × 2 matrices, AB and BA generally differ, so matrix multiplication is not commutative. Moreover, a 3 × 2 matrix times a 2 × 1 matrix is a 3 × 1 matrix, but these two matrices cannot even be multiplied in the other direction due to dimension mismatch.

A square n × n matrix has the same number of rows as columns. A diagonal matrix is a square matrix D = [d_ij] with d_ij = 0 for i ≠ j.

Example.

    D = [ d_11   0     0     0   ]
        [  0    d_22   0     0   ]
        [  0     0    d_33   0   ]
        [  0     0     0    d_44 ]

An n × n identity matrix is the square matrix I_n = [δ_ij] where

    δ_ij = 1 for i = j,     δ_ij = 0 for i ≠ j.

Example.

    I_4 = [ 1  0  0  0 ]
          [ 0  1  0  0 ]
          [ 0  0  1  0 ]
          [ 0  0  0  1 ]

A square matrix is upper triangular if all entries below the diagonal are 0, and lower triangular if all entries above the diagonal are 0.

Theorem. Let A be n × m, B be m × k, C be k × p, and D be m × k matrices, and let λ ∈ R. Then:
(a) A(BC) = (AB)C
(b) A(B + D) = AB + AD
(c) I_m B = B = B I_k
(d) λ(AB) = (λA)B = A(λB)

An n × n matrix A is invertible or nonsingular if there exists an n × n matrix A⁻¹ such that AA⁻¹ = A⁻¹A = I_n. We say A⁻¹ is the inverse of A. A matrix without an inverse is singular or noninvertible.

Theorem. For any n × n nonsingular matrix A:
(a) A⁻¹ is unique
(b) A⁻¹ is nonsingular and (A⁻¹)⁻¹ = A
(c) If B is also an n × n nonsingular matrix, (AB)⁻¹ = B⁻¹A⁻¹

The system of linear equations

    a_11 x_1 + ... + a_1n x_n = b_1
        .            .           .
    a_n1 x_1 + ... + a_nn x_n = b_n

corresponds to

    [ a_11  ...  a_1n ] [ x_1 ]   [ b_1 ]
    [  .          .   ] [  .  ] = [  .  ]     or     Ax = b.
    [ a_n1  ...  a_nn ] [ x_n ]   [ b_n ]

Ax = b has a unique solution if and only if A is invertible. In that case,

    A⁻¹(Ax) = A⁻¹b  ⟹  (A⁻¹A)x = A⁻¹b  ⟹  I_n x = A⁻¹b  ⟹  x = A⁻¹b.

Problem (Page 6 #b(ii)). Find the inverse of a given 3 × 3 matrix A, i.e., find B = [b_ij] such that AB = I_3. Writing out the product AB entry by entry and matching up columns with I_3, we have 3 systems of linear equations, all with the same coefficient matrix. As a result, we can use Gaussian elimination on a larger augmented matrix [A | I_3] to solve all 3 systems at once:

    [A | I_3]  →  ...  →  [U | C],

using operations of the form (E_k − λE_j) → (E_k) to reduce the left block to upper-triangular form while carrying the three right-hand columns along. Now, using back substitution on each of the three right-hand columns in turn, we obtain the columns b_1, b_2, b_3 of B, and thus B = A⁻¹. Checking, AB = I_3.
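The solve-all-columns-at-once idea can be sketched as a small Gauss–Jordan routine; a sketch of my own (exact rational arithmetic, row swaps only when a pivot is 0):

```python
from fractions import Fraction

def inverse(A):
    """Invert a square matrix via elimination on the doubled augmented
    matrix [A | I], which solves all n column systems at once."""
    n = len(A)
    aug = [[Fraction(v) for v in row] +
           [Fraction(1 if i == j else 0) for j in range(n)]
           for i, row in enumerate(A)]
    for i in range(n):
        # Find a nonzero pivot at or below row i: (E_i) <-> (E_p).
        p = next(k for k in range(i, n) if aug[k][i] != 0)
        aug[i], aug[p] = aug[p], aug[i]
        piv = aug[i][i]
        aug[i] = [v / piv for v in aug[i]]          # (E_i / piv) -> (E_i)
        for k in range(n):
            if k != i and aug[k][i] != 0:           # (E_k - m E_i) -> (E_k)
                m = aug[k][i]
                aug[k] = [a - m * b for a, b in zip(aug[k], aug[i])]
    return [row[n:] for row in aug]                 # right block is A^{-1}
```

Reducing the left block all the way to I_n (rather than stopping at triangular form and back substituting) is a variant of the same computation; both deliver the three columns of A⁻¹ simultaneously.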

Likewise BA = I_3, confirming that B = A⁻¹.

The transpose of A = [a_ij], an n × m matrix, is the m × n matrix Aᵗ = [a_ji] obtained from A by interchanging the rows and columns of A. A square matrix is symmetric if A = Aᵗ.

Theorem. The following hold whenever the operation is possible:
(a) (Aᵗ)ᵗ = A
(b) (A + B)ᵗ = Aᵗ + Bᵗ
(c) (AB)ᵗ = BᵗAᵗ
(d) If A⁻¹ exists, (A⁻¹)ᵗ = (Aᵗ)⁻¹

Definition (Determinant of a matrix A, denoted det A or |A|).
(a) If A = [a], det A = a.
(b) If A is n × n, the minor M_ij is the determinant of the (n−1) × (n−1) submatrix of A obtained by deleting row i and column j from A.
(c) The cofactor A_ij associated with M_ij is A_ij = (−1)^{i+j} M_ij.
(d) For an n × n matrix A, n > 1,

    det A = Σ_{j=1..n} a_ij A_ij = Σ_{j=1..n} (−1)^{i+j} a_ij M_ij   for any i = 1, ..., n

or

    det A = Σ_{i=1..n} a_ij A_ij = Σ_{i=1..n} (−1)^{i+j} a_ij M_ij   for any j = 1, ..., n.

Example. To evaluate the determinant of a 4 × 4 matrix, expand along a row containing a 0: by (d),

    det A = Σ_{j=1..4} (−1)^{1+j} a_1j M_1j,

and any term with a_1j = 0 drops out, leaving three 3 × 3 minors to evaluate. The first minor can itself be expanded along a row into 2 × 2 determinants. For the other two determinants we use a special shortcut: rewrite the first two columns to the right of the 3 × 3 matrix as an aid, form the three "southeast" diagonal products with a + sign and the three "northeast" diagonal products with a − sign, and add. Once the pattern is known, this can be simplified further by thinking of rolling up the matrix into a vertical cylinder, so that the diagonals wrap around. Putting everything together,

the cofactor terms combine to give the value of det A.

Note. In general,

    det [ a  b ] = ad − bc.
        [ c  d ]

Theorem. Suppose A is an n × n matrix:
(a) If any row or column of A has only 0 entries, then det A = 0.
(b) If Ã is obtained from A by the operation (E_i) ↔ (E_k), then det Ã = −det A.
(c) If A has two rows or two columns the same, then det A = 0.
(d) If Ã is obtained from A by the operation (λE_i) → (E_i), then det Ã = λ det A.
(e) If Ã is obtained from A by the operation (E_i + λE_k) → (E_i), then det Ã = det A.
(f) If B is also an n × n matrix, then det(AB) = det A · det B.
(g) det Aᵗ = det A.
(h) If A⁻¹ exists, then det A⁻¹ = 1 / det A.
(i) If A is an upper triangular, lower triangular, or diagonal matrix, then

    det A = Π_{i=1..n} a_ii = a_11 a_22 ··· a_nn.
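Cofactor expansion along the first row translates into a short recursive routine; a sketch (exponential cost, so only for small matrices):

```python
def det(A):
    """Determinant by cofactor expansion along the first row:
    det A = sum_j (-1)^(1+j) a_1j M_1j."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        if A[0][j] == 0:
            continue                  # zero entries contribute nothing
        # Minor M_1j: delete row 1 and column j.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total
```

For a triangular matrix the routine reproduces property (i): the determinant is the product of the diagonal entries.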

Theorem. The following statements are equivalent for an n × n matrix A:
(a) The equation Ax = 0 has the unique solution x = 0.
(b) The system Ax = b has a unique solution for any n-dimensional column vector b.
(c) The matrix A is nonsingular; that is, A⁻¹ exists.
(d) det A ≠ 0.
(e) Gaussian elimination with row interchanges can be performed on the system Ax = b for any n-dimensional column vector b.

Maple. See inverses.mw and/or inverses.pdf.

Matrix Factorization

We want to be able to factor a matrix A as A = LU, where L is lower triangular and U is upper triangular. This can be done if Ax = b can be solved by Gaussian elimination without row interchanges. We will use Maple to do the factoring. Then Ax = b becomes LUx = b. Let y = Ux, so that Ly = b. Solve for y by forward substitution. Then solve Ux = y for x by backward substitution.
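Although the text uses Maple for the factoring, the Doolittle scheme (L unit lower triangular) is short enough to sketch directly; a minimal version of my own, with no pivoting, valid when elimination needs no row interchanges:

```python
def lu(A):
    """Doolittle factorization A = LU: L is unit lower triangular,
    U is upper triangular.  No pivoting (a sketch, not a library routine)."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    for i in range(n):
        for k in range(i + 1, n):
            m = U[k][i] / U[i][i]              # multiplier m_ki
            L[k][i] = m                        # record it in L
            U[k] = [a - m * b for a, b in zip(U[k], U[i])]
    return L, U

def lu_solve(L, U, b):
    """Solve LUx = b: forward substitution for Ly = b, then
    backward substitution for Ux = y."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x
```

The multipliers produced during elimination are exactly the below-diagonal entries of L, which is why the factorization comes for free from one pass of Gaussian elimination.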

Example. Suppose A = LU has been found, with L unit lower triangular and U upper triangular, and we wish to solve Ax = b. First solve Ly = b by forward substitution: the first equation gives y_1 directly, the second gives y_2 after substituting y_1, and so on down the rows. Then solve Ux = y by backward substitution, from x_n up to x_1, exactly as in Gaussian elimination. The result x is the solution of the original system.

Why do this factorization? Solving Ax = b by elimination requires O(n³/3) arithmetic operations. Ly = b and Ux = y each require O(n²) arithmetic operations, for a total of O(n²). For systems larger than 100 × 100, this reduces the amount of calculation by more than 90%.

But: the LU factorization itself requires O(n³/3) operations. Still, once A is factored, L and U can be reused with any right-hand side b.

What if row interchanges are necessary? We use an n × n permutation matrix P, formed by rearranging the rows of I_n. If P has the rows of I_n in a given order, then PA has the rows of A in that same order; if P has the columns of I_n in a given order, then AP has the columns of A in that order.

Method. Find a permutation matrix P such that PAx = Pb can be solved without row interchanges. Then, noting P⁻¹ = Pᵗ,

    PA = LU  ⟹  A = PᵗLU  ⟹  PᵗLUx = b  ⟹  LUx = Pb = c.

Now solve as above.

Maple. See lu.mw and/or lu.pdf.
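The row-permuting action of P, and the identity P⁻¹ = Pᵗ, can be checked directly; a sketch with helper names of my own:

```python
def perm_matrix(order):
    """P whose rows are the rows of I_n in the given order (0-based)."""
    n = len(order)
    return [[1 if j == order[i] else 0 for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(col) for col in zip(*A)]

def mat_mul(A, B):
    # Entrywise product over rows of A and columns of B.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]
```

Multiplying P into A on the left reorders the rows of A exactly as the rows of I_n were reordered, and P times its transpose recovers the identity, confirming P⁻¹ = Pᵗ.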