Matrix Factorization Reading: Lay 2.5


October 20

You have seen that if we know the inverse A^{-1} of a matrix A, we can easily solve the equation Ax = b. Solving a large number of equations

Ax_1 = b_1, \quad Ax_2 = b_2, \quad \dots, \quad Ax_k = b_k,

where A is the same in each equation, is easy if we know A^{-1}. Indeed, rather than writing down k different augmented matrices and then computing the RREF for each (this can be time-consuming), we can just multiply each b_i by A^{-1} to get the solutions. In a real-life situation, where the matrices that come up are often extremely large, computing an inverse is not always the most computationally efficient method, because computing the inverse of a matrix can take a fair amount of time. Today we will discuss an alternative approach which is generally faster.

1 LU factorization

We call a matrix whose entries below the main diagonal are all zero an upper triangular matrix, and we make the analogous definition for lower triangular. The idea of the LU factorization is to write a general matrix as a product of a lower triangular and an upper triangular matrix. So let A be an m \times n matrix, and assume we can write A = LU, where L is m \times m, lower triangular, and has 1s on the diagonal; and where U is an m \times n echelon form of A. For instance, one possible set of shapes is

L = \begin{bmatrix} 1 & 0 & 0 \\ \ast & 1 & 0 \\ \ast & \ast & 1 \end{bmatrix}, \qquad U = \begin{bmatrix} \blacksquare & \ast & \ast & \ast \\ 0 & 0 & \blacksquare & \ast \\ 0 & 0 & 0 & 0 \end{bmatrix},

where \blacksquare marks a pivot and \ast denotes an arbitrary entry.

Why would such a decomposition be important? Say we want to solve Ax = b. Then we can rewrite this as LUx = b. We can solve this equation by solving instead the pair of equations

Ux = y, \qquad Ly = b.

You can think of this as being a substitution trick of the form you may have seen in, for instance, calculus class. The reason that a pair of equations like this will generally be easier to solve has to do with the particular structure of L and U. The plan of attack is as follows: first solve Ly = b for y, then solve Ux = y for x.

Example 1.1. Let

b = \begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix},

and say A is the matrix

A = \begin{bmatrix} 4 & 1 & 1 \\ 4 & 2 & 2 \\ 4 & 0 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & -1 & 1 \end{bmatrix} \begin{bmatrix} 4 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 2 \end{bmatrix} = LU.

To begin solving Ly = b, we write the augmented matrix

\begin{bmatrix} L & b \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 1 \\ 1 & 1 & 0 & 2 \\ 1 & -1 & 1 & 2 \end{bmatrix}. \tag{1}

One very fast way to solve this system is to compute the RREF. This is made very easy by the structure of L (try it yourself and you will see what I mean), and gives

\begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 2 \end{bmatrix}.

Another way to solve the system with augmented matrix (1) is to note that we can just read off the value y_1 = 1 from the first row, followed by

y_2 = 2 - y_1 = 1

from the second, and

y_3 = 2 + y_2 - y_1 = 2

from the third. As you can see, the shape of L makes it very easy to deal with. Solving Ux = y is similar. The augmented matrix is

\begin{bmatrix} U & y \end{bmatrix} = \begin{bmatrix} 4 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 2 & 2 \end{bmatrix}. \tag{2}

The RREF of (2) is

\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \end{bmatrix}.

Alternatively, we could have solved the system with augmented matrix (2) using back-substitution as before: use the bottom row to see that 2x_3 = 2, so x_3 = 1, and go up level by level. Thus, we see that the original problem is solved by

x = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}.

There is a good example in Lay with a bigger matrix A, as well, which also demonstrates the method.

2 Finding L and U

Now that you have seen how to solve linear systems given the LU factorization, I should show you an algorithm to compute L and U. It is a sad fact of life that an LU factorization of A is not possible for all matrices A, but we will see that a particular condition on A is enough to guarantee that a factorization exists.

Let A be a matrix which can be reduced to an echelon form U using only operations that add a multiple of one row to another lower row, that is, a row below it. Then we have

E_1 E_2 \cdots E_p A = U

for some elementary matrices E_1, \dots, E_p, which are lower triangular. It can be proved that the inverse of a lower triangular matrix is always lower triangular, and that the product of two lower triangular matrices is lower triangular. Thus, if we rewrite the above as

A = E_p^{-1} \cdots E_1^{-1} U,

we see that L = E_p^{-1} \cdots E_1^{-1} is lower triangular, and we have our desired LU decomposition. The above also tells us how to compute U, but what about L? Note that by our description, we have E_1 \cdots E_p L = I, so the same row operations that reduce A to U also reduce L to I. This gives rise to our two-step algorithm for computing the LU factorization:

1. Reduce A to a row echelon form U using only operations of the type mentioned above.
2. Place entries in L so that the same sequence of operations reduces L to I.

Example 2.1. Consider the A from our previous example,

A = \begin{bmatrix} 4 & 1 & 1 \\ 4 & 2 & 2 \\ 4 & 0 & 2 \end{bmatrix}.

To compute a row echelon form of A, we add (-1) times the first row of A to the second row of A, then add (-1) times the first row of A to the third row of A. These two operations are of the form that we want, and the computation proceeds next by adding 1 times row two to row three:

A = \begin{bmatrix} 4 & 1 & 1 \\ 4 & 2 & 2 \\ 4 & 0 & 2 \end{bmatrix} \to \begin{bmatrix} 4 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & -1 & 1 \end{bmatrix} \to \begin{bmatrix} 4 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 2 \end{bmatrix} = U.
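The row reduction above can be reproduced step by step in code. This is a minimal sketch, not anything from Lay; the helper name is my own, and each call performs one operation of the allowed type, adding a multiple of one row to a lower row:

```python
# Reproduce the reduction of A to echelon form U using only the allowed
# operation: add a multiple of one row to a row below it.

def add_multiple_of_row(M, src, dst, m):
    """Replace row dst of M by (row dst) + m * (row src)."""
    M[dst] = [M[dst][j] + m * M[src][j] for j in range(len(M[src]))]

A = [[4, 1, 1], [4, 2, 2], [4, 0, 2]]

add_multiple_of_row(A, 0, 1, -1)  # add (-1) * row 1 to row 2
add_multiple_of_row(A, 0, 2, -1)  # add (-1) * row 1 to row 3
add_multiple_of_row(A, 1, 2, 1)   # add 1 * row 2 to row 3

# A has now been reduced to U = [[4, 1, 1], [0, 1, 1], [0, 0, 2]]
```

Note that rows are only ever modified by rows above them, which is exactly why the corresponding elementary matrices are lower triangular.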

Now we want L to be a matrix such that these row operations reduce L to I. So first, we want the first column of L to have a 1 in its topmost entry and then be such that, when we do the same operations we would use to clear out the entries below the pivot of A, we get zeros in L too. After a moment's reflection, you can see that this is satisfied if we take the first column of L to be the first column of A divided by the value at the first pivot position (4), so we proceed:

L = \begin{bmatrix} 1 & & \\ 1 & & \\ 1 & & \end{bmatrix}.

We want to have a 0 in the topmost entry of the second column, a 1 in the middle entry, and we want the bottom entry to be something that gets reduced to zero after performing the same row operations as on A. The way that we do this is to look at the part in the computation of U after we have cleared the first column of A; i.e., the matrix

\begin{bmatrix} 4 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & -1 & 1 \end{bmatrix}. \tag{3}

From this point, the only operations we will do on A that will affect the second column of L are the ones which clear out the entry below the pivot in the center of the matrix above. So again, the way we find the entries of the second column of L is to start with a 0 up top, and then the rest of the column is made up of the part of the middle column of (3) lying below the pivot, all divided by the value at the pivot (1). Repeating this procedure for the third column gives

L = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & -1 & 1 \end{bmatrix}.

3 Applications to circuits

Here I will talk about applications of the decomposition to circuits. I will follow closely the presentation in Lay, so I will not comment further except to say that you should read this section.
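The whole method of these notes can be sketched in Python: the two-step algorithm of Section 2 to find L and U, followed by the forward and back substitution of Example 1.1 to solve Ax = b. This is a minimal illustration under the same assumption as the text, namely that A reduces to echelon form without any row swaps (and with no zero pivots); the function names are my own, not from Lay:

```python
def lu_factor(A):
    """Two-step algorithm: reduce A to echelon form U, filling each
    column of L with the column of the working matrix at and below the
    current pivot, divided by the pivot value. No row swaps allowed."""
    n = len(A)
    U = [row[:] for row in A]          # working copy; becomes U
    L = [[0.0] * n for _ in range(n)]
    for k in range(n):
        pivot = U[k][k]
        for i in range(k, n):
            L[i][k] = U[i][k] / pivot  # column k of L (diagonal entry is 1)
        for i in range(k + 1, n):      # clear below the pivot in U
            m = U[i][k] / pivot
            U[i] = [U[i][j] - m * U[k][j] for j in range(n)]
    return L, U

def forward_substitute(L, b):
    """Solve Ly = b from the top row down (L lower triangular, 1s on diagonal)."""
    y = []
    for i in range(len(b)):
        y.append(b[i] - sum(L[i][j] * y[j] for j in range(i)))
    return y

def back_substitute(U, y):
    """Solve Ux = y from the bottom row up (U upper triangular, nonzero diagonal)."""
    n = len(y)
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# The system from Example 1.1.
A = [[4, 1, 1], [4, 2, 2], [4, 0, 2]]
b = [1, 2, 2]

L, U = lu_factor(A)           # L = [[1,0,0],[1,1,0],[1,-1,1]]
y = forward_substitute(L, b)  # y = [1, 1, 2]
x = back_substitute(U, y)     # x = [0, 0, 1]
```

Once L and U are in hand, each additional right-hand side b_i costs only the two cheap triangular solves, which is the point made at the start of these notes about solving many systems with the same A.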