MATLAB Project: LU Factorization

Name: ____________________

Purpose: To practice Lay's LU Factorization Algorithm and see how it is related to MATLAB's lu function.
Prerequisite: Section 2.5
MATLAB functions used: *, lu; and ludat and gauss from the Laydata4 Toolbox

Background. In Section 2.5, read about Lay's algorithm for calculating an LU factorization, and carefully study Example 2. It is imperative that you understand the algorithm for calculating the matrix L before starting. In this project you will perform his algorithm on the matrices below, and see the connection between his algorithm and the one used by MATLAB's lu function.

A = [5 3 4; -10 -8 -9; -15 1 2]
B = [15 1 2; 5 3 4]
C = [1 3 5 3 0; 0 2 3 1 1; 0 10 15 5 5; 0 2 3 1 1; 1 1 2 2 1]
D = [2 4 2 4; 6 9 3 7; 1 4 0 8]
E = [2 6 1; 4 9 4; 2 3 0; 4 7 8]
F = [1 3 0; 4 4 8; 1 2 3]

To begin, type ludat to get the matrices above. For each matrix, use gauss to reduce the matrix to echelon form. The command gauss does only "add a multiple of one row to another" type operations; it does no scaling or row exchanges. When used as shown below, gauss zeros out the entries directly below each successive pivot position. Type help gauss to learn more about this function and its uses.

1. (MATLAB) For each matrix above, use gauss and the algorithm in Section 2.5 to calculate an LU factorization. Record the matrix produced by each gauss step, and inspect those matrices to write down L. Finally, verify that LU does equal the original matrix (where U is the final matrix in your reduction).

(a) Here is the solution for A. We store each intermediate matrix as U, but the final U is what we really want:

U = A              (copy A so you can keep the original matrix and work with the copy)
U = gauss(U,1)     (pivot on the first nonzero entry in row one)
U = gauss(U,2)     (pivot on the first nonzero entry in row two)

The matrices produced by the commands above are

A = [5 3 4; -10 -8 -9; -15 1 2] ~ [5 3 4; 0 -2 -1; 0 10 14] ~ [5 3 4; 0 -2 -1; 0 0 9],

and inspecting the intermediate matrices gives

L = [1 0 0; -2 1 0; -3 -5 1]

The following lines will store L and allow you to verify that LU does look like A:

L = [1 0 0; -2 1 0; -3 -5 1]
L*U, A
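For reference, the steps of part (a) can be collected into one short command sequence. This is only a recap of the commands above with descriptive comments, and it assumes the Laydata4 Toolbox (which provides ludat and gauss) is on the MATLAB path:

ludat                            % load the project matrices A through F
U = A;                           % work on a copy so the original A is preserved
U = gauss(U,1)                   % pivot on row one and zero out the entries below it
U = gauss(U,2)                   % pivot on row two and zero out the entry below it
L = [1 0 0; -2 1 0; -3 -5 1]     % multipliers read off from the gauss steps
L*U, A                           % the two displays should agree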

(b) Modify the commands above for the other matrices, and record the same type of information for each:

B = [15 1 2; 5 3 4] ~ ______________________          L = ______________________

(c) After two gauss steps on C you will see some zero rows. This simply means all multipliers are zero after that, so put zeros in the positions that remain unfilled in L. Remember that L should have 1's on its diagonal.

C = [1 3 5 3 0; 0 2 3 1 1; 0 10 15 5 5; 0 2 3 1 1; 1 1 2 2 1] ~ ______________________          L = ______________________

(d) D = [2 4 2 4; 6 9 3 7; 1 4 0 8] ~ ______________________          L = ______________________

(e) E = [2 6 1; 4 9 4; 2 3 0; 4 7 8] ~ ______________________          L = ______________________

(f) F = [1 3 0; 4 4 8; 1 2 3] ~ ______________________          L = ______________________

2. (hand) Let A be the matrix from part 1 of this project and let b = [1; 2; 3]. Solve Ax = b with the method given at the beginning of Section 2.5 of the text. The idea is that the following equations are equivalent:

Ax = b,    LUx = b,    Ly = b and y = Ux.

Thus, if you first solve Ly = b for y and then Ux = y for x, you will have the solution to Ax = b. Do the calculations by hand, and show your work.

Step 1. Solve Ly = b for y by forward substitution:

Step 2. Using the vector y from Step 1, solve Ux = y for x by back substitution:
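Although problem 2 is to be done by hand, the arithmetic is easy to check in MATLAB. The lines below are only a sketch of such a check and assume the L and U found in part 1(a) are still in the workspace; the backslash operator applied to a triangular matrix carries out the corresponding forward or back substitution:

b = [1; 2; 3]
y = L\b            % Step 1: forward substitution solves L*y = b
x = U\y            % Step 2: back substitution solves U*x = y
A*x                % should reproduce b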

Background on MATLAB's lu function. There are two ways to use lu, and we will illustrate these with the matrix F above. Either way, the result will not be an LU factorization of the original matrix but rather of a different matrix PF, where P is a permutation matrix. These matters are discussed a little more in the Remarks at the end of this project.

If you type [L U P] = lu(F), this will not produce the factorization you got in 1(f). Instead, lu will first create

PF = [4 4 8; 1 3 0; 1 2 3],  where P = [0 1 0; 1 0 0; 0 0 1].

It will then factor PF, obtaining

L = [1 0 0; 0.25 1 0; 0.25 0.5 1]  and  U = [4 4 8; 0 2 -2; 0 0 2].

The product LU will equal PF, not F.

Try this for yourself: type [L U P] = lu(F) to see these matrices, and then type P*F, L*U to verify for yourself that PF = LU is true. Compare the L and U obtained here with those you got in 1(f).

If instead you type [L1 U1] = lu(F), U1 will be the same U as before, but the matrix L1 will be P'*L (the transpose of P times L). This time L1 is not lower triangular, but the product L1*U1 will equal the original matrix F.

Try this for yourself: type [L1 U1] = lu(F) and observe that L1 equals P'*L, not L. Then type L1*U1 and verify that L1*U1 does equal F.

3. Use the lu function as just described with the matrices A, B, and C discussed earlier, and record the results. If necessary, recall these matrices by typing ludat. We do the first one as an example:

(a) For A = [5 3 4; -10 -8 -9; -15 1 2], first type [L U P] = lu(A), L*U, P*A and record the results:

L = [1 0 0; 0.6667 1 0; -0.3333 -0.3846 1]
U = [-15 1 2; 0 -8.6667 -10.3333; 0 0 0.6923]
P = [0 0 1; 0 1 0; 1 0 0]
LU = PA = [-15 1 2; -10 -8 -9; 5 3 4]

Then type [L1 U1] = lu(A), L1*U1, A and record the results:

L1 = [-0.3333 -0.3846 1; 0.6667 1 0; 1 0 0],  U1 = U,  L1*U1 = A.
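As a quick check of the relationship between the two calling forms described above, the following lines can be run once A is in the workspace; each norm should display 0 (or a number at the level of roundoff):

[L, U, P] = lu(A);
[L1, U1]  = lu(A);
norm(L1 - P'*L)        % L1 is the transpose of P times the triangular L
norm(L*U - P*A)        % the three-output form factors P*A
norm(L1*U1 - A)        % the two-output form reproduces A itself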

(b) B = [15 1 2; 5 3 4]

(c) C = [1 3 5 3 0; 0 2 3 1 1; 0 10 15 5 5; 0 2 3 1 1; 1 1 2 2 1]

Remarks. A permutation matrix P is one obtained by rearranging the rows of an identity matrix. The effect of calculating PA is to produce that same rearrangement of the rows of A. The algorithm used in lu is called "partial pivoting," and this is what causes the creation of PA. Using partial pivoting typically improves the accuracy of row operations, so all professional software for solving linear systems, like MATLAB's lu and backslash functions, does partial pivoting.
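To see the first remark in action, here is a small illustration; the 3 by 3 matrix M is arbitrary and is used only for demonstration:

P = eye(3);
P = P([2 3 1], :)              % a permutation matrix: the identity with its rows rearranged
M = [1 2 3; 4 5 6; 7 8 9];     % any matrix with three rows will do
P*M                            % the rows of M now appear in the order 2, 3, 1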