LU Factorization

Factorizations

Reading: Trefethen and Bau (1997), Lecture 20

Solve the n × n linear system

$$Ax = b \qquad (1)$$

by Gaussian elimination.
- Gaussian elimination is a direct method: the solution is found after a finite number of operations.
- Iterative methods find a solution as the number of operations tends to infinity. Iteration is superior for large sparse systems.

Gaussian elimination is a factorization of A as

$$A = LU \qquad (2a)$$

- L is unit lower triangular and U is upper triangular.

LU Factorization

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix} \qquad (2b)$$

$$L = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ l_{21} & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & \cdots & 1 \end{bmatrix}, \qquad U = \begin{bmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ 0 & u_{22} & \cdots & u_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & u_{nn} \end{bmatrix} \qquad (2c)$$

Once L and U have been determined:
- Let Ax = LUx = b and set Ux = y    (3a)
- Then Ly = b    (3b)
- Solve (3b) for y by forward substitution and (3a) for x by backward substitution.
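As a concrete illustration of this two-step solve (not part of the original notes), here is a minimal MATLAB sketch with illustrative values for L, U, and b; MATLAB's backslash operator detects triangular matrices and applies forward or backward substitution automatically.

% A small 2-by-2 illustration of the two-step solve: A = LU, then Ly = b, Ux = y.
L = [1 0; 2 1];          % unit lower triangular factor (illustrative values)
U = [4 1; 0 3];          % upper triangular factor
b = [5; 13];             % right-hand side, chosen so that x = [1; 1]
y = L \ b;               % forward substitution: y = [5; 3]
x = U \ y;               % backward substitution: x = [1; 1]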

Classical Gaussian elimination:
- Use row saxpys to reduce A to upper triangular form U.
- Show that these row operations amount to multiplying A by lower triangular matrices L_k, k = 1 : n-1, i.e.,

$$L_{n-1} \cdots L_2 L_1 A = U \qquad (4a)$$

- Thus

$$L = L_1^{-1} L_2^{-1} \cdots L_{n-1}^{-1} \qquad (4b)$$

- cf. Trefethen and Bau (1997), Lecture 20.

We use a direct factorization instead:
- Multiply the first row of L by U to obtain the first row of A:

$$a_{1j} = (1)\,u_{1j}, \quad j = 1:n \qquad\Longrightarrow\qquad u_{1j} = a_{1j}, \quad j = 1:n$$

- Multiply L by the first column of U to obtain the first column of A:

$$l_{i1} u_{11} = a_{i1}, \quad i = 2:n \qquad\text{or}\qquad l_{i1} = a_{i1}/u_{11}, \quad i = 2:n$$

  Redundant information with i = 1 is not written. The algorithm fails if u_{11} = a_{11} = 0.

Continuing:
- Multiply the second row of L by U to obtain the second row of A:

$$l_{21} u_{1j} + u_{2j} = a_{2j}, \quad j = 2:n \qquad\Longrightarrow\qquad u_{2j} = a_{2j} - l_{21} u_{1j}, \quad j = 2:n$$

- The general procedure is

$$u_{ij} = a_{ij} - \sum_{k=1}^{i-1} l_{ik} u_{kj}, \qquad j = i:n \qquad (5a)$$

$$l_{ji} = \frac{1}{u_{ii}} \left( a_{ji} - \sum_{k=1}^{i-1} l_{jk} u_{ki} \right), \qquad j = i+1:n, \qquad i = 1:n-1 \qquad (5b)$$

- The sums are zero when the lower limit exceeds the upper one.
- The procedure fails if u_{ii} = 0 for some i.
- The matrices L and U can be stored in the locations used for the same elements of A.

function A = lufactor(A)
% lufactor: Compute the LU decomposition of an n-by-n
% matrix A by direct factorization. On return, L is
% stored in the lower triangular part of A and U is
% stored in the upper triangular part.
n = size(A, 1);
% Loop over the rows
for i = 1:n-1
    % Calculate the i th column of L
    for j = i+1:n
        for k = 1:i-1
            A(j,i) = A(j,i) - A(j,k)*A(k,i);
        end
        A(j,i) = A(j,i)/A(i,i);
    end
    % Calculate the (i+1) th row of U
    for j = i+1:n
        for k = 1:i
            A(i+1,j) = A(i+1,j) - A(i+1,k)*A(k,j);
        end
    end
end
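Because lufactor overwrites A with both factors, a caller that needs L and U separately can unpack them afterwards. A possible sketch (the unpacking and the choice of test matrix are illustrative, not part of the original routine):

A = pascal(4);             % any n-by-n matrix with nonzero pivots will do
A = lufactor(A);           % L and U now stored in place of A
n = size(A, 1);
L = tril(A, -1) + eye(n);  % restore the implicit unit diagonal of L
U = triu(A);               % upper triangle, including the diagonal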

Observe:
i. The first row of U needs no explicit calculation.
ii. The 1s on the diagonal of L are not stored.
iii. MATLAB loops do not execute when the lower index exceeds the upper one.
iv. Classical Gaussian elimination corresponds to a different loop ordering; cf. Trefethen and Bau (1997), Lecture 20.

Example 1. Factor

$$A = \begin{bmatrix} 1 & 2 & 0 & -4 \\ -1 & 0 & 6 & 2 \\ 3 & -2 & -25 & 0 \\ -2 & -3 & 4 & 4 \end{bmatrix}$$

- i = 1: Calculate the first column of L and the second row of U

$$A \Rightarrow \begin{bmatrix} 1 & 2 & 0 & -4 \\ -1 & 2 & 6 & -2 \\ 3 & -2 & -25 & 0 \\ -2 & -3 & 4 & 4 \end{bmatrix}$$

- i = 2: Calculate the second column of L and the third row of U

$$A \Rightarrow \begin{bmatrix} 1 & 2 & 0 & -4 \\ -1 & 2 & 6 & -2 \\ 3 & -4 & -1 & 4 \\ -2 & 1/2 & 4 & 4 \end{bmatrix}$$

- i = 3: Calculate the third column of L and the fourth row of U

$$A \Rightarrow \begin{bmatrix} 1 & 2 & 0 & -4 \\ -1 & 2 & 6 & -2 \\ 3 & -4 & -1 & 4 \\ -2 & 1/2 & -1 & 1 \end{bmatrix}$$

The L and U factors:

$$L = \begin{bmatrix} 1 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 3 & -4 & 1 & 0 \\ -2 & 1/2 & -1 & 1 \end{bmatrix}, \qquad U = \begin{bmatrix} 1 & 2 & 0 & -4 \\ 0 & 2 & 6 & -2 \\ 0 & 0 & -1 & 4 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
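As a quick check on Example 1 (an illustrative script, not part of the original notes), one can factor A with lufactor and compare the unpacked factors with L and U above:

A = [ 1   2    0  -4
     -1   0    6   2
      3  -2  -25   0
     -2  -3    4   4 ];
F = lufactor(A);
L = tril(F, -1) + eye(4);  % unit lower triangular factor
U = triu(F);               % upper triangular factor
disp(norm(L*U - A))        % should print 0 for this example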

Elementary Transformations: L Exposed

$$L_1 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ -3 & 0 & 1 & 0 \\ 2 & 0 & 0 & 1 \end{bmatrix}, \qquad L_2 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 4 & 1 & 0 \\ 0 & -1/2 & 0 & 1 \end{bmatrix}, \qquad L_3 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 1 \end{bmatrix}$$

- L_k is the identity matrix with its k-th column replaced by the k-th column of L with the subdiagonal elements negated.

$$L^{-1} = L_3 L_2 L_1 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 4 & 1 & 0 \\ 5/2 & 7/2 & 1 & 1 \end{bmatrix}$$
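The relation L^{-1} = L_3 L_2 L_1 can be verified numerically for this example (an illustrative check, not part of the original notes):

L  = [ 1    0   0  0; -1    1   0  0;  3   -4   1  0; -2  1/2  -1  1 ];
L1 = [ 1    0   0  0;  1    1   0  0; -3    0   1  0;  2    0   0  1 ];
L2 = [ 1    0   0  0;  0    1   0  0;  0    4   1  0;  0 -1/2   0  1 ];
L3 = [ 1    0   0  0;  0    1   0  0;  0    0   1  0;  0    0   1  1 ];
disp(norm(L3*L2*L1 - inv(L)))   % should be (essentially) zero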

Forward and Backward Substitution

Solution of (3b) by forward substitution:

$$y_i = b_i - \sum_{j=1}^{i-1} l_{ij} y_j \qquad (6a)$$

function y = forward(L, b)
% forward: Solution of an n-by-n lower triangular system
% Ly = b by forward substitution.
n = size(L, 1);
y(1) = b(1);
for i = 2:n
    y(i) = b(i) - dot(L(i,1:i-1)', y(1:i-1));
end

Solve (3a) by backward substitution:

$$x_i = \frac{1}{u_{ii}} \Bigl( y_i - \sum_{j=i+1}^{n} u_{ij} x_j \Bigr) \qquad (6b)$$

function x = backward(U, y)
% backward: Solution of an n-by-n upper triangular system
% Ux = y by backward substitution.
n = size(U, 1);
x(n) = y(n)/U(n,n);
for i = n-1:-1:1
    x(i) = (y(i) - dot(U(i,i+1:n)', x(i+1:n)))/U(i,i);
end

Example. Solve the linear system Ax = b for x when A is as given in Example 1 and

$$b = \begin{bmatrix} -1 & 7 & -24 & 3 \end{bmatrix}^T$$

- Using the forward substitution algorithm: y = [-1, 6, 3, 1]^T
- Using backward substitution: x = [1, 1, 1, 1]^T
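The same solution can be reproduced with the routines above. An illustrative driver (assuming lufactor, forward, and backward from the previous pages are on the MATLAB path):

A = [1 2 0 -4; -1 0 6 2; 3 -2 -25 0; -2 -3 4 4];
b = [-1; 7; -24; 3];
F = lufactor(A);
L = tril(F, -1) + eye(4);
U = triu(F);
y = forward(L, b)    % y = [-1 6 3 1]
x = backward(U, y)   % x = [1 1 1 1]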

Operation Count

Factorization (cf. the lufactor algorithm above):
- Inner column loop: i - 1 multiplications and subtractions, 1 division.
- Inner row loop: i multiplications and subtractions.
- The summations on j give (2i - 1)(n - i) multiplications and subtractions and n - i divisions.
- The summations on i give

  Multiplications and subtractions:

$$\sum_{i=1}^{n-1} (2i-1)(n-i) = \frac{n(n-1)(2n-1)}{6}$$

  Divisions:

$$\sum_{i=1}^{n-1} (n-i) = \frac{n(n-1)}{2}$$

  (Standard formulas for the sums of integers and of squares were used to obtain the result.)

- A floating point operation (FLOP) is any +, -, ×, /, or √.
- For large n the factorization therefore takes roughly 2n^3/3 FLOPs.
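The closed-form counts can be spot-checked against a direct summation for a small n (an illustrative check, not part of the original notes):

n = 8;
i = 1:n-1;
fprintf('mult/sub:  %d vs %d\n', sum((2*i-1).*(n-i)), n*(n-1)*(2*n-1)/6);
fprintf('divisions: %d vs %d\n', sum(n-i), n*(n-1)/2);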

Forward substitution (cf. the forward algorithm above):
- i - 1 multiplications and additions/subtractions in the dot product.
- Total multiplications and additions/subtractions:

$$\sum_{i=2}^{n} (i-1) = \frac{n(n-1)}{2}$$

Backward substitution:
- n(n-1)/2 multiplications and additions/subtractions and n divisions.

Forward and backward substitution together:
- For large n, there are about 2n^2 FLOPs.

Factorization dominates forward and backward substitution for large n.