Hani Mehrpouyan, California State University, Bakersfield. Signals and Systems

Hani Mehrpouyan, Department of Electrical and Computer Engineering, Lecture 26 (LU Factorization), May 30th, 2013. The material in these lectures is partly taken from the books: Elementary Numerical Analysis, 3rd Edition, K. Atkinson and W. Han; Applied Numerical Methods Using MATLAB, 1st Edition, W. Y. Yang, W. Cao, T.-S. Chung, and J. Morris; and Applied Numerical Analysis Using MATLAB, L. V. Fausett.

Revisiting GE. Consider solving a 4 × 4 linear system by Gaussian elimination without pivoting. We denote this linear system by Ax = b and work with its augmented matrix. To eliminate x1 from equations 2, 3, and 4, we use the corresponding multipliers.

Revisiting GE. This introduces zeros into the positions below the diagonal in column 1 of the augmented matrix. To eliminate x2 from equations 3 and 4, we again use the corresponding multipliers, which further reduces the augmented matrix.

Revisiting GE. To eliminate x3 from equation 4, we use one final multiplier. This reduces the augmented matrix to upper triangular form, which we can return to the familiar form of a linear system.

Revisiting GE. Solving this reduced system by back substitution gives the solution. There is a surprising result involving matrices associated with this elimination process. Introduce the upper triangular matrix U that resulted from the elimination process, and then introduce a lower triangular matrix L.

Revisiting GE. This L uses the multipliers introduced in the elimination process, and one can check that A = LU. In general, when Gaussian elimination without pivoting is applied to solving a linear system Ax = b, we obtain A = LU with L and U constructed as above. For the case in which partial pivoting is used, we obtain the slightly modified result LU = PA.
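
The following is a minimal MATLAB sketch, not from the lecture, illustrating this fact: it performs Gaussian elimination without pivoting on a small test matrix (the matrix itself is an arbitrary choice), stores the multipliers below the diagonal of L, and checks that L*U reproduces A.

% Minimal sketch: Gaussian elimination without pivoting, storing the
% multipliers so that A = L*U. The test matrix is an arbitrary example.
A = [4 3 2 1; 3 4 3 2; 2 3 4 3; 1 2 3 4];
n = size(A,1);
U = A;                 % will be reduced to upper triangular form
L = eye(n);            % multipliers are stored below the diagonal
for k = 1:n-1
    for i = k+1:n
        L(i,k) = U(i,k) / U(k,k);          % multiplier for row i, column k
        U(i,:) = U(i,:) - L(i,k) * U(k,:); % eliminate entry (i,k)
    end
end
norm(A - L*U)          % should be zero up to rounding error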

Revisiting GE. In the result LU = PA, the matrices L and U are constructed as before and P is a permutation matrix. The matrix PA is obtained from A by switching around the rows of A, so LU = PA means that the LU factorization is valid for the matrix A with its rows suitably permuted.

Revisiting GE. Consequences: if we have a factorization A = LU with L lower triangular and U upper triangular, then we can solve the linear system Ax = b in a relatively straightforward way. The linear system can be written as LUx = b. Write this as a two-stage process: first solve Lg = b for g, then solve Ux = g for x.

Revisiting GE. The system Lg = b is a lower triangular system, and we solve it by forward substitution. Then we solve the upper triangular system Ux = g by back substitution.
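
A minimal MATLAB sketch of this two-stage solve, assuming L, U, A, and a right-hand side b are already available (for instance from the sketch above):

% Solve A*x = b using A = L*U: forward substitution, then back substitution.
b = [1; 2; 3; 4];
n = length(b);
g = zeros(n,1);
for i = 1:n                                        % forward substitution: L*g = b
    g(i) = (b(i) - L(i,1:i-1) * g(1:i-1)) / L(i,i);
end
x = zeros(n,1);
for i = n:-1:1                                     % back substitution: U*x = g
    x(i) = (g(i) - U(i,i+1:n) * x(i+1:n)) / U(i,i);
end
norm(A*x - b)                                      % should be near zero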

Variants of the Gaussian Elim. To find the elements {l_{i,j}} and {u_{i,j}} directly, we multiply the matrices L and U on the right side of A = LU and match the results with the corresponding elements of A. Multiplying the first row of L times all of the columns of U gives the first row of U. Then multiplying rows 2, 3, 4 of L times the first column of U yields equations that we can solve for {l_{2,1}, l_{3,1}, l_{4,1}}. We can continue this process, finding the second row of U and then the second column of L, and so on. For example, to solve for l_{4,3}, we need to solve for it in the equation l_{4,1} u_{1,3} + l_{4,2} u_{2,3} + l_{4,3} u_{3,3} = a_{4,3}.
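
A hedged MATLAB sketch of this compact process, assuming L has ones on its diagonal as in the elimination above: each pass computes one row of U and then one column of L directly from the matching equations (no zero pivots are assumed to occur).

% Compact factorization sketch: row k of U, then column k of L.
A = [4 3 2 1; 3 4 3 2; 2 3 4 3; 1 2 3 4];   % same arbitrary test matrix as before
n = size(A,1);
L = eye(n);
U = zeros(n);
for k = 1:n
    for j = k:n                                        % row k of U
        U(k,j) = A(k,j) - L(k,1:k-1) * U(1:k-1,j);
    end
    for i = k+1:n                                      % column k of L
        L(i,k) = (A(i,k) - L(i,1:k-1) * U(1:k-1,k)) / U(k,k);
    end
end
% Each entry above is obtained from a dot product, as noted in the text.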

Variants of the Gaussian Elim. Why do this? A hint of an answer is given by this last equation. If we had an n × n matrix A, then we would find l_{n,n-1} by solving for it in the analogous equation l_{n,1} u_{1,n-1} + ... + l_{n,n-1} u_{n-1,n-1} = a_{n,n-1}. Embedded in this formula is a dot product. This is in fact typical of the process, with the length of the inner products varying from one position to another. Recalling Section 2.4 and the discussion of dot products, we can evaluate this last formula using higher-precision arithmetic and thus avoid many rounding errors. This leads to a variant of Gaussian elimination with far fewer rounding errors.

Variants of the Gaussian Elim. With ordinary Gaussian elimination, the number of rounding errors is proportional to n^3. Evaluating the inner products in higher precision reduces the number of rounding errors, with the number now being proportional to only n^2. This can lead to major increases in accuracy, especially for matrices A that are very sensitive to small changes.

Tridiagonal Matrices. Tridiagonal matrices occur very commonly in the numerical solution of partial differential equations, as well as in other applications (e.g. computing interpolating cubic spline functions). We factor A = LU, as before, but now L and U take very simple forms. Before proceeding, we note with an example that the same may not be true of the matrix inverse.

Tridiagonal Matrices. Define an n × n tridiagonal matrix A and compute its inverse A^{-1}: the inverse turns out to be full. Thus the sparse matrix A can (and usually does) have a dense inverse.
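
For instance, the following short MATLAB check (an illustration added here, not part of the slides) builds a 5 × 5 tridiagonal matrix with 2 on the main diagonal and -1 on the off-diagonals and shows that its inverse has no zero entries:

% A sparse tridiagonal matrix with a completely dense inverse.
A5 = 2*eye(5) - diag(ones(4,1), 1) - diag(ones(4,1), -1);   % 5-by-5 tridiagonal
inv(A5)                                                     % every entry is nonzero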

Tridiagonal Matrices. To find the LU factorization of a tridiagonal matrix, we again factor A = LU, but now L and U are bidiagonal. Multiplying these and matching coefficients with A determines the unknown entries {α_i, γ_i}.

Tridiagonal Matrices. By doing a few multiplications of rows of L times columns of U, we obtain the general pattern: a short recursion for the {α_i, γ_i}. These recursions are straightforward to solve.
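
A MATLAB sketch of one common form of this recursion follows. The entry names below are assumptions, since the slide's matrices are not reproduced here: the tridiagonal A is described by its subdiagonal a, main diagonal b, and superdiagonal c; L is taken lower bidiagonal with diagonal entries alpha and subdiagonal a, and U is unit upper bidiagonal with superdiagonal entries gamma.

% Tridiagonal LU sketch. Assumed notation: A has subdiagonal a(2:n),
% main diagonal b(1:n), superdiagonal c(1:n-1); L is lower bidiagonal
% with diagonal alpha and subdiagonal a; U is unit upper bidiagonal
% with superdiagonal gamma.
n = 5;
a = [0; -ones(n-1,1)];      % subdiagonal (a(1) is unused)
b = 2*ones(n,1);            % main diagonal
c = -ones(n-1,1);           % superdiagonal
alpha = zeros(n,1);
gamma = zeros(n-1,1);
alpha(1) = b(1);
for i = 1:n-1
    gamma(i)   = c(i) / alpha(i);               % match entry (i, i+1) of A
    alpha(i+1) = b(i+1) - a(i+1) * gamma(i);    % match entry (i+1, i+1) of A
end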

Tridiagonal Matrices. To solve the linear system Ax = f, we instead solve the two triangular systems: first Lg = f by forward substitution, and then Ux = g by back substitution.
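
Continuing the sketch above with the same assumed notation, each triangular solve reduces to a single short loop:

f = ones(n,1);                          % example right-hand side
g = zeros(n,1);
x = zeros(n,1);
g(1) = f(1) / alpha(1);                 % forward substitution for L*g = f
for i = 2:n
    g(i) = (f(i) - a(i) * g(i-1)) / alpha(i);
end
x(n) = g(n);                            % back substitution for U*x = g
for i = n-1:-1:1
    x(i) = g(i) - gamma(i) * x(i+1);
end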

Operations Count (Tridiagonal). Factoring A = LU and then solving Lg = f and Ux = g: the total number of arithmetic operations is approximately 3n to factor A, and it takes about 5n more to solve the linear system using the factorization of A.

Matlab Matrix Operations. To obtain the LU factorization of a matrix, including the use of partial pivoting, use the Matlab command lu. In particular, [L, U, P] = lu(A) returns the lower triangular matrix L, the upper triangular matrix U, and the permutation matrix P so that LU = PA.
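
For example, a minimal call (with A and b standing for any square nonsingular matrix and right-hand side):

A = [4 3 2 1; 3 4 3 2; 2 3 4 3; 1 2 3 4];   % arbitrary test matrix
b = [1; 2; 3; 4];
[L, U, P] = lu(A);           % permuted factorization: L*U = P*A
x = U \ (L \ (P*b))          % forward solve, then back solve, for A*x = b

This carries out in practice the two-stage solve described in the earlier slides, with the permutation applied to the right-hand side first.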
