
Part 3, Chapter 10: LU Factorization. PowerPoints organized by Dr. Michael R. Gustafson II, Duke University. All images copyright The McGraw-Hill Companies, Inc. Permission required for reproduction or display.

Chapter Objectives:
- Understanding that LU factorization involves decomposing the coefficient matrix into two triangular matrices that can then be used to efficiently evaluate different right-hand-side vectors.
- Knowing how to express Gauss elimination as an LU factorization.
- Given an LU factorization, knowing how to evaluate multiple right-hand-side vectors.
- Recognizing that Cholesky's method provides an efficient way to decompose a symmetric matrix and that the resulting triangular matrix and its transpose can be used to evaluate right-hand-side vectors efficiently.
- Understanding in general terms what happens when MATLAB's backslash operator is used to solve linear systems.

LU Factorization. Recall that the forward-elimination step of Gauss elimination comprises the bulk of the computational effort. LU factorization methods separate the time-consuming elimination of the matrix [A] from the manipulations of the right-hand side {b}. Once [A] has been factored (or decomposed), multiple right-hand-side vectors can be evaluated in an efficient manner.

LU Factorization. LU factorization involves two steps: (1) factorization, which decomposes the [A] matrix into the product of a lower triangular matrix [L], with 1 for each entry on the diagonal, and an upper triangular matrix [U]; and (2) substitution, which uses the triangular factors to solve for {x}. Gauss elimination can be expressed as an LU factorization.

Gauss Elimination as LU Factorization. [A]{x}={b} can be rewritten as [L][U]{x}={b} using LU factorization. The LU factorization algorithm requires the same total flops as Gauss elimination. The main advantage is that once [A] is decomposed, the same [L] and [U] can be used for multiple {b} vectors. MATLAB's lu function can be used to generate the [L] and [U] matrices: [L, U] = lu(A)

Forward elimination stores the factors used at each elimination step. Eliminating column 1 of [A] uses the factors $f_{i1} = a_{i1}/a_{11}$, eliminating column 2 uses $f_{i2} = a'_{i2}/a'_{22}$, and so on until the matrix is upper triangular:

$$[U] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a'_{22} & \cdots & a'_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & a_{nn}^{(n-1)} \end{bmatrix} \qquad [L] = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ f_{21} & 1 & & \vdots \\ \vdots & \ddots & \ddots & 0 \\ f_{n1} & \cdots & f_{n,n-1} & 1 \end{bmatrix}$$

It can be proved that $[A] = [L][U]$.

Gauss Elimination as LU Factorization (cont). To solve [A]{x}={b}, first decompose [A] to get [L][U]{x}={b}. Set up and solve [L]{d}={b}, where {d} can be found using forward substitution. Then set up and solve [U]{x}={d}, where {x} can be found using backward substitution. In MATLAB: [L, U] = lu(A), d = L\b, x = U\d
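As a concrete illustration, here is a minimal sketch that reuses one factorization for two right-hand sides. The 3-by-3 system is an assumed example, and note that with two outputs MATLAB's lu folds the pivoting row permutation into L:

```matlab
% Solve A*x = b for two different b vectors with a single factorization.
A  = [3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10];   % assumed example system
b1 = [7.85; -19.3; 71.4];
b2 = [1; 2; 3];                                % a second right-hand side

[L, U] = lu(A);     % with two outputs, L absorbs the row permutation
d  = L \ b1;        % forward substitution:  [L]{d} = {b}
x1 = U \ d;         % backward substitution: [U]{x} = {d}

x2 = U \ (L \ b2);  % the same [L] and [U] serve the new right-hand side
```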

Cholesky Factorization. Symmetric systems occur commonly in both mathematical and engineering/science problem contexts, and there are special solution techniques available for such systems. The Cholesky factorization is one of the most popular of these techniques, and is based on the fact that a symmetric matrix can be decomposed as $[A] = [U]^T[U]$, where T stands for transpose. The rest of the process is similar to LU decomposition and Gauss elimination, except only one matrix, [U], needs to be stored.

For a 4-by-4 matrix, the factorization $[A] = [U]^T[U]$ has the form

$$[U] = \begin{bmatrix} u_{11} & u_{12} & u_{13} & u_{14} \\ 0 & u_{22} & u_{23} & u_{24} \\ 0 & 0 & u_{33} & u_{34} \\ 0 & 0 & 0 & u_{44} \end{bmatrix}$$

Multiplying out $[U]^T[U]$ and equating the result entry by entry with $[A]$ yields the recurrence

$$u_{ii} = \sqrt{a_{ii} - \sum_{k=1}^{i-1} u_{ki}^2} \qquad u_{ij} = \frac{a_{ij} - \sum_{k=1}^{i-1} u_{ki}u_{kj}}{u_{ii}} \quad \text{for } j = i+1, \ldots, n$$

MATLAB. MATLAB can perform a Cholesky factorization with the built-in chol command: U = chol(A). MATLAB's left-division operator \ examines the system to see which method will most efficiently solve the problem. This includes trying banded solvers, back and forward substitution, and Cholesky factorization for symmetric systems. If these do not apply and the system is square, Gauss elimination with partial pivoting is used.
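A minimal sketch of the corresponding solve, assuming [A] is symmetric positive definite (the matrix and right-hand side below are assumed examples; chol raises an error if [A] is not positive definite):

```matlab
% Cholesky solve: factor once as A = U'*U, then two triangular solves.
A = [6 15 55; 15 55 225; 55 225 979];   % assumed s.p.d. example
b = [1; 2; 3];                          % arbitrary right-hand side

U = chol(A);    % upper triangular U with A = U'*U
d = U' \ b;     % forward substitution with [U]'
x = U \ d;      % backward substitution with [U]
```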

Part 3, Chapter 11: Matrix Inverse and Condition. PowerPoints organized by Dr. Michael R. Gustafson II, Duke University. All images copyright The McGraw-Hill Companies, Inc. Permission required for reproduction or display.

Chapter Objectives:
- Knowing how to determine the matrix inverse in an efficient manner based on LU factorization.
- Understanding how the matrix inverse can be used to assess stimulus-response characteristics of engineering systems.
- Understanding the meaning of matrix and vector norms and how they are computed.
- Knowing how to use norms to compute the matrix condition number.
- Understanding how the magnitude of the condition number can be used to estimate the precision of solutions of linear algebraic equations.

Matrix Inverse. Recall that if a matrix [A] is square, there is another matrix $[A]^{-1}$, called the inverse of [A], for which $[A][A]^{-1} = [A]^{-1}[A] = [I]$. The inverse can be computed in a column-by-column fashion by generating solutions with unit vectors as the right-hand-side constants. For a 3-by-3 matrix:

$$[A]\{x_1\} = \begin{Bmatrix} 1 \\ 0 \\ 0 \end{Bmatrix} \qquad [A]\{x_2\} = \begin{Bmatrix} 0 \\ 1 \\ 0 \end{Bmatrix} \qquad [A]\{x_3\} = \begin{Bmatrix} 0 \\ 0 \\ 1 \end{Bmatrix} \qquad [A]^{-1} = \begin{bmatrix} \{x_1\} & \{x_2\} & \{x_3\} \end{bmatrix}$$

Matrix Inverse (cont). Recall that LU factorization can be used to efficiently evaluate a system for multiple right-hand-side vectors; thus, it is ideal for evaluating the multiple unit vectors needed to compute the inverse.
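A sketch of this column-by-column approach, using an assumed example matrix (in practice MATLAB's inv or backslash would be used directly):

```matlab
% Compute inv(A) one column at a time from a single LU factorization.
A = [3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10];   % assumed example
n = size(A, 1);
[L, U, P] = lu(A);                % P*A = L*U with partial pivoting
Ainv = zeros(n);
for j = 1:n
    e = zeros(n, 1);  e(j) = 1;   % unit vector for column j
    d = L \ (P*e);                % forward substitution
    Ainv(:, j) = U \ d;           % back substitution gives column j
end
disp(norm(Ainv*A - eye(n)))       % should be near machine precision
```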

Stimulus-Response Computations. Many systems can be modeled as a linear combination of equations, and thus written as a matrix equation:

$$[\text{interactions}]\{\text{response}\} = \{\text{stimuli}\}$$

The system response can thus be found using the matrix inverse: $\{\text{response}\} = [\text{interactions}]^{-1}\{\text{stimuli}\}$.

Vector and Matrix Norms. A norm is a real-valued function that provides a measure of the size or length of multi-component mathematical entities such as vectors and matrices. Vector norms and matrix norms may be computed differently.

Vector Norms. For a vector {X} of size n, the p-norm is

$$\|X\|_p = \left(\sum_{i=1}^{n} |x_i|^p\right)^{1/p}$$

Important examples of vector p-norms include:

$p = 1$ (sum of the absolute values): $\|X\|_1 = \sum_{i=1}^{n} |x_i|$
$p = 2$ (Euclidean norm, the vector's length): $\|X\|_2 = \|X\|_e = \sqrt{\sum_{i=1}^{n} x_i^2}$
$p = \infty$ (maximum magnitude): $\|X\|_\infty = \max_{1 \le i \le n} |x_i|$

Matrix Norms. Common matrix norms for a matrix [A] include:

Column-sum norm: $\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^{n} |a_{ij}|$
Frobenius norm: $\|A\|_f = \sqrt{\sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij}^2}$
Row-sum norm: $\|A\|_\infty = \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}|$
Spectral norm (2-norm): $\|A\|_2 = (\mu_{\max})^{1/2}$

Note: $\mu_{\max}$ is the largest eigenvalue of $[A]^T[A]$.

Matrix Condition Number. The matrix condition number Cond[A] is obtained by calculating $\operatorname{Cond}[A] = \|A\| \cdot \|A^{-1}\|$. It can be shown that

$$\frac{\|\Delta X\|}{\|X\|} \le \operatorname{Cond}[A] \, \frac{\|\Delta A\|}{\|A\|}$$

That is, the relative error of the norm of the computed solution can be as large as the relative error of the norm of the coefficients of [A] multiplied by the condition number. If the coefficients of [A] are known to t-digit precision, the solution {X} may be valid to only $t - \log_{10}(\operatorname{Cond}[A])$ digits.

MATLAB Commands. MATLAB has built-in functions to compute both norms and condition numbers:
norm(X,p): computes the p-norm of vector X, where p can be any number, inf, or 'fro' (for the Euclidean norm).
norm(A,p): computes a norm of matrix A, where p can be 1, 2, inf, or 'fro' (for the Frobenius norm).
cond(X,p) or cond(A,p): calculates the condition number of vector X or matrix A using the norm specified by p. If p is omitted, it defaults to 2.

The Hilbert matrix, a classically ill-conditioned matrix:

$$[H] = \begin{bmatrix} 1 & \tfrac{1}{2} & \tfrac{1}{3} & \cdots & \tfrac{1}{n} \\ \tfrac{1}{2} & \tfrac{1}{3} & \tfrac{1}{4} & \cdots & \tfrac{1}{n+1} \\ \vdots & \vdots & \vdots & & \vdots \\ \tfrac{1}{n} & \tfrac{1}{n+1} & \tfrac{1}{n+2} & \cdots & \tfrac{1}{2n-1} \end{bmatrix}$$
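To see the digits-lost rule of thumb in action, the following sketch uses the built-in hilb and cond functions (the sizes in the loop are arbitrary choices):

```matlab
% The Hilbert matrix's condition number grows rapidly with n;
% log10(cond) estimates the number of digits lost in a solve.
for n = [3 5 8 12]
    c = cond(hilb(n));        % 2-norm condition number
    fprintf('n = %2d   cond = %9.2e   digits lost ~ %4.1f\n', ...
            n, c, log10(c));
end
```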

Part 3, Chapter 12: Iterative Methods. PowerPoints organized by Dr. Michael R. Gustafson II, Duke University. All images copyright The McGraw-Hill Companies, Inc. Permission required for reproduction or display.

Chapter Objectives:
- Understanding the difference between the Gauss-Seidel and Jacobi methods.
- Knowing how to assess diagonal dominance and knowing what it means.
- Recognizing how relaxation can be used to improve the convergence of iterative methods.
- Understanding how to solve systems of nonlinear equations with successive substitution and Newton-Raphson.

Gauss-Seidel Method. The Gauss-Seidel method is the most commonly used iterative method for solving linear algebraic equations [A]{x}={b}. The method solves each equation in the system for a particular variable, and then uses that new value immediately in the equations that follow. For a 3-by-3 system

$$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 = b_1$$
$$a_{21}x_1 + a_{22}x_2 + a_{23}x_3 = b_2$$
$$a_{31}x_1 + a_{32}x_2 + a_{33}x_3 = b_3$$

with nonzero elements along the diagonal, the j-th iteration values are found from the (j-1)-th iteration using:

$$x_1^{(j)} = \frac{b_1 - a_{12}x_2^{(j-1)} - a_{13}x_3^{(j-1)}}{a_{11}} \qquad x_2^{(j)} = \frac{b_2 - a_{21}x_1^{(j)} - a_{23}x_3^{(j-1)}}{a_{22}} \qquad x_3^{(j)} = \frac{b_3 - a_{31}x_1^{(j)} - a_{32}x_2^{(j)}}{a_{33}}$$

Jacobi Iteration. The Jacobi iteration is similar to the Gauss-Seidel method, except that only the (j-1)-th iteration's values are used to update all variables in the j-th iteration; newly computed values are held back until the next full sweep. (Figure: side-by-side comparison of the update patterns of (a) Gauss-Seidel and (b) Jacobi.)

Convergence. The convergence of an iterative method can be assessed by determining the relative percent change of each element in {x}. For example, for the i-th element in the j-th iteration,

$$\varepsilon_{a,i} = \left|\frac{x_i^{(j)} - x_i^{(j-1)}}{x_i^{(j)}}\right| \times 100\%$$

The method is ended when all elements have converged to within a set tolerance.

Diagonal Dominance. The Gauss-Seidel method may diverge, but if the system is diagonally dominant, it is guaranteed to converge. Diagonal dominance means that the magnitude of each diagonal element is greater than the sum of the magnitudes of the other elements in its row:

$$|a_{ii}| > \sum_{\substack{j=1 \\ j \ne i}}^{n} |a_{ij}|$$
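A quick check for strict diagonal dominance, sketched with an assumed example matrix:

```matlab
% True if every diagonal magnitude exceeds the sum of the other
% magnitudes in its row, which guarantees Gauss-Seidel convergence.
A = [3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10];   % assumed example
d = abs(diag(A));
offdiag = sum(abs(A), 2) - d;   % row sums excluding the diagonal
isDominant = all(d > offdiag)   % displays logical 1 for this A
```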

MATLAB Program. A MATLAB implementation of the method is built around the same update formulas:

$$x_1^{(j)} = \frac{b_1 - a_{12}x_2^{(j-1)} - a_{13}x_3^{(j-1)}}{a_{11}} \qquad x_2^{(j)} = \frac{b_2 - a_{21}x_1^{(j)} - a_{23}x_3^{(j-1)}}{a_{22}} \qquad x_3^{(j)} = \frac{b_3 - a_{31}x_1^{(j)} - a_{32}x_2^{(j)}}{a_{33}}$$
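A minimal general-n sketch of such a program (the function name, tolerance convention, and stopping test are assumptions, not the textbook's listing):

```matlab
function x = gauss_seidel(A, b, es, maxit)
% Gauss-Seidel iteration sketch. es is a stopping tolerance in percent;
% A needs nonzero diagonal entries.
n = numel(b);
x = zeros(n, 1);                      % initial guess
for iter = 1:maxit
    xold = x;
    for i = 1:n
        s = A(i,:)*x - A(i,i)*x(i);   % sum of a_ij*x_j for j ~= i
        x(i) = (b(i) - s) / A(i,i);   % new value is used immediately
    end
    % Relative percent change (assumes no element of x is exactly zero).
    ea = max(abs((x - xold) ./ x)) * 100;
    if ea < es, return; end
end
end
```

For example, x = gauss_seidel([3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10], [7.85; -19.3; 71.4], 1e-5, 100) converges quickly because that matrix is diagonally dominant.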

Relaxation. To enhance convergence, an iterative program can introduce relaxation, where the value at a particular iteration is made up of a combination of the old value and the newly calculated value:

$$x_i^{\text{new}} = \lambda x_i^{\text{new}} + (1 - \lambda) x_i^{\text{old}}$$

where $\lambda$ is a weighting factor that is assigned a value between 0 and 2:
$0 < \lambda < 1$: underrelaxation
$\lambda = 1$: no relaxation
$1 < \lambda \le 2$: overrelaxation
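In the gauss_seidel sketch above, relaxation would replace the update line inside the i-loop; a hedged drop-in excerpt (lambda is assumed to be supplied by the caller):

```matlab
% Weight the newly computed value against the previous iterate.
xnew = (b(i) - s) / A(i,i);
x(i) = lambda*xnew + (1 - lambda)*x(i);
```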

Nonlinear Systems. Nonlinear systems can also be solved using the same strategy as the Gauss-Seidel method: solve each equation for one of the unknowns and update each unknown using information from the previous iteration. This is called successive substitution.
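A sketch of successive substitution on a small nonlinear system (this particular system, its rearrangement, and the initial guesses are assumptions for illustration; convergence depends heavily on how each equation is rearranged):

```matlab
% Successive substitution for:  x1^2 + x1*x2 = 10,  x2 + 3*x1*x2^2 = 57
% Each equation is rearranged for one unknown and iterated in turn.
x1 = 1.5;  x2 = 3.5;                  % initial guesses (assumed)
for k = 1:50
    x1 = sqrt(10 - x1*x2);            % first equation solved for x1
    x2 = sqrt((57 - x2) / (3*x1));    % second equation solved for x2
end
fprintf('x1 = %.4f, x2 = %.4f\n', x1, x2)   % approaches (2, 3)
```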
