1 AM 205: lecture 7. Last time: LU factorization. Today's lecture: Cholesky factorization, timing, QR factorization. Reminder: assignment 1 due at 5 PM on Friday, September 22.
2 LU Factorization. LU factorization is the most common way of solving linear systems: Ax = b becomes LUx = b. Let y ≡ Ux; then Ly = b: solve for y via forward substitution. Then solve Ux = y via back substitution. Here y = L^{-1}b is the transformed right-hand side vector (i.e. Mb from earlier) that we are familiar with from Gaussian elimination.
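As an illustrative sketch (hypothetical helper names, not the course's reference code), the two triangular solves look like this in NumPy:

```python
import numpy as np

def forward_substitution(L, b):
    """Solve Ly = b for lower-triangular L, top row downward."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_substitution(U, y):
    """Solve Ux = y for upper-triangular U, bottom row upward."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# Given A = LU, solve Ax = b by two triangular solves.
L = np.array([[1.0, 0.0], [0.5, 1.0]])
U = np.array([[2.0, 4.0], [0.0, 1.0]])
b = np.array([2.0, 2.0])
y = forward_substitution(L, b)   # Ly = b
x = back_substitution(U, y)      # Ux = y
```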
3 LU Factorization. Next question: how should we determine M_1, M_2, ..., M_{n-1}? We need to be able to annihilate selected entries of A below the diagonal in order to obtain an upper-triangular matrix. To do this, we use elementary elimination matrices. Let L_j denote the j-th elimination matrix (we use L_j rather than M_j from now on, as elimination matrices are lower triangular).
4 LU Factorization. Let X ≡ L_{j-1} L_{j-2} ... L_1 A denote the matrix at the start of step j, and let x_{(:,j)} ∈ R^n denote column j of X. Then we define L_j such that

L_j x_{(:,j)} =
\begin{bmatrix}
1 & & & & & \\
& \ddots & & & & \\
& & 1 & & & \\
& & -x_{j+1,j}/x_{jj} & 1 & & \\
& & \vdots & & \ddots & \\
& & -x_{nj}/x_{jj} & & & 1
\end{bmatrix}
\begin{bmatrix} x_{1j} \\ \vdots \\ x_{jj} \\ x_{j+1,j} \\ \vdots \\ x_{nj} \end{bmatrix}
=
\begin{bmatrix} x_{1j} \\ \vdots \\ x_{jj} \\ 0 \\ \vdots \\ 0 \end{bmatrix}
5 LU Factorization. To simplify notation, we let l_{ij} ≡ x_{ij}/x_{jj} in order to obtain

L_j =
\begin{bmatrix}
1 & & & & & \\
& \ddots & & & & \\
& & 1 & & & \\
& & -l_{j+1,j} & 1 & & \\
& & \vdots & & \ddots & \\
& & -l_{nj} & & & 1
\end{bmatrix}
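A minimal NumPy sketch of building such an elimination matrix (the function name `elimination_matrix` is illustrative) and checking that it zeros the subdiagonal of the chosen column:

```python
import numpy as np

def elimination_matrix(X, j):
    """Build L_j that zeros entries below the diagonal in column j of X.

    Subdiagonal entries are -l_ij = -x_ij / x_jj (0-based column index j).
    """
    n = X.shape[0]
    Lj = np.eye(n)
    Lj[j+1:, j] = -X[j+1:, j] / X[j, j]
    return Lj

X = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L1 = elimination_matrix(X, 0)
Y = L1 @ X   # column 0 now has zeros below the diagonal
```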
6 LU Factorization. Using elementary elimination matrices, we can reduce A to upper-triangular form one column at a time. Schematically, for a 4×4 matrix we have

A =
\begin{bmatrix} \times & \times & \times & \times \\ \times & \times & \times & \times \\ \times & \times & \times & \times \\ \times & \times & \times & \times \end{bmatrix}
\;\longrightarrow\;
L_1 A =
\begin{bmatrix} \times & \times & \times & \times \\ 0 & \times & \times & \times \\ 0 & \times & \times & \times \\ 0 & \times & \times & \times \end{bmatrix}
\;\longrightarrow\;
L_2 L_1 A =
\begin{bmatrix} \times & \times & \times & \times \\ 0 & \times & \times & \times \\ 0 & 0 & \times & \times \\ 0 & 0 & \times & \times \end{bmatrix}

Key point: L_k does not affect columns 1, 2, ..., k-1 of L_{k-1} L_{k-2} ... L_1 A.
7 LU Factorization. After n-1 steps, we obtain the upper-triangular matrix

U = L_{n-1} ... L_2 L_1 A =
\begin{bmatrix} \times & \times & \times & \times \\ & \times & \times & \times \\ & & \times & \times \\ & & & \times \end{bmatrix}
8 LU Factorization. Finally, we wish to form the factorization A = LU, hence we need

L = (L_{n-1} ... L_2 L_1)^{-1} = L_1^{-1} L_2^{-1} ... L_{n-1}^{-1}

This turns out to be surprisingly simple due to two strokes of luck! First stroke of luck: L_j^{-1} is obtained simply by negating the subdiagonal entries of L_j:

L_j =
\begin{bmatrix}
1 & & & & \\
& \ddots & & & \\
& -l_{j+1,j} & 1 & & \\
& \vdots & & \ddots & \\
& -l_{nj} & & & 1
\end{bmatrix}, \qquad
L_j^{-1} =
\begin{bmatrix}
1 & & & & \\
& \ddots & & & \\
& l_{j+1,j} & 1 & & \\
& \vdots & & \ddots & \\
& l_{nj} & & & 1
\end{bmatrix}
9 LU Factorization. Explanation: let l_j ≡ [0, ..., 0, l_{j+1,j}, ..., l_{nj}]^T, so that L_j = I - l_j e_j^T. Now consider L_j (I + l_j e_j^T):

L_j (I + l_j e_j^T) = (I - l_j e_j^T)(I + l_j e_j^T) = I + l_j e_j^T - l_j e_j^T - l_j (e_j^T l_j) e_j^T = I - l_j (e_j^T l_j) e_j^T

Also, e_j^T l_j = 0 (why?), so that L_j (I + l_j e_j^T) = I. By the same argument, (I + l_j e_j^T) L_j = I, and hence L_j^{-1} = I + l_j e_j^T.
10 LU Factorization. Next we want to form the matrix L ≡ L_1^{-1} L_2^{-1} ... L_{n-1}^{-1}. Note that we have

L_j^{-1} L_{j+1}^{-1} = (I + l_j e_j^T)(I + l_{j+1} e_{j+1}^T) = I + l_j e_j^T + l_{j+1} e_{j+1}^T + l_j (e_j^T l_{j+1}) e_{j+1}^T = I + l_j e_j^T + l_{j+1} e_{j+1}^T

since e_j^T l_{j+1} = 0. Interestingly, this convenient result doesn't hold for L_{j+1}^{-1} L_j^{-1}; why?
11 LU Factorization. Similarly,

L_j^{-1} L_{j+1}^{-1} L_{j+2}^{-1} = (I + l_j e_j^T + l_{j+1} e_{j+1}^T)(I + l_{j+2} e_{j+2}^T) = I + l_j e_j^T + l_{j+1} e_{j+1}^T + l_{j+2} e_{j+2}^T

That is, to compute the product L_1^{-1} L_2^{-1} ... L_{n-1}^{-1}, we simply collect the subdiagonals for j = 1, 2, ..., n-1.
12 LU Factorization. Hence, second stroke of luck:

L ≡ L_1^{-1} L_2^{-1} ... L_{n-1}^{-1} =
\begin{bmatrix}
1 & & & & \\
l_{21} & 1 & & & \\
l_{31} & l_{32} & 1 & & \\
\vdots & \vdots & & \ddots & \\
l_{n1} & l_{n2} & \cdots & l_{n,n-1} & 1
\end{bmatrix}
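Both strokes of luck are easy to check numerically; the sketch below (illustrative code) builds random matrices L_j = I - l_j e_j^T and verifies that negating subdiagonals inverts each L_j, and that the product of the inverses just collects the subdiagonal columns:

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
Ls, Linvs = [], []
for j in range(n - 1):
    lj = np.zeros(n)
    lj[j+1:] = rng.standard_normal(n - 1 - j)   # zeros in first j+1 entries
    ej = np.zeros(n)
    ej[j] = 1.0
    Ls.append(np.eye(n) - np.outer(lj, ej))     # L_j = I - l_j e_j^T
    Linvs.append(np.eye(n) + np.outer(lj, ej))  # claimed inverse

# First stroke of luck: L_j^{-1} = I + l_j e_j^T
ok_inverse = all(np.allclose(Lj @ Li, np.eye(n)) for Lj, Li in zip(Ls, Linvs))

# Second stroke of luck: the product L_1^{-1} ... L_{n-1}^{-1} is unit
# lower triangular with the subdiagonal columns simply collected.
L = np.eye(n)
for Li in Linvs:
    L = L @ Li
collected = np.eye(n)
for j, Li in enumerate(Linvs):
    collected[j+1:, j] = Li[j+1:, j]
```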
13 LU Factorization. Therefore, the basic LU factorization algorithm is:

1: U = A, L = I
2: for j = 1 : n-1 do
3:   for i = j+1 : n do
4:     l_ij = u_ij / u_jj
5:     for k = j : n do
6:       u_ik = u_ik - l_ij u_jk
7:     end for
8:   end for
9: end for

Note that the entries of U are updated each iteration, so at the start of step j, U = L_{j-1} L_{j-2} ... L_1 A. Here line 4 comes straight from the definition l_ij ≡ u_ij / u_jj.
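A direct NumPy transcription of this pseudocode (with the k-loop vectorized; `lu_no_pivot` is an illustrative name) might look like:

```python
import numpy as np

def lu_no_pivot(A):
    """Basic LU factorization without pivoting (fails on a zero pivot)."""
    n = A.shape[0]
    U = A.astype(float)
    L = np.eye(n)
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]               # line 4: l_ij = u_ij/u_jj
            U[i, j:] = U[i, j:] - L[i, j] * U[j, j:]  # line 6, over all k >= j
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = lu_no_pivot(A)
```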
14 LU Factorization. Line 6 accounts for the effect of L_j on columns k = j, j+1, ..., n of U. For k = j : n we have

L_j u_{(:,k)} =
\begin{bmatrix}
1 & & & & & \\
& \ddots & & & & \\
& & 1 & & & \\
& & -l_{j+1,j} & 1 & & \\
& & \vdots & & \ddots & \\
& & -l_{nj} & & & 1
\end{bmatrix}
\begin{bmatrix} u_{1k} \\ \vdots \\ u_{jk} \\ u_{j+1,k} \\ \vdots \\ u_{nk} \end{bmatrix}
=
\begin{bmatrix} u_{1k} \\ \vdots \\ u_{jk} \\ u_{j+1,k} - l_{j+1,j} u_{jk} \\ \vdots \\ u_{nk} - l_{nj} u_{jk} \end{bmatrix}

The vector on the right is the updated k-th column of U, which is computed in line 6.
15 LU Factorization. LU factorization involves a triply nested for-loop, hence O(n^3) calculations. Careful operation counting shows LU factorization requires about (1/3)n^3 additions and (1/3)n^3 multiplications, i.e. about (2/3)n^3 operations in total.
16 Solving a linear system using LU. Hence to solve Ax = b, we perform the following three steps:
Step 1: factorize A into L and U: ~(2/3)n^3 operations
Step 2: solve Ly = b by forward substitution: ~n^2 operations
Step 3: solve Ux = y by back substitution: ~n^2 operations
Total work is dominated by Step 1: ~(2/3)n^3 operations.
17 Solving a linear system using LU. An alternative approach would be to compute A^{-1} explicitly and evaluate x = A^{-1}b, but this is a bad idea! Question: how would we compute A^{-1}?
18 Solving a linear system using LU. Answer: let a^{inv}_{(:,k)} denote the k-th column of A^{-1}; then a^{inv}_{(:,k)} must satisfy A a^{inv}_{(:,k)} = e_k. Therefore, to compute A^{-1}, we first LU factorize A, then forward/back substitute for the right-hand side vector e_k, for k = 1, 2, ..., n. The n forward/back substitutions alone require 2n^3 operations; inefficient! A rule of thumb in numerical linear algebra: it is almost always a bad idea to compute A^{-1} explicitly.
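The column-by-column construction can be sketched as follows (illustrative only: here np.linalg.solve stands in for the factor-then-substitute steps, and each call re-solves from scratch, whereas in practice one would factor A once and reuse the factors):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)  # a well-conditioned test matrix

# Column k of A^{-1} solves A a_k = e_k, for k = 1, ..., n.
Ainv = np.column_stack([np.linalg.solve(A, e) for e in np.eye(n)])
```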
19 Solving a linear system using LU. Another case where LU factorization is very helpful is if we want to solve Ax = b_i for several different right-hand sides b_i, i = 1, ..., k. We incur the (2/3)n^3 factorization cost only once, and then each subsequent forward/back substitution costs only 2n^2. This makes a huge difference if n is large!
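With SciPy, the factor-once / solve-many pattern can be sketched with scipy.linalg.lu_factor and lu_solve:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)
bs = [rng.standard_normal(n) for _ in range(3)]   # several right-hand sides

lu, piv = lu_factor(A)                        # O(n^3) factorization, paid once
xs = [lu_solve((lu, piv), b) for b in bs]     # O(n^2) per right-hand side
```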
20 Stability of Gaussian Elimination. There is a problem with the LU algorithm presented above. Consider the matrix

A = \begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}

A is nonsingular and well-conditioned (κ(A) ≈ 2.62), but LU factorization fails at the first step (division by zero).
21 Stability of Gaussian Elimination. LU factorization doesn't fail for

A = \begin{bmatrix} 10^{-20} & 1 \\ 1 & 1 \end{bmatrix}

but we get

L = \begin{bmatrix} 1 & 0 \\ 10^{20} & 1 \end{bmatrix}, \qquad
U = \begin{bmatrix} 10^{-20} & 1 \\ 0 & 1 - 10^{20} \end{bmatrix}
22 Stability of Gaussian Elimination. Let's suppose that 10^{-20} ∈ F (i.e. it is a floating point number) and that round(1 - 10^{20}) = -10^{20}. Then in finite precision arithmetic we get

\tilde L = \begin{bmatrix} 1 & 0 \\ 10^{20} & 1 \end{bmatrix}, \qquad
\tilde U = \begin{bmatrix} 10^{-20} & 1 \\ 0 & -10^{20} \end{bmatrix}
23 Stability of Gaussian Elimination. Hence due to rounding error we obtain

\tilde L \tilde U = \begin{bmatrix} 10^{-20} & 1 \\ 1 & 0 \end{bmatrix}

which is not close to

A = \begin{bmatrix} 10^{-20} & 1 \\ 1 & 1 \end{bmatrix}

Then, for example, let b = [3, 3]^T. Using \tilde L \tilde U, we get x̃ = [3, 3]^T, whereas the true answer is x = [0, 3]^T. Hence a large relative error (rel. err. = 1), even though the problem is well-conditioned.
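This rounding effect can be reproduced in IEEE double precision by carrying out the single elimination step by hand (a sketch; the tiny pivot 1e-20 plays the role of the slide's small entry):

```python
import numpy as np

eps = 1e-20                       # tiny pivot
A = np.array([[eps, 1.0],
              [1.0, 1.0]])

# One step of elimination without pivoting, in float64:
l21 = A[1, 0] / A[0, 0]           # a huge multiplier, about 1e20
u22 = A[1, 1] - l21 * A[0, 1]     # fl(1 - 1e20): the "1" is lost to rounding
L = np.array([[1.0, 0.0], [l21, 1.0]])
U = np.array([[A[0, 0], A[0, 1]], [0.0, u22]])

residual = L @ U - A              # entry (2,2) of L@U is 0, but A has 1 there
```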
24 Stability of Gaussian Elimination. In this example, standard Gaussian elimination yields a large residual. Equivalently, it yields the exact solution to a problem corresponding to a large input perturbation, Δb = [0, 3]^T: hence it is an unstable algorithm! In this case the cause of the large error in x is numerical instability, not ill-conditioning. To stabilize Gaussian elimination, we need to permute rows, i.e. perform pivoting.
25 Pivoting. Recall that the Gaussian elimination process uses the diagonal entry x_jj as the pivot to annihilate the entries below it. But we could just as easily pick some other entry x_ij (i > j) in column j and use it to annihilate the remaining entries in that column.
26 Partial Pivoting. The entry x_ij is called the pivot, and flexibility in choosing the pivot is essential; otherwise we can't deal with, e.g.:

A = \begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}

From a numerical stability point of view, it is crucial to choose the pivot to be the largest entry in column j: this is partial pivoting. (Full pivoting refers to also searching through columns j : n for the largest entry; this is more expensive and gives only marginal benefit to stability in practice.) Partial pivoting ensures that each l_ij entry, which acts as a multiplier in the LU factorization process, satisfies |l_ij| ≤ 1.
27 Partial Pivoting. To maintain the triangular LU structure, we permute rows by premultiplying by permutation matrices: first P_1 interchanges rows so that the selected pivot x_ij moves into the pivot position (pivot selection, then row interchange), and then L_1 annihilates the entries below it. Each P_j is obtained by swapping two rows of I.
28 Partial Pivoting. Therefore, with partial pivoting we obtain

L_{n-1} P_{n-1} ... L_2 P_2 L_1 P_1 A = U

It can be shown (we omit the details here; see Trefethen & Bau) that this can be rewritten as

PA = LU

where P ≡ P_{n-1} ... P_2 P_1. (The L matrix here is lower triangular, but not the same as L in the non-pivoting case: we have to account for the row swaps.)

Theorem: Gaussian elimination with partial pivoting produces nonsingular factors L and U if and only if A is nonsingular.
29 Partial Pivoting. Pseudocode for LU factorization with partial pivoting (lines 3-6 are new relative to the basic algorithm):

1: U = A, L = I, P = I
2: for j = 1 : n-1 do
3:   Select i (≥ j) that maximizes |u_ij|
4:   Interchange rows of U: u_(j, j:n) ↔ u_(i, j:n)
5:   Interchange rows of L: l_(j, 1:j-1) ↔ l_(i, 1:j-1)
6:   Interchange rows of P: p_(j, :) ↔ p_(i, :)
7:   for i = j+1 : n do
8:     l_ij = u_ij / u_jj
9:     for k = j : n do
10:      u_ik = u_ik - l_ij u_jk
11:    end for
12:  end for
13: end for

Again this requires ~(2/3)n^3 floating point operations.
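A NumPy sketch of this pseudocode (with the inner loops over k vectorized; `lu_partial_pivot` is an illustrative name), checked on the matrix that defeats the non-pivoting algorithm:

```python
import numpy as np

def lu_partial_pivot(A):
    """LU factorization with partial pivoting: returns P, L, U with PA = LU."""
    n = A.shape[0]
    U = A.astype(float)
    L = np.eye(n)
    P = np.eye(n)
    for j in range(n - 1):
        i = j + np.argmax(np.abs(U[j:, j]))   # pivot row: largest |u_ij|, i >= j
        U[[j, i], j:] = U[[i, j], j:]         # interchange rows of U
        L[[j, i], :j] = L[[i, j], :j]         # interchange computed part of L
        P[[j, i], :] = P[[i, j], :]           # record the permutation
        for r in range(j + 1, n):
            L[r, j] = U[r, j] / U[j, j]
            U[r, j:] -= L[r, j] * U[j, j:]
    return P, L, U

A = np.array([[0.0, 1.0],
              [1.0, 1.0]])   # LU without pivoting fails here; pivoting succeeds
P, L, U = lu_partial_pivot(A)
```

Note that every multiplier satisfies |l_ij| ≤ 1, as the pivot choice guarantees.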
30 Partial Pivoting: Solve Ax = b. To solve Ax = b using the factorization PA = LU:
Multiply through by P to obtain PAx = LUx = Pb
Solve Ly = Pb using forward substitution
Then solve Ux = y using back substitution
31 Partial Pivoting in Python. Python's scipy.linalg.lu function can do LU factorization with pivoting:

>>> import numpy as np
>>> import scipy.linalg
>>> a = np.random.random((4, 4))
>>> (p, l, u) = scipy.linalg.lu(a)
>>> p
array([[ 0.,  0.,  1.,  0.],
       [ 0.,  1.,  0.,  0.],
       [ 1.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  1.]])

Here p is a permutation matrix, l is unit lower triangular (ones on the diagonal, multipliers below it), and u is upper triangular.
32 Stability of Gaussian Elimination. The numerical stability of Gaussian elimination has been an important research topic since the 1940s. A major figure in this field: James H. Wilkinson (English numerical analyst, 1919-1986). He showed that for Ax = b with A ∈ R^{n×n}: Gaussian elimination without partial pivoting is numerically unstable (as we've already seen), while Gaussian elimination with partial pivoting satisfies

‖r‖ / (‖A‖ ‖x‖) ≤ 2^{n-1} n^2 ε_mach
33 Stability of Gaussian Elimination. That is, pathological cases exist where the relative residual, ‖r‖/(‖A‖ ‖x‖), grows exponentially with n due to rounding error. The worst-case behavior of Gaussian elimination with partial pivoting is explosive instability, but such pathological cases are extremely rare! In over 50 years of scientific computation, instability has only been encountered due to deliberate construction of pathological cases. In practice, Gaussian elimination is stable in the sense that it produces a small relative residual.
34 Stability of Gaussian Elimination. In practice, we typically obtain

‖r‖ / (‖A‖ ‖x‖) ≲ n ε_mach

i.e. the relative residual grows only linearly with n, and is scaled by ε_mach. Combining this result with our inequality

‖Δx‖ / ‖x‖ ≤ κ(A) ‖r‖ / (‖A‖ ‖x‖)

implies that in practice Gaussian elimination gives a small error for well-conditioned problems!
35 Cholesky Factorization
36 Cholesky factorization. Suppose that A ∈ R^{n×n} is an SPD matrix, i.e.: symmetric: A^T = A; positive definite: for any v ≠ 0, v^T A v > 0. Then the LU factorization of A can be arranged so that U = L^T, i.e. A = LL^T (but in this case L may not have 1s on the diagonal). Consider the 2×2 case:

\begin{bmatrix} a_{11} & a_{21} \\ a_{21} & a_{22} \end{bmatrix} =
\begin{bmatrix} l_{11} & 0 \\ l_{21} & l_{22} \end{bmatrix}
\begin{bmatrix} l_{11} & l_{21} \\ 0 & l_{22} \end{bmatrix}

Equating entries gives l_{11} = √a_{11}, l_{21} = a_{21}/l_{11}, l_{22} = √(a_{22} - l_{21}^2).
37 Cholesky factorization. This approach of equating entries can be used to derive the Cholesky factorization for the general n×n case:

1: L = A
2: for j = 1 : n do
3:   l_jj = √(l_jj)
4:   for i = j+1 : n do
5:     l_ij = l_ij / l_jj
6:   end for
7:   for k = j+1 : n do
8:     for i = k : n do
9:       l_ik = l_ik - l_ij l_kj
10:    end for
11:  end for
12: end for
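A NumPy sketch of this pseudocode with the inner i-loops vectorized (`cholesky` is an illustrative reimplementation, which can be checked against np.linalg.cholesky):

```python
import numpy as np

def cholesky(A):
    """Cholesky factorization A = L L^T for an SPD matrix A (no pivoting)."""
    L = A.astype(float)
    n = L.shape[0]
    for j in range(n):
        L[j, j] = np.sqrt(L[j, j])          # line 3
        L[j+1:, j] /= L[j, j]               # lines 4-6
        for k in range(j + 1, n):
            L[k:, k] -= L[k:, j] * L[k, j]  # lines 7-11
    return np.tril(L)                       # discard the unused upper part

A = np.array([[4.0, 2.0, 2.0],
              [2.0, 5.0, 3.0],
              [2.0, 3.0, 6.0]])
L = cholesky(A)
```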
38 Cholesky factorization. Notes on Cholesky factorization: For an SPD matrix A, Cholesky factorization is numerically stable and does not require any pivoting. Operation count: ~(1/3)n^3 operations in total, i.e. about half as many as Gaussian elimination. We only need to store L, hence Cholesky uses less memory than LU.
39 Sparse Matrices
40 Sparse Matrices. In applications, we often encounter sparse matrices. A prime example is the discretization of partial differential equations (covered in the next section). "Sparse matrix" is not precisely defined; roughly speaking, it is a matrix that is mostly zeros. From a computational point of view, it is advantageous to store only the non-zero entries. The set of non-zero entries of a sparse matrix is referred to as its sparsity pattern.
41 Sparse Matrices. For example, the matrix

A = \begin{bmatrix}
2 & 2 & 2 & 2 & 2 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1
\end{bmatrix}

is stored in sparse format as the list of its non-zero entries:

A_sparse =
(1,1) 2
(1,2) 2
(2,2) 1
(1,3) 2
(3,3) 1
(1,4) 2
(4,4) 1
(1,5) 2
(5,5) 1
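In Python, scipy.sparse provides analogous compressed storage; a sketch with the same small mostly-zero matrix:

```python
import numpy as np
from scipy import sparse

A = np.array([[2.0, 2.0, 2.0, 2.0, 2.0],
              [0.0, 1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 1.0]])

A_sparse = sparse.csr_matrix(A)   # stores only the 9 non-zero entries
```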
42 Sparse Matrices. From a mathematical point of view, sparse matrices are no different from dense matrices. But from a scientific computing perspective, sparse matrices require different data structures and algorithms for computational efficiency. E.g., we can apply LU or Cholesky to a sparse A, but new non-zeros (i.e. outside the sparsity pattern of A) are introduced in the factors. These new non-zero entries are called fill-in; many methods exist for reducing fill-in by permuting the rows and columns of A.
43 QR Factorization
44 QR Factorization. A square matrix Q ∈ R^{n×n} is called orthogonal if its columns and rows are orthonormal vectors. Equivalently, Q^T Q = Q Q^T = I. Orthogonal matrices preserve the Euclidean norm of a vector:

‖Qv‖_2^2 = v^T Q^T Q v = v^T v = ‖v‖_2^2

Hence, geometrically, we picture orthogonal matrices as reflection or rotation operators. Orthogonal matrices are very important in scientific computing: norm preservation implies no amplification of numerical error!
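The norm-preservation property is easy to check numerically; a sketch with a 2×2 rotation matrix:

```python
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation, hence orthogonal

v = np.array([3.0, 4.0])
# ||Qv||_2 equals ||v||_2, since Q^T Q = I
```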
45 QR Factorization. A matrix A ∈ R^{m×n}, m ≥ n, can be factorized into A = QR, where Q ∈ R^{m×m} is orthogonal and

R = \begin{bmatrix} \hat R \\ 0 \end{bmatrix} ∈ R^{m×n}

with \hat R ∈ R^{n×n} upper triangular. QR is very good for solving overdetermined linear least-squares problems, Ax ≈ b. (QR can also be used to solve a square system Ax = b, but it requires about twice as many operations as Gaussian elimination, hence it is not the standard choice.)
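A sketch of solving a least-squares problem via the (reduced) QR factorization in NumPy, checked against np.linalg.lstsq:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 3))   # m > n: overdetermined system Ax ~ b
b = rng.standard_normal(6)

Q, R = np.linalg.qr(A)            # reduced QR: Q is 6x3, R is 3x3 upper triangular
x = np.linalg.solve(R, Q.T @ b)   # solve R x = Q^T b
```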
Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic
More informationV C V L T I 0 C V B 1 V T 0 I. l nk
Multifrontal Method Kailai Xu September 16, 2017 Main observation. Consider the LDL T decomposition of a SPD matrix [ ] [ ] [ ] [ ] B V T L 0 I 0 L T L A = = 1 V T V C V L T I 0 C V B 1 V T, 0 I where
More informationAMS 147 Computational Methods and Applications Lecture 17 Copyright by Hongyun Wang, UCSC
Lecture 17 Copyright by Hongyun Wang, UCSC Recap: Solving linear system A x = b Suppose we are given the decomposition, A = L U. We solve (LU) x = b in 2 steps: *) Solve L y = b using the forward substitution
More informationNext topics: Solving systems of linear equations
Next topics: Solving systems of linear equations 1 Gaussian elimination (today) 2 Gaussian elimination with partial pivoting (Week 9) 3 The method of LU-decomposition (Week 10) 4 Iterative techniques:
More informationSolution of Linear Equations
Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass
More informationMODULE 7. where A is an m n real (or complex) matrix. 2) Let K(t, s) be a function of two variables which is continuous on the square [0, 1] [0, 1].
Topics: Linear operators MODULE 7 We are going to discuss functions = mappings = transformations = operators from one vector space V 1 into another vector space V 2. However, we shall restrict our sights
More information5. Direct Methods for Solving Systems of Linear Equations. They are all over the place...
5 Direct Methods for Solving Systems of Linear Equations They are all over the place Miriam Mehl: 5 Direct Methods for Solving Systems of Linear Equations They are all over the place, December 13, 2012
More informationLecture 9. Errors in solving Linear Systems. J. Chaudhry (Zeb) Department of Mathematics and Statistics University of New Mexico
Lecture 9 Errors in solving Linear Systems J. Chaudhry (Zeb) Department of Mathematics and Statistics University of New Mexico J. Chaudhry (Zeb) (UNM) Math/CS 375 1 / 23 What we ll do: Norms and condition
More informationSolving linear systems (6 lectures)
Chapter 2 Solving linear systems (6 lectures) 2.1 Solving linear systems: LU factorization (1 lectures) Reference: [Trefethen, Bau III] Lecture 20, 21 How do you solve Ax = b? (2.1.1) In numerical linear
More informationChapter 2 - Linear Equations
Chapter 2 - Linear Equations 2. Solving Linear Equations One of the most common problems in scientific computing is the solution of linear equations. It is a problem in its own right, but it also occurs
More informationChapter 2. Solving Systems of Equations. 2.1 Gaussian elimination
Chapter 2 Solving Systems of Equations A large number of real life applications which are resolved through mathematical modeling will end up taking the form of the following very simple looking matrix
More information7. LU factorization. factor-solve method. LU factorization. solving Ax = b with A nonsingular. the inverse of a nonsingular matrix
EE507 - Computational Techniques for EE 7. LU factorization Jitkomut Songsiri factor-solve method LU factorization solving Ax = b with A nonsingular the inverse of a nonsingular matrix LU factorization
More informationLinear Algebraic Equations
Linear Algebraic Equations 1 Fundamentals Consider the set of linear algebraic equations n a ij x i b i represented by Ax b j with [A b ] [A b] and (1a) r(a) rank of A (1b) Then Axb has a solution iff
More informationMATH 3511 Lecture 1. Solving Linear Systems 1
MATH 3511 Lecture 1 Solving Linear Systems 1 Dmitriy Leykekhman Spring 2012 Goals Review of basic linear algebra Solution of simple linear systems Gaussian elimination D Leykekhman - MATH 3511 Introduction
More informationLecture 7. Gaussian Elimination with Pivoting. David Semeraro. University of Illinois at Urbana-Champaign. February 11, 2014
Lecture 7 Gaussian Elimination with Pivoting David Semeraro University of Illinois at Urbana-Champaign February 11, 2014 David Semeraro (NCSA) CS 357 February 11, 2014 1 / 41 Naive Gaussian Elimination
More informationCOURSE Numerical methods for solving linear systems. Practical solving of many problems eventually leads to solving linear systems.
COURSE 9 4 Numerical methods for solving linear systems Practical solving of many problems eventually leads to solving linear systems Classification of the methods: - direct methods - with low number of
More informationReview of Basic Concepts in Linear Algebra
Review of Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University September 7, 2017 Math 565 Linear Algebra Review September 7, 2017 1 / 40 Numerical Linear Algebra
More informationNumerical Linear Algebra
Numerical Linear Algebra Carlos Hurtado Department of Economics University of Illinois at Urbana-Champaign hrtdmrt2@illinois.edu Sep 19th, 2017 C. Hurtado (UIUC - Economics) Numerical Methods On the Agenda
More informationLecture 4: Linear Algebra 1
Lecture 4: Linear Algebra 1 Sourendu Gupta TIFR Graduate School Computational Physics 1 February 12, 2010 c : Sourendu Gupta (TIFR) Lecture 4: Linear Algebra 1 CP 1 1 / 26 Outline 1 Linear problems Motivation
More informationThis can be accomplished by left matrix multiplication as follows: I
1 Numerical Linear Algebra 11 The LU Factorization Recall from linear algebra that Gaussian elimination is a method for solving linear systems of the form Ax = b, where A R m n and bran(a) In this method
More informationNumerical Methods I: Numerical linear algebra
1/3 Numerical Methods I: Numerical linear algebra Georg Stadler Courant Institute, NYU stadler@cimsnyuedu September 1, 017 /3 We study the solution of linear systems of the form Ax = b with A R n n, x,
More informationLinear Analysis Lecture 16
Linear Analysis Lecture 16 The QR Factorization Recall the Gram-Schmidt orthogonalization process. Let V be an inner product space, and suppose a 1,..., a n V are linearly independent. Define q 1,...,
More informationReview of matrices. Let m, n IN. A rectangle of numbers written like A =
Review of matrices Let m, n IN. A rectangle of numbers written like a 11 a 12... a 1n a 21 a 22... a 2n A =...... a m1 a m2... a mn where each a ij IR is called a matrix with m rows and n columns or an
More informationSOLVING LINEAR SYSTEMS
SOLVING LINEAR SYSTEMS We want to solve the linear system a, x + + a,n x n = b a n, x + + a n,n x n = b n This will be done by the method used in beginning algebra, by successively eliminating unknowns
More informationA Review of Matrix Analysis
Matrix Notation Part Matrix Operations Matrices are simply rectangular arrays of quantities Each quantity in the array is called an element of the matrix and an element can be either a numerical value
More informationMath 471 (Numerical methods) Chapter 3 (second half). System of equations
Math 47 (Numerical methods) Chapter 3 (second half). System of equations Overlap 3.5 3.8 of Bradie 3.5 LU factorization w/o pivoting. Motivation: ( ) A I Gaussian Elimination (U L ) where U is upper triangular
More informationPOLI270 - Linear Algebra
POLI7 - Linear Algebra Septemer 8th Basics a x + a x +... + a n x n b () is the linear form where a, b are parameters and x n are variables. For a given equation such as x +x you only need a variable and
More information4.2 Floating-Point Numbers
101 Approximation 4.2 Floating-Point Numbers 4.2 Floating-Point Numbers The number 3.1416 in scientific notation is 0.31416 10 1 or (as computer output) -0.31416E01..31416 10 1 exponent sign mantissa base
More information