Direct Methods for Solving Linear Systems. Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le
Overview

General Linear Systems: Gaussian Elimination; Triangular Systems; The LU Factorization; Pivoting.
Special Linear Systems: Strictly Diagonally Dominant Matrices; The LDM^T and LDL^T Factorizations; Positive Definite Systems; Tridiagonal Systems.
Linear Systems of Equations

For a linear system of equations

    a_{1,1} x_1 + a_{1,2} x_2 + ... + a_{1,n} x_n = b_1,
    a_{2,1} x_1 + a_{2,2} x_2 + ... + a_{2,n} x_n = b_2,
      ...
    a_{n,1} x_1 + a_{n,2} x_2 + ... + a_{n,n} x_n = b_n;

or equivalently, in matrix/vector notation, (A)_{n x n} (x)_{n x 1} = (b)_{n x 1}:

    [ a_{1,1}  a_{1,2}  ...  a_{1,n} ] [ x_1 ]   [ b_1 ]
    [ a_{2,1}  a_{2,2}  ...  a_{2,n} ] [ x_2 ] = [ b_2 ]      (1)
    [   ...                          ] [ ... ]   [ ... ]
    [ a_{n,1}  a_{n,2}  ...  a_{n,n} ] [ x_n ]   [ b_n ]

find x = [x_1, x_2, ..., x_n]^T such that the relation Ax = b holds.
Gaussian Elimination

A process of reducing the given linear system to a new linear system in which the unknowns x_i are systematically eliminated. The reduction is done via elementary row operations. It may be necessary to reorder the equations to accomplish this, i.e., to use equation pivoting.
Elementary Row Operations

Type 1: interchange two rows of a matrix: (i) <-> (j);
Type 2: replace a row by the same row multiplied by a nonzero constant: (i) <- lambda (i), lambda in R \ {0};
Type 3: replace a row by the same row plus a constant multiple of another row: (j) <- (j) + lambda (i), lambda in R.
Gaussian Elimination: an Example

Starting from the augmented matrix, apply

    (2) <- (2) - (a_{2,1}/a_{1,1})(1),
    (3) <- (3) - (a_{3,1}/a_{1,1})(1),

and then

    (3) <- (3) - (a_{3,2}/a_{2,2})(2).

Back substitution on the resulting triangular system gives, in turn,

    x_3 = 3;   x_2 = 2;   and from x_1 + 4 x_2 + 7 x_3 = 30,   x_1 = 30 - 4(2) - 7(3) = 1.
Triangular Systems: Forward Substitution

Consider the following 2-by-2 lower triangular system:

    [ l_{1,1}    0    ] [ x_1 ]   [ b_1 ]
    [ l_{2,1} l_{2,2} ] [ x_2 ] = [ b_2 ]

If l_{1,1} l_{2,2} != 0, then x_1 = b_1 / l_{1,1} and x_2 = (b_2 - l_{2,1} x_1) / l_{2,2}.

The general procedure is obtained by solving the ith equation in Lx = b for x_i:

    x_i = ( b_i - sum_{j=1}^{i-1} l_{i,j} x_j ) / l_{i,i}

Flop count: sum_{i=2}^{n} [ (i-1) mul + (i-2) add + 1 sub + 1 div ] = n^2 - 1; together with the division x_1 = b_1/l_{1,1}, the total is n^2 flops.
Triangular Systems: Back Substitution

Consider the following 2-by-2 upper triangular system:

    [ u_{1,1} u_{1,2} ] [ x_1 ]   [ b_1 ]
    [    0    u_{2,2} ] [ x_2 ] = [ b_2 ]

If u_{1,1} u_{2,2} != 0, then x_2 = b_2 / u_{2,2} and x_1 = (b_1 - u_{1,2} x_2) / u_{1,1}.

The general procedure is obtained by solving the ith equation in Ux = b for x_i:

    x_i = ( b_i - sum_{j=i+1}^{n} u_{i,j} x_j ) / u_{i,i}

Flop count: n^2 flops.
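The two substitution procedures can be sketched in a few lines of Python (a minimal sketch, not from the slides; the function names are mine, and both routines assume nonzero diagonal entries):

```python
def forward_substitution(L, b):
    """Solve Lx = b for a lower triangular L with nonzero diagonal."""
    n = len(b)
    x = [0.0] * n
    for i in range(n):
        s = sum(L[i][j] * x[j] for j in range(i))   # sum_{j<i} l_{i,j} x_j
        x[i] = (b[i] - s) / L[i][i]
    return x

def back_substitution(U, b):
    """Solve Ux = b for an upper triangular U with nonzero diagonal."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))   # sum_{j>i} u_{i,j} x_j
        x[i] = (b[i] - s) / U[i][i]
    return x
```

Each routine touches every entry of the triangle exactly once, which is where the O(n^2) flop count comes from.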
The Algebra of Triangular Matrices

Definition: A unit triangular matrix is a triangular matrix with ones on the diagonal.

Properties:
- The inverse of an upper (lower) triangular matrix is upper (lower) triangular;
- The product of two upper (lower) triangular matrices is upper (lower) triangular;
- The inverse of a unit upper (lower) triangular matrix is unit upper (lower) triangular;
- The product of two unit upper (lower) triangular matrices is unit upper (lower) triangular.
The LU Factorization

1. Compute a unit lower triangular L and an upper triangular U such that A = LU;
2. Solve Lz = b (forward substitution);
3. Solve Ux = z (back substitution).

Example: For a factorization A = LU and b = [1, 4]^T, solving Lz = b yields z = [1, 2]^T, and solving Ux = z yields x = [13/9, 2/3]^T.
LU Factorization: an Example

Applying

    (2) <- (2) - (a_{2,1}/a_{1,1})(1),
    (3) <- (3) - (a_{3,1}/a_{1,1})(1)

to A yields A_1. Note that M_1 A = A_1, where M_1 is the unit lower triangular matrix

    M_1 = [        1          0   0 ]
          [ -a_{2,1}/a_{1,1}  1   0 ]
          [ -a_{3,1}/a_{1,1}  0   1 ]

Applying (3) <- (3) - (a_{3,2}/a_{2,2})(2) to A_1 yields A_2. Note that M_2 A_1 = A_2, where M_2 is the unit lower triangular matrix

    M_2 = [ 1         0          0 ]
          [ 0         1          0 ]
          [ 0  -a_{3,2}/a_{2,2}  1 ]

Hence M_2 M_1 A = A_2, or equivalently,

    A = M_1^{-1} M_2^{-1} A_2 = L U,   with L = M_1^{-1} M_2^{-1} and U = A_2.

Also,

    M_1^{-1} = [       1          0   0 ]      M_2^{-1} = [ 1        0          0 ]
               [ a_{2,1}/a_{1,1}  1   0 ],                [ 0        1          0 ],
               [ a_{3,1}/a_{1,1}  0   1 ]                 [ 0  a_{3,2}/a_{2,2}  1 ]

    L = M_1^{-1} M_2^{-1} = [       1                 0         0 ]
                            [ a_{2,1}/a_{1,1}         1         0 ]
                            [ a_{3,1}/a_{1,1}  a_{3,2}/a_{2,2}  1 ]
Gauss Transformation

Suppose x in R^n with x_k != 0. If

    tau^T = [0, ..., 0, tau_{k+1}, ..., tau_n]   (k leading zeros),   tau_i = x_i / x_k  for k+1 <= i <= n,

and we define M_k = I - tau e_k^T, where e_k^T = [0, ..., 0, 1, 0, ..., 0] (the 1 in position k), then

    M_k x = [x_1, ..., x_k, 0, ..., 0]^T.

M_k is called a Gauss transformation; tau_{k+1}, ..., tau_n are the multipliers.
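As a concrete illustration (my own sketch, not from the slides), M_k = I - tau e_k^T can be built and applied directly; k is 1-based as on the slide:

```python
def gauss_transform(x, k):
    """Return M_k = I - tau * e_k^T (k is 1-based), which zeros x below
    position k. Assumes x[k-1] != 0."""
    n = len(x)
    M = [[float(i == j) for j in range(n)] for i in range(n)]
    for i in range(k, n):                 # 0-based rows k..n-1 sit below the pivot
        M[i][k - 1] = -x[i] / x[k - 1]    # entry -tau_i in column k
    return M

def matvec(M, v):
    """Plain matrix-vector product."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]
```

Applying `gauss_transform(x, 1)` to x zeros every entry of x below the first one, exactly as in the display above.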
Upper Triangularizing

Assume that A in R^{n x n}. Gauss transformations M_1, ..., M_{n-1} can usually be found such that M_{n-1} ... M_2 M_1 A = U is upper triangular. During the kth step:

- We are confronted with the matrix A^{(k-1)} = M_{k-1} ... M_1 A, which is upper triangular in columns 1 to k-1;
- The multipliers in M_k are based on a^{(k-1)}_{k+1,k}, ..., a^{(k-1)}_{n,k}. In particular, we need a^{(k-1)}_{k,k} != 0 to proceed.
The LU Factorization

Let M_1, ..., M_{n-1} be the Gauss transformations such that M_{n-1} ... M_1 A = U is upper triangular. If M_k = I - tau^{(k)} e_k^T, then M_k^{-1} = I + tau^{(k)} e_k^T. Hence A = LU, where L = M_1^{-1} ... M_{n-1}^{-1}. L is a unit lower triangular matrix since each M_k^{-1} is unit lower triangular (p. 9). Let tau^{(k)} be the vector of multipliers associated with M_k; then upon termination, A[k+1:n, k] holds the multipliers tau^{(k)}_{k+1}, ..., tau^{(k)}_n.
The LU Factorization: an Algorithm Description

Algorithm LUFactorization(A)
input: an n-by-n (square) matrix A
output: the LU factorization of A, provided it exists

    for i from 1 to n do
        for k from i + 1 to n do
            mult := a_{k,i} / a_{i,i};
            a_{k,i} := mult;
            for j from i + 1 to n do
                a_{k,j} := a_{k,j} - mult * a_{i,j};
            od;
        od;
    od;
    return A
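A direct Python transliteration of the algorithm above (my sketch; it overwrites A in place, storing the multipliers of L strictly below the diagonal and U on and above it, and assumes all pivots are nonzero):

```python
def lu_factorize(A):
    """In-place LU factorization without pivoting, mirroring the slide's
    algorithm. On return, A holds U on and above the diagonal and the
    multipliers of the unit lower triangular L strictly below it."""
    n = len(A)
    for i in range(n):
        for k in range(i + 1, n):
            mult = A[k][i] / A[i][i]
            A[k][i] = mult                      # store the multiplier l_{k,i}
            for j in range(i + 1, n):
                A[k][j] -= mult * A[i][j]
    return A
```

For A = [[2, 1], [4, 5]] the routine stores the multiplier 2 in position (2,1) and leaves U = [[2, 1], [0, 3]] above the diagonal.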
The LU Factorization: Flop Count

Definition: A flop is a floating-point operation. There is no standard agreement on this terminology; we shall adopt the MATLAB convention, which is to count the total number of operations. The flop count thus includes the total number of adds + multiplies + divides + subtracts.

For LU factorization:

    sum_{i=1}^{n-1} sum_{k=i+1}^{n} ( 1 + sum_{j=i+1}^{n} 2 ) = (2/3) n^3 - (1/2) n^2 - (1/6) n = O(n^3) flops.

Recall that forward substitution (p. 7) and back substitution (p. 8) each require O(n^2) flops.
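The closed form can be checked against a literal count of the operations in the loop nest (a quick sketch of mine, not from the slides):

```python
def lu_flop_count(n):
    """Count flops in the LU loop nest: one division per multiplier,
    plus one multiply and one subtract for each updated entry."""
    total = 0
    for i in range(1, n):               # pivot columns i = 1..n-1 (1-based)
        for k in range(i + 1, n + 1):   # rows below the pivot
            total += 1                  # the division a_{k,i}/a_{i,i}
            total += 2 * (n - i)        # updates of a_{k,j}, j = i+1..n
    return total

def lu_flop_formula(n):
    """(2/3)n^3 - (1/2)n^2 - (1/6)n, kept in integers as (4n^3 - 3n^2 - n)/6."""
    return (4 * n**3 - 3 * n**2 - n) // 6
```

The two agree for every n, confirming the closed form on the slide.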
The LU Factorization: Sufficient Conditions

Definition: A leading principal submatrix of a matrix A is a matrix of the form

    A_k = [ a_{1,1}  a_{1,2}  ...  a_{1,k} ]
          [ a_{2,1}  a_{2,2}  ...  a_{2,k} ]
          [   ...                          ]
          [ a_{k,1}  a_{k,2}  ...  a_{k,k} ]

for some 1 <= k <= n.

Theorem: A in R^{n x n} has an LU factorization if det(A_k) != 0 for 1 <= k <= n-1, where the A_k are the leading principal submatrices of A. If the LU factorization exists and A is nonsingular, then the LU factorization is unique and det(A) = u_{1,1} ... u_{n,n}.
Partial Pivoting

Row interchanges, i.e., elementary row operations of type 1 (see p. 5), are needed when one of the pivot elements a^{(k)}_{k,k} = 0. Row interchanges are often necessary even when the pivot is nonzero:

- If |a^{(k)}_{k,k}| is much smaller than |a^{(k)}_{j,k}| for some j (k+1 <= j <= n), the multiplier

      m_{j,k} = a^{(k)}_{j,k} / a^{(k)}_{k,k}

  will be very large. Roundoff error produced in the computation of a^{(k)}_{k,l} will be multiplied by the large factor m_{j,k} when computing a^{(k+1)}_{j,l};

- Roundoff error can also be dramatically increased in the back substitution step

      x_k = ( a^{(k)}_{k,n+1} - sum_{j=k+1}^{n} a^{(k)}_{k,j} x_j ) / a^{(k)}_{k,k}

  when the pivot a^{(k)}_{k,k} is small.
Example

For the linear system

    E_1: 0.003000 x_1 + 59.14 x_2 = 59.17,
    E_2: 5.291 x_1 - 6.130 x_2 = 46.78,

the exact solution is (x_1)_E = 10.00, (x_2)_E = 1.000, while Gaussian elimination with four-digit rounding arithmetic yields (x_1)_A = -10.00, (x_2)_A = 1.001.

The pivot a^{(1)}_{1,1} = 0.003000 is small, so m_{2,1} = 5.291/0.003000 is rounded to 1764. Applying (E_2) <- (E_2 - m_{2,1} E_1):

    E_1: 0.003000 x_1 + 59.14 x_2 ~ 59.17,
    E_2: -104300 x_2 ~ -104400.

Back substitution gives x_2 ~ 1.001. However, since the pivot a_{1,1} is small,

    x_1 ~ (59.17 - (59.14)(1.001)) / 0.003000 = -10.00

contains the small error 0.001 multiplied by 59.14/0.003000.
Pivoting Strategy

Select the element of largest magnitude at or below the pivot a^{(k)}_{k,k}: if

    |a^{(k)}_{k',k}| = max_{j=k,...,n} |a^{(k)}_{j,k}|,

then swap row k with row k', and use a^{(k)}_{k',k} to form the multipliers.

Example: For the linear system {E_1, E_2} on p. 21, since |a_{1,1}| < |a_{2,1}|, rows 1 and 2 are swapped. This leads to the correct solution x_1 = 10.00, x_2 = 1.000.
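The effect of the small pivot can be reproduced by simulating four-significant-digit rounding arithmetic on the 2-by-2 system above (my own sketch; `fl` and `solve2x2` are illustrative names, and `fl` is only a model of t-digit decimal rounding):

```python
import math

def fl(x, t=4):
    """Round x to t significant decimal digits (simulates t-digit rounding)."""
    if x == 0:
        return 0.0
    return round(x, t - 1 - math.floor(math.log10(abs(x))))

def solve2x2(a11, a12, b1, a21, a22, b2, pivot=False):
    """Gaussian elimination on a 2x2 system in simulated 4-digit arithmetic."""
    if pivot and abs(a11) < abs(a21):               # partial pivoting: swap rows
        a11, a12, b1, a21, a22, b2 = a21, a22, b2, a11, a12, b1
    m = fl(a21 / a11)                               # multiplier m_{2,1}
    a22 = fl(a22 - fl(m * a12))
    b2 = fl(b2 - fl(m * b1))
    x2 = fl(b2 / a22)                               # back substitution
    x1 = fl((b1 - fl(a12 * x2)) / a11)
    return x1, x2
```

Without pivoting the tiny pivot 0.003000 produces x_1 = -10.00; with the row swap the computed solution is exact to four digits.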
Permutation Matrices

A permutation matrix is just the identity matrix with its rows re-ordered. If P is a permutation and A is a matrix, then PA is a row-permuted version of A, and AP is a column-permuted version of A; e.g.,

    P = [ 0 0 0 1 ]         [ a_{4,1}  a_{4,2}  a_{4,3}  a_{4,4} ]
        [ 1 0 0 0 ],   PA = [ a_{1,1}  a_{1,2}  a_{1,3}  a_{1,4} ]
        [ 0 0 1 0 ]         [ a_{3,1}  a_{3,2}  a_{3,3}  a_{3,4} ]
        [ 0 1 0 0 ]         [ a_{2,1}  a_{2,2}  a_{2,3}  a_{2,4} ]

If P is a permutation, then P^{-1} = P^T (P is orthogonal).
An interchange permutation is obtained by merely swapping two rows in the identity. If P = E_n ... E_1 and each E_k is the identity with rows k and p(k) interchanged, then the vector p(1:n) is a useful vector encoding of P; e.g., [4, 4, 3, 4] is the vector encoding of P on p. 23.

No floating-point arithmetic is involved in a permutation operation. However, permutation matrix operations often involve the irregular movement of data, and can represent a significant computational overhead.
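The vector encoding can be applied without ever forming a matrix (a small sketch of my own; the entries of p are 1-based, as on the slides):

```python
def apply_encoding(p, rows):
    """Apply the interchange encoding p to a list of rows:
    E_k swaps rows k and p[k], applied for k = 1, ..., n in order."""
    rows = list(rows)
    for k, pk in enumerate(p):          # k is 0-based here, p entries 1-based
        rows[k], rows[pk - 1] = rows[pk - 1], rows[k]
    return rows
```

Applying the encoding [4, 4, 3, 4] reproduces the row order of PA shown on p. 23.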
Partial Pivoting: the Basic Idea

Step 1: (1) <-> (3), p[1] = 3. Form the interchange E_1 (the identity with rows 1 and 3 swapped) and the Gauss transformation M_1, and compute M_1 E_1 A.

Step 2: (2) <-> (3), p[2] = 3. Form E_2 (rows 2 and 3 swapped) and M_2, and compute M_2 E_2 M_1 E_1 A.

In general, upon completion we emerge with

    M_{n-1} E_{n-1} ... M_1 E_1 A = U,

an upper triangular matrix. To solve the linear system Ax = b, we
- compute y = M_{n-1} E_{n-1} ... M_1 E_1 b;
- solve the upper triangular system Ux = y.

All the information necessary to do this is contained in the array A and the vector p.

Cost: O(n^2) comparisons associated with the search for the pivots; the overall algorithm involves 2n^3/3 flops.
Scaled Partial Pivoting

Scale the coefficients before deciding on row exchanges. The scale factor for row i is

    S_i = max_{j=1,2,...,n} |a_{i,j}|.

Let j (k+1 <= j <= n) be such that

    |a^{(k)}_{j,k}| / S_j = max_{i=k+1,...,n} |a^{(k)}_{i,k}| / S_i.

If |a^{(k)}_{j,k}| / S_j > |a^{(k)}_{k,k}| / S_k, then rows j and k are exchanged.
Example

For

    A = [ 2.11  -4.21   0.921 ]
        [ 4.01  10.2   -1.12  ]
        [ 1.09   0.987  0.832 ],

S_1 = 4.21, S_2 = 10.2, S_3 = 1.09. Since

    |a_{1,1}|/S_1 = 0.501,   |a_{2,1}|/S_2 = 0.393,   |a_{3,1}|/S_3 = 1,

rows 1 and 3 are exchanged: (1) <-> (3). Then compute the multipliers m_{2,1}, m_{3,1}, ...

Cost: an additional O(n^2) comparisons and O(n^2) flops (divisions).
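The selection rule is easy to express in code (a sketch of mine; the matrix entries below follow my reading of the example above and should be treated as illustrative):

```python
def scale_factors(A):
    """S_i = max_j |a_{i,j}| for each row i."""
    return [max(abs(v) for v in row) for row in A]

def scaled_pivot_row(A, S, k):
    """0-based index of the row maximizing |a_{i,k}|/S_i over i = k..n-1."""
    return max(range(k, len(A)), key=lambda i: abs(A[i][k]) / S[i])

A = [[2.11, -4.21, 0.921],
     [4.01, 10.2, -1.12],
     [1.09, 0.987, 0.832]]
S = scale_factors(A)
```

Here `scaled_pivot_row(A, S, 0)` selects the third row, matching the exchange (1) <-> (3) above.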
Complete Pivoting

Search all the entries a_{i,j}, k <= i, j <= n, to find the entry with the largest magnitude. Both row and column interchanges are performed to bring this entry into the pivot position. Complete pivoting is only recommended for systems where accuracy is essential, since it requires an additional O(n^3) comparisons.
An Application: Multiple Right-Hand Sides

Suppose A is nonsingular and n-by-n, and that B is n-by-p. Consider the problem of finding X (n-by-p) so that AX = B. If X = [x_1, ..., x_p] and B = [b_1, ..., b_p] are column partitions, then:

    Compute PA = LU;
    for k from 1 to p do
        Solve Ly = P b_k;
        Solve U x_k = y;
    od;

Note that A is factored just once. If B = I_n, then we emerge with a computed A^{-1}.
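A sketch of the multiple right-hand-side idea in Python (my own code; for brevity it factors without pivoting and assumes nonzero pivots, whereas the slide uses PA = LU):

```python
def solve_many(A, bs):
    """Factor A once (no pivoting, nonzero pivots assumed), then solve
    A x = b for each right-hand side b in bs, as in AX = B column by column."""
    n = len(A)
    # LU factorization: U overwrites a copy of A, L holds the multipliers.
    U = [row[:] for row in A]
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    for i in range(n):
        for k in range(i + 1, n):
            m = U[k][i] / U[i][i]
            L[k][i] = m
            for j in range(i, n):
                U[k][j] -= m * U[i][j]
    xs = []
    for b in bs:
        y = [0.0] * n
        for i in range(n):                       # forward substitution: Ly = b
            y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
        x = [0.0] * n
        for i in range(n - 1, -1, -1):           # back substitution: Ux = y
            x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
        xs.append(x)
    return xs
```

Passing the columns of I_n as right-hand sides yields the columns of A^{-1}, exactly as noted above.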
Strictly Diagonally Dominant Matrices

Definition: An n x n matrix A is strictly diagonally dominant (sdd) if

    |a_{i,i}| > sum_{j=1, j != i}^{n} |a_{i,j}|

holds for each i = 1, 2, ..., n.

Theorem: An sdd matrix is nonsingular.

Proof: Suppose there is x != 0 such that Ax = 0. Then there is 1 <= k <= n such that |x_k| = max_{1 <= j <= n} |x_j| > 0. Since x is a solution to Ax = 0,

    a_{k,k} x_k + sum_{j=1, j != k}^{n} a_{k,j} x_j = 0.

Hence

    |a_{k,k}| <= sum_{j=1, j != k}^{n} |a_{k,j}| |x_j| / |x_k| <= sum_{j=1, j != k}^{n} |a_{k,j}|,

a contradiction since A is sdd. Hence x = 0 is the only solution to Ax = 0; equivalently, A is nonsingular.
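The definition translates directly into a check (a one-liner sketch of mine):

```python
def is_sdd(A):
    """True if A is strictly diagonally dominant by rows:
    |a_{i,i}| > sum_{j != i} |a_{i,j}| for every row i."""
    return all(
        abs(row[i]) > sum(abs(v) for j, v in enumerate(row) if j != i)
        for i, row in enumerate(A)
    )
```

Note the inequality is strict: a row with |a_{i,i}| equal to the off-diagonal sum is only weakly dominant and does not qualify.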
Theorem: Let A be an sdd matrix. Then Gaussian elimination can be performed on any linear system of the form Ax = b to obtain its unique solution without row or column interchanges, and the computations are stable with respect to the growth of roundoff errors.

Proof sketch: Each elimination step is a type-3 row operation, and strict diagonal dominance is preserved at every stage:

    A^{(1)} (sdd)  --type 3-->  A^{(2)} (sdd)  --type 3-->  ...  -->  A^{(n)} (sdd).
LDM^T and LDL^T Factorizations

Theorem: If all the leading principal submatrices of A in R^{n x n} are nonsingular, then there exist unique unit lower triangular matrices L and M and a unique diagonal matrix D = diag(d_1, ..., d_n) such that A = LDM^T.

Proof: By the theorem on p. 19, A = LU exists. Set D = diag(d_1, ..., d_n) with d_i = u_{i,i} for 1 <= i <= n. Since D is nonsingular and M^T = D^{-1} U is unit upper triangular,

    A = LU = LD(D^{-1} U) = LDM^T.

Uniqueness follows from the uniqueness of the LU factorization.

Theorem: If A = LDM^T is the LDM^T factorization of a nonsingular symmetric matrix A, then L = M.
Example: For a matrix A with LU factorization A = LU, set D = diag(u_{1,1}, ..., u_{n,n}). Then the matrix M^T in the LDM^T factorization of A is M^T = D^{-1} U. If A is symmetric, then M^T = L^T, and we have the LDL^T factorization of A.

Remark: It is possible to compute the matrices L, D, and M directly, instead of computing them via the LU factorization.
Positive Definite Matrices

Definition: A matrix A is positive definite if x^T A x > 0 for every nonzero n-dimensional column vector x.

Theorem: If A is an n x n positive definite matrix, then
(a) A is nonsingular;
(b) a_{i,i} > 0 for each 1 <= i <= n;
(c) max_{1 <= k,j <= n} |a_{k,j}| <= max_{1 <= i <= n} |a_{i,i}|;
(d) (a_{i,j})^2 < a_{i,i} a_{j,j} for each i != j.

Theorem: A symmetric matrix A is positive definite if and only if each of its leading principal submatrices has a positive determinant.
Example: For

    A = [  2 -1  0 ]
        [ -1  2 -1 ]
        [  0 -1  2 ],

det A_1 = det[2] = 2 > 0, det A_2 = 3 > 0, and det A_3 = det A = 4 > 0. Since A is also symmetric, A is positive definite.

Theorem: A symmetric matrix A is positive definite if and only if Gaussian elimination without row exchanges can be performed on the linear system Ax = b with all the pivot elements positive. Moreover, in this case, the computations are stable with respect to the growth of roundoff errors.
Theorem (Cholesky Factorization): If A in R^{n x n} is positive definite, then there exists a unique lower triangular G in R^{n x n} with positive diagonal entries such that A = GG^T.

Proof: By the second theorem on p. 33, there exist a unit lower triangular L and a diagonal D = diag(d_1, ..., d_n) such that A = LDL^T. Since the d_k are positive, the matrix G = L diag(sqrt(d_1), ..., sqrt(d_n)) is real lower triangular with positive diagonal entries, and it satisfies A = GG^T. Uniqueness follows from the uniqueness of the LDL^T factorization.
Example: For the positive definite matrix A on p. 37, the matrices L and D in the LDL^T factorization of A are

    L = [   1    0    0 ]        D = [ 2   0    0  ]
        [ -1/2   1    0 ],           [ 0  3/2   0  ]
        [   0  -2/3   1 ]            [ 0   0   4/3 ]

Hence, the matrix G = L diag(sqrt(d_1), sqrt(d_2), sqrt(d_3)) in the GG^T factorization of A is

    G = [      sqrt(2)           0             0       ]
        [ -(1/2) sqrt(2)   (1/2) sqrt(6)       0       ]
        [        0        -(1/3) sqrt(6)  (2/3) sqrt(3) ]
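The whole chain, LDL^T followed by G = L diag(sqrt(d_i)), can be sketched and checked in code (my own sketch, assuming A is symmetric positive definite so that no pivoting is needed; the matrix entries follow my reading of the example and are illustrative):

```python
import math

def ldlt(A):
    """LDL^T factorization of a symmetric positive definite A (no pivoting)."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for j in range(n):
        d[j] = A[j][j] - sum(L[j][k] ** 2 * d[k] for k in range(j))
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] * d[k] for k in range(j))) / d[j]
    return L, d

def cholesky_from_ldlt(A):
    """G = L diag(sqrt(d_1), ..., sqrt(d_n)), so that A = G G^T."""
    L, d = ldlt(A)
    r = [math.sqrt(x) for x in d]
    n = len(A)
    return [[L[i][j] * r[j] for j in range(n)] for i in range(n)]
```

On the matrix above this reproduces d = (2, 3/2, 4/3), matching the diagonal of D.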
Tridiagonal Matrices

Matrices of the form

    A = [ a_{1,1}  a_{1,2}     0       ...        0     ]
        [ a_{2,1}  a_{2,2}  a_{2,3}                     ]
        [    0     a_{3,2}  a_{3,3}  a_{3,4}            ]
        [                     ...            a_{n-1,n}  ]
        [    0       ...       0   a_{n,n-1}  a_{n,n}   ]
Now suppose A can be factored into triangular matrices L and U. Suppose that the matrices can be found in the form

    L = [ l_{1,1}     0      ...     0    ]        U = [ 1  u_{1,2}            0     ]
        [ l_{2,1}  l_{2,2}                ],           [       1     ...             ]
        [             ...                 ]            [             ...   u_{n-1,n} ]
        [    0     ...  l_{n,n-1} l_{n,n} ]            [    0                  1     ]

The zero entries of A are automatically generated by the product LU.
Multiplying out A = LU, we also find the following conditions:

    a_{1,1} = l_{1,1};                                                      (2)
    a_{i,i-1} = l_{i,i-1},                    for each i = 2, 3, ..., n;    (3)
    a_{i,i} = l_{i,i-1} u_{i-1,i} + l_{i,i},  for each i = 2, 3, ..., n;    (4)
    a_{i,i+1} = l_{i,i} u_{i,i+1},            for each i = 1, 2, ..., n-1.  (5)

This system is straightforward to solve: (2) and (3) give l_{1,1} and the off-diagonal entries of L; (4) and (5) are then used alternately to obtain the remaining entries of L and U. This solution technique is often referred to as Crout factorization.

Cost: (5n - 4) multiplications/divisions and (3n - 3) additions/subtractions.
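Under the stated assumptions (a factorization with nonzero l_{i,i} exists, n >= 2), the Crout procedure plus the two substitutions might look like the following sketch of mine, where `a`, `b`, `c` hold the sub-, main, and super-diagonals:

```python
def crout_tridiagonal(a, b, c, d):
    """Solve a tridiagonal system via Crout factorization.
    a: sub-diagonal (len n-1), b: diagonal (len n), c: super-diagonal
    (len n-1), d: right-hand side. Mirrors equations (2)-(5)."""
    n = len(b)
    l = [0.0] * n          # diagonal of L; sub-diagonal of L equals a
    u = [0.0] * (n - 1)    # super-diagonal of U (unit diagonal)
    l[0] = b[0]            # (2)
    u[0] = c[0] / l[0]     # (5) for i = 1
    for i in range(1, n):
        l[i] = b[i] - a[i - 1] * u[i - 1]    # (4), using (3): l_{i,i-1} = a_{i,i-1}
        if i < n - 1:
            u[i] = c[i] / l[i]               # (5)
    z = [0.0] * n                            # forward substitution: Lz = d
    z[0] = d[0] / l[0]
    for i in range(1, n):
        z[i] = (d[i] - a[i - 1] * z[i - 1]) / l[i]
    x = z[:]                                 # back substitution: Ux = z
    for i in range(n - 2, -1, -1):
        x[i] -= u[i] * x[i + 1]
    return x
```

Each entry of the band is touched a constant number of times, matching the O(n) operation counts above.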
Footnotes:
1. Multiply Eq. E_i by lambda != 0: (lambda E_i) -> (E_i).
2. Multiply Eq. E_j by lambda and add to Eq. E_i: (E_i + lambda E_j) -> (E_i).
More informationProcess Model Formulation and Solution, 3E4
Process Model Formulation and Solution, 3E4 Section B: Linear Algebraic Equations Instructor: Kevin Dunn dunnkg@mcmasterca Department of Chemical Engineering Course notes: Dr Benoît Chachuat 06 October
More informationDEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular
form) Given: matrix C = (c i,j ) n,m i,j=1 ODE and num math: Linear algebra (N) [lectures] c phabala 2016 DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix
More informationPractical Linear Algebra: A Geometry Toolbox
Practical Linear Algebra: A Geometry Toolbox Third edition Chapter 12: Gauss for Linear Systems Gerald Farin & Dianne Hansford CRC Press, Taylor & Francis Group, An A K Peters Book www.farinhansford.com/books/pla
More informationNumerical Linear Algebra
Numerical Linear Algebra The two principal problems in linear algebra are: Linear system Given an n n matrix A and an n-vector b, determine x IR n such that A x = b Eigenvalue problem Given an n n matrix
More informationLinear Equations and Matrix
1/60 Chia-Ping Chen Professor Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Gaussian Elimination 2/60 Alpha Go Linear algebra begins with a system of linear
More informationMatrix Factorization and Analysis
Chapter 7 Matrix Factorization and Analysis Matrix factorizations are an important part of the practice and analysis of signal processing. They are at the heart of many signal-processing algorithms. Their
More informationLinear Systems of n equations for n unknowns
Linear Systems of n equations for n unknowns In many application problems we want to find n unknowns, and we have n linear equations Example: Find x,x,x such that the following three equations hold: x
More informationLectures on Linear Algebra for IT
Lectures on Linear Algebra for IT by Mgr. Tereza Kovářová, Ph.D. following content of lectures by Ing. Petr Beremlijski, Ph.D. Department of Applied Mathematics, VSB - TU Ostrava Czech Republic 2. Systems
More information5.1 Banded Storage. u = temperature. The five-point difference operator. uh (x, y + h) 2u h (x, y)+u h (x, y h) uh (x + h, y) 2u h (x, y)+u h (x h, y)
5.1 Banded Storage u = temperature u= u h temperature at gridpoints u h = 1 u= Laplace s equation u= h u = u h = grid size u=1 The five-point difference operator 1 u h =1 uh (x + h, y) 2u h (x, y)+u h
More informationLinear System of Equations
Linear System of Equations Linear systems are perhaps the most widely applied numerical procedures when real-world situation are to be simulated. Example: computing the forces in a TRUSS. F F 5. 77F F.
More informationSolving Linear Systems of Equations
November 6, 2013 Introduction The type of problems that we have to solve are: Solve the system: A x = B, where a 11 a 1N a 12 a 2N A =.. a 1N a NN x = x 1 x 2. x N B = b 1 b 2. b N To find A 1 (inverse
More informationComputational Methods. Systems of Linear Equations
Computational Methods Systems of Linear Equations Manfred Huber 2010 1 Systems of Equations Often a system model contains multiple variables (parameters) and contains multiple equations Multiple equations
More informationMATH2210 Notebook 2 Spring 2018
MATH2210 Notebook 2 Spring 2018 prepared by Professor Jenny Baglivo c Copyright 2009 2018 by Jenny A. Baglivo. All Rights Reserved. 2 MATH2210 Notebook 2 3 2.1 Matrices and Their Operations................................
More informationGaussian Elimination -(3.1) b 1. b 2., b. b n
Gaussian Elimination -() Consider solving a given system of n linear equations in n unknowns: (*) a x a x a n x n b where a ij and b i are constants and x i are unknowns Let a n x a n x a nn x n a a a
More informationGAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511)
GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511) D. ARAPURA Gaussian elimination is the go to method for all basic linear classes including this one. We go summarize the main ideas. 1.
More informationMath 240 Calculus III
The Calculus III Summer 2015, Session II Wednesday, July 8, 2015 Agenda 1. of the determinant 2. determinants 3. of determinants What is the determinant? Yesterday: Ax = b has a unique solution when A
More informationNumerical Linear Algebra
Chapter 3 Numerical Linear Algebra We review some techniques used to solve Ax = b where A is an n n matrix, and x and b are n 1 vectors (column vectors). We then review eigenvalues and eigenvectors and
More information5.6. PSEUDOINVERSES 101. A H w.
5.6. PSEUDOINVERSES 0 Corollary 5.6.4. If A is a matrix such that A H A is invertible, then the least-squares solution to Av = w is v = A H A ) A H w. The matrix A H A ) A H is the left inverse of A and
More informationII. Determinant Functions
Supplemental Materials for EE203001 Students II Determinant Functions Chung-Chin Lu Department of Electrical Engineering National Tsing Hua University May 22, 2003 1 Three Axioms for a Determinant Function
More informationExample: Current in an Electrical Circuit. Solving Linear Systems:Direct Methods. Linear Systems of Equations. Solving Linear Systems: Direct Methods
Example: Current in an Electrical Circuit Solving Linear Systems:Direct Methods A number of engineering problems or models can be formulated in terms of systems of equations Examples: Electrical Circuit
More informationMath 471 (Numerical methods) Chapter 3 (second half). System of equations
Math 47 (Numerical methods) Chapter 3 (second half). System of equations Overlap 3.5 3.8 of Bradie 3.5 LU factorization w/o pivoting. Motivation: ( ) A I Gaussian Elimination (U L ) where U is upper triangular
More informationLinear Systems and Matrices
Department of Mathematics The Chinese University of Hong Kong 1 System of m linear equations in n unknowns (linear system) a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.......
More informationNumerical Linear Algebra
Numerical Linear Algebra R. J. Renka Department of Computer Science & Engineering University of North Texas 02/03/2015 Notation and Terminology R n is the Euclidean n-dimensional linear space over the
More informationy b where U. matrix inverse A 1 ( L. 1 U 1. L 1 U 13 U 23 U 33 U 13 2 U 12 1
LU decomposition -- manual demonstration Instructor: Nam Sun Wang lu-manualmcd LU decomposition, where L is a lower-triangular matrix with as the diagonal elements and U is an upper-triangular matrix Just
More informationHani Mehrpouyan, California State University, Bakersfield. Signals and Systems
Hani Mehrpouyan, Department of Electrical and Computer Engineering, Lecture 26 (LU Factorization) May 30 th, 2013 The material in these lectures is partly taken from the books: Elementary Numerical Analysis,
More informationAlgebra C Numerical Linear Algebra Sample Exam Problems
Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric
More informationChapter 1 Matrices and Systems of Equations
Chapter 1 Matrices and Systems of Equations System of Linear Equations 1. A linear equation in n unknowns is an equation of the form n i=1 a i x i = b where a 1,..., a n, b R and x 1,..., x n are variables.
More informationLinear Algebra Linear Algebra : Matrix decompositions Monday, February 11th Math 365 Week #4
Linear Algebra Linear Algebra : Matrix decompositions Monday, February 11th Math Week # 1 Saturday, February 1, 1 Linear algebra Typical linear system of equations : x 1 x +x = x 1 +x +9x = 0 x 1 +x x
More informationLinear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02)
Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 206, v 202) Contents 2 Matrices and Systems of Linear Equations 2 Systems of Linear Equations 2 Elimination, Matrix Formulation
More information