Process Model Formulation and Solution, 3E4


1 Process Model Formulation and Solution, 3E4 Section B: Linear Algebraic Equations Instructor: Kevin Dunn Department of Chemical Engineering Course notes: Dr Benoît Chachuat 06 October

2 Why solve linear algebraic equations? Consider the modelling of an isothermal batch reactor in which a reversible reaction A ⇌ B takes place.
Assumptions: perfect mixing; no change in volume due to the reaction.
Modelling goal: predict the concentrations of species A and B in the reactor at steady state.

3 Motivation and topics Problem formulation. We consider linear algebraic equations of the general form:
a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
...
a_n1 x_1 + a_n2 x_2 + ... + a_nn x_n = b_n
The a's and b's are constant coefficients, and the number of equations, n, is the same as the number of unknowns. Equations that do not conform to the foregoing form are nonlinear!

4 Outline Mathematical background Gauss Elimination LU Decomposition and Matrix Inversion Gauss-Seidel Method 4

5 Matrix notation An m x n matrix A is a rectangular array of elements:
A = [a_ij], i = 1, ..., m, j = 1, ..., n
Row and column vectors are 1 x n and m x 1 matrices, respectively. A matrix is said to be square when m = n.
Special types of square matrices:
Diagonal matrix D: d_ij = 0 for i != j (entries d_11, ..., d_nn on the diagonal)
Upper triangular matrix U: u_ij = 0 for i > j
Lower triangular matrix L: l_ij = 0 for i < j

6 Matrix operations Addition/subtraction: only possible between two m x n matrices,
(A ± B)_ij = a_ij ± b_ij
Multiplication: only possible between an m x p and a p x n matrix,
(A B)_ij = sum_{k=1}^{p} a_ik b_kj
Exercise: for given matrices A and B, calculate the product A B. How about B A?

7 Matrix operations (cont'd) Inverse: A^-1 is called the inverse of the square matrix A if
A^-1 A = A A^-1 = I
Theorem: Invertible matrices can be characterized using the so-called determinant. A has a unique inverse A^-1 if and only if det A != 0 (non-singular).
Transpose: transform rows into columns and vice-versa,
(A^T)_ij = a_ji

8 Solution of linear algebraic equations Matrices provide a concise notation for linear algebraic equations:
A x = b, with A = [a_ij] (n x n) and b = [b_1, ..., b_n]^T
Theorem: The system A x = b has a unique solution x if and only if det A != 0.
Non-singular case: a unique solution exists. Singular case: no solution, or infinitely many solutions.

9 Gauss elimination: foreword First appears in Chapter 8 of the Chinese mathematical text The Nine Chapters on the Mathematical Art, written as early as 150 BC! Invented independently by C. F. Gauss in his book Theory of the Motion of Heavenly Bodies, published in 1809.
1 Principles and naive Gauss elimination
2 Pitfalls
3 Improved Gauss elimination: pivoting
4 Variant: Gauss-Jordan

10 The elimination of unknowns Consider the two linear equations:
a_11 x_1 + a_12 x_2 = b_1
a_21 x_1 + a_22 x_2 = b_2
Idea: combine the equations algebraically to eliminate one of the unknowns.
1 Multiplying eq. 1 by a_21 and eq. 2 by a_11 gives:
a_21 a_11 x_1 + a_21 a_12 x_2 = a_21 b_1
a_11 a_21 x_1 + a_11 a_22 x_2 = a_11 b_2
2 Subtracting the second of the resulting equations from the first gives:
(a_21 a_12 - a_11 a_22) x_2 = a_21 b_1 - a_11 b_2
from which one obtains:
x_2 = (a_21 b_1 - a_11 b_2) / (a_21 a_12 - a_11 a_22)
and then, by back substitution into eq. 1:
x_1 = (b_1 - a_12 x_2) / a_11 = (a_12 b_2 - a_22 b_1) / (a_21 a_12 - a_11 a_22)
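The two-equation elimination above can be checked numerically; a minimal Python sketch (the test system 3x_1 + 2x_2 = 18, -x_1 + 2x_2 = 2 is hypothetical, chosen so the exact solution is x_1 = 4, x_2 = 3):

```python
def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 system by elimination of unknowns (assumes a non-singular system)."""
    det = a21 * a12 - a11 * a22          # denominator produced by the subtraction step
    x2 = (a21 * b1 - a11 * b2) / det     # eliminate x1, solve for x2
    x1 = (b1 - a12 * x2) / a11           # back-substitute into equation 1
    return x1, x2

# Hypothetical example: 3x1 + 2x2 = 18, -x1 + 2x2 = 2
x1, x2 = solve_2x2(3.0, 2.0, -1.0, 2.0, 18.0, 2.0)
```

Note that both unknowns share the same denominator: up to sign, it is the determinant of the coefficient matrix, which is why a_21 a_12 - a_11 a_22 != 0 is required.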

11 Gauss elimination: principles Extend and formalize the elimination-of-unknowns approach to systems with more than 2 equations. Two-step approach (case n = 3):
1 Forward elimination: convert A into upper triangular form,
a_11 x_1 + a_12 x_2 + a_13 x_3 = b_1         a_11 x_1 + a_12 x_2 + a_13 x_3 = b_1
a_21 x_1 + a_22 x_2 + a_23 x_3 = b_2   -->            a'_22 x_2 + a'_23 x_3 = b'_2
a_31 x_1 + a_32 x_2 + a_33 x_3 = b_3                             a''_33 x_3 = b''_3
2 Back substitution: solve the upper triangular system,
x_3 = b''_3 / a''_33
x_2 = (b'_2 - a'_23 x_3) / a'_22
x_1 = (b_1 - a_12 x_2 - a_13 x_3) / a_11
Main difficulty: How to formulate the upper triangular system?

12 Gauss elimination: procedure Step 1 of forward elimination: eliminate x_1 from rows 2, ..., n. Initial system:
a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1    <- 1st pivot equation
a_21 x_1 + a_22 x_2 + a_23 x_3 + ... + a_2n x_n = b_2
...
a_n1 x_1 + a_n2 x_2 + a_n3 x_3 + ... + a_nn x_n = b_n
Multiply row 1 by a_21 / a_11 (a_11 != 0 is the 1st pivot element):
a_21 x_1 + (a_21/a_11) a_12 x_2 + ... + (a_21/a_11) a_1n x_n = (a_21/a_11) b_1
Subtract the result from row 2:
( a_22 - (a_21/a_11) a_12 ) x_2 + ... + ( a_2n - (a_21/a_11) a_1n ) x_n = b_2 - (a_21/a_11) b_1
i.e. a'_22 x_2 + ... + a'_2n x_n = b'_2

13 Gauss elimination: procedure Step 1 of forward elimination (cont'd): repeat the same steps for rows 3, ..., n: multiply row 1 by a_k1 / a_11 and subtract the result from row k. This yields:
a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
          a'_22 x_2 + a'_23 x_3 + ... + a'_2n x_n = b'_2
          a'_32 x_2 + a'_33 x_3 + ... + a'_3n x_n = b'_3
          ...
          a'_n2 x_2 + a'_n3 x_3 + ... + a'_nn x_n = b'_n

14 Gauss elimination: procedure (cont'd) Step 2 of forward elimination: eliminate x_2 from rows 3, ..., n. Modified subsystem:
a'_22 x_2 + a'_23 x_3 + ... + a'_2n x_n = b'_2    <- 2nd pivot equation
a'_32 x_2 + a'_33 x_3 + ... + a'_3n x_n = b'_3
...
a'_n2 x_2 + a'_n3 x_3 + ... + a'_nn x_n = b'_n
Multiply row 2 by a'_32 / a'_22 (a'_22 != 0 is the 2nd pivot element):
a'_32 x_2 + (a'_32/a'_22) a'_23 x_3 + ... + (a'_32/a'_22) a'_2n x_n = (a'_32/a'_22) b'_2
Subtract the result from row 3:
( a'_33 - (a'_32/a'_22) a'_23 ) x_3 + ... + ( a'_3n - (a'_32/a'_22) a'_2n ) x_n = b'_3 - (a'_32/a'_22) b'_2
i.e. a''_33 x_3 + ... + a''_3n x_n = b''_3

15 Gauss elimination: procedure (cont'd) Step 2 of forward elimination (cont'd): repeat the same steps for rows 4, ..., n: multiply row 2 by a'_k2 / a'_22 and subtract the result from row k. This yields:
a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
          a'_22 x_2 + a'_23 x_3 + ... + a'_2n x_n = b'_2
                      a''_33 x_3 + ... + a''_3n x_n = b''_3
                      ...
                      a''_n3 x_3 + ... + a''_nn x_n = b''_n

16 Gauss elimination: procedure (cont'd) After step n-1 of forward elimination, an upper triangular system is obtained:
a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
          a'_22 x_2 + a'_23 x_3 + ... + a'_2n x_n = b'_2
                      a''_33 x_3 + ... + a''_3n x_n = b''_3
                      ...
                               a_nn^(n-1) x_n = b_n^(n-1)
Back substitution: solve row n for x_n, then back-substitute the result into row n-1 to solve for x_{n-1}, and so forth:
x_n = b_n^(n-1) / a_nn^(n-1)
x_i = ( b_i^(i-1) - sum_{j=i+1}^{n} a_ij^(i-1) x_j ) / a_ii^(i-1),   for i = n-1, ..., 1

17 Gauss elimination: example Use Gauss elimination to solve the linear algebraic equations
x + y + z + w = 0
x - 2y + 2z - 2w = 4
x + 4y - 4z + w = 2
x - 5y - 5z - 3w = 4

18 Gauss elimination: algorithm statement
Forward elimination:
Loop: row = 1:n-1
  Loop: i = (row+1):n
    factor = a[i,row] / a[row,row]
    Loop: j = row:n   # or use row+1:n
      a[i,j] = a[i,j] - factor * a[row,j]
    End Loop
    b[i] = b[i] - factor * b[row]
  End Loop
End Loop
Back substitution:
x[n] = b[n] / a[n,n]
Loop: row = n-1:1 in steps of -1
  sums = b[row]
  Loop: j = (row+1):n
    sums = sums - a[row,j] * x[j]
  End Loop
  x[row] = sums / a[row,row]
End Loop
Counting floating-point operations (FLOPs), with addition/subtraction and multiplication/division each counted as one FLOP:
Forward elimination: 2/3 n^3 + O(n^2)    Back substitution: n^2/2 + O(n)
Forward elimination dominates as n becomes large:
n = 10^3: about 7 x 10^8 FLOPs (~1 CPU sec)
n = 10^4: about 7 x 10^11 FLOPs (~1,000 CPU sec)
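The pseudocode above translates directly; a minimal Python sketch of naive Gauss elimination (no pivoting), checked against a hypothetical 3x3 system whose exact solution is x = [1, 2, 3]:

```python
def gauss_solve(A, b):
    """Naive Gauss elimination (no pivoting): forward elimination, then back
    substitution. A is a list of row lists, b a list; both are copied first."""
    n = len(b)
    a = [row[:] for row in A]
    b = b[:]
    # Forward elimination: zero out each column below its pivot row
    for col in range(n - 1):
        for i in range(col + 1, n):
            factor = a[i][col] / a[col][col]
            for j in range(col, n):
                a[i][j] -= factor * a[col][j]
            b[i] -= factor * b[col]
    # Back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i]
        for j in range(i + 1, n):
            s -= a[i][j] * x[j]
        x[i] = s / a[i][i]
    return x

# Hypothetical test system with exact solution [1, 2, 3]
A = [[2.0, 1.0, 1.0], [1.0, 3.0, 1.0], [1.0, 1.0, 4.0]]
b = [7.0, 10.0, 15.0]
x = gauss_solve(A, b)
```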

19 Gauss elimination: sensitivity to round-off error Consider the following linear algebraic equations:
0.0003 x_1 + 3.0000 x_2 = 2.0001
1.0000 x_1 + 1.0000 x_2 = 1.0000
The first pivot element is a_11 = 0.0003: close to zero! Application of Gauss elimination yields:
x_2 = 2/3,   x_1 = (2.0001 - 3 x_2) / 0.0003
x_1 is very sensitive to subtractive cancellation: the fewer significant figures carried in x_2, the larger the relative error eps_rel in x_1. (Table: computed x_2, x_1 and relative error for increasing numbers of significant figures.)

20 Improved Gauss elimination: pivoting Consider the following linear algebraic equations:
2x_2 + 3x_3 = 8
4x_1 + 6x_2 + 7x_3 = 3
2x_1 + x_2 + 6x_3 = 5
The first pivot element is a_11 = 0: application of naive Gauss elimination results in a division by zero! A similar problem arises if a pivot element is merely close to zero.
Workaround: permute the system rows/columns so that the pivot elements are always the largest elements; this is known as pivoting.
Partial pivoting: switch rows so that the largest element becomes the pivot element.
Complete pivoting: switch rows and columns; rarely used.
Pivoting also minimizes round-off error.

21 Improved Gauss elimination: partial pivoting Step 1 of forward elimination: eliminate x_1 from (n-1) rows. Initial system:
a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + a_23 x_3 + ... + a_2n x_n = b_2
...
a_n1 x_1 + a_n2 x_2 + a_n3 x_3 + ... + a_nn x_n = b_n
Permute row 1 with row l_1 such that |a_{l_1,1}| = max over i = 1, ..., n of |a_{i1}|. Apply the 1st step of forward elimination to the permuted system:
a_{l_1,1} x_1 + a_{l_1,2} x_2 + a_{l_1,3} x_3 + ... + a_{l_1,n} x_n = b_{l_1}
               a'_22 x_2 + a'_23 x_3 + ... + a'_2n x_n = b'_2
               a'_32 x_2 + a'_33 x_3 + ... + a'_3n x_n = b'_3
               ...
               a'_n2 x_2 + a'_n3 x_3 + ... + a'_nn x_n = b'_n

22 Improved Gauss elimination: partial pivoting (cont'd) Step 2 of forward elimination: eliminate x_2 from (n-2) rows. Apply the same procedure as in step 1 to the new subsystem:
a'_22 x_2 + a'_23 x_3 + ... + a'_2n x_n = b'_2
a'_32 x_2 + a'_33 x_3 + ... + a'_3n x_n = b'_3
...
a'_n2 x_2 + a'_n3 x_3 + ... + a'_nn x_n = b'_n
After step n-1 of forward elimination, an upper triangular system is obtained, with the rows reordered according to the successive pivot choices l_1, l_2, ..., l_n.
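The row-swapping step folds into the elimination loop with a few extra lines; a minimal sketch (the test system is the a_11 = 0 example, with the signs read as transcribed; the check is on the residual A x - b rather than on a hand-computed solution):

```python
def gauss_solve_pivot(A, b):
    """Gauss elimination with partial pivoting: before eliminating column k,
    swap in the row whose entry in column k has the largest magnitude."""
    n = len(b)
    a = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # Partial pivoting: largest |a[i][k]| for i >= k becomes the pivot
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        if p != k:
            a[k], a[p] = a[p], a[k]
            b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            factor = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= factor * a[k][j]
            b[i] -= factor * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(a[i][j] * x[j] for j in range(i + 1, n))) / a[i][i]
    return x

# The a11 = 0 system: naive elimination would divide by zero here
A = [[0.0, 2.0, 3.0], [4.0, 6.0, 7.0], [2.0, 1.0, 6.0]]
b = [8.0, 3.0, 5.0]
x = gauss_solve_pivot(A, b)
```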

23 Improved Gauss elimination: example Use Gauss elimination with partial pivoting to solve the linear algebraic equations
2x_2 + 3x_3 = 8
4x_1 + 6x_2 + 7x_3 = 3
2x_1 + x_2 + 6x_3 = 5
and
5x_1 + x_3 = 2
5x_2 - 3x_3 = 1
x_1 - 3x_2 + 2x_3 = 3

24 Variant: Gauss-Jordan method Main ideas:
Pursue the elimination step until the identity matrix is obtained, rather than an upper triangular matrix.
The back substitution step is no longer necessary, but the FLOP count increases to n^3 + O(n^2).
At each step:
1 Normalize the current row by dividing it by its pivot element.
2 Eliminate an unknown from all other equations, not just the subsequent equations.
Exercise: use the Gauss-Jordan method to solve the linear algebraic equations
2x_2 + 3x_3 = 8
4x_1 + 6x_2 + 7x_3 = 3
2x_1 + x_2 + 6x_3 = 5
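The two per-step operations above can be sketched as follows; partial pivoting is included (an assumption beyond this slide, but needed since the exercise system has a_11 = 0), and the check is on the residual:

```python
def gauss_jordan_solve(A, b):
    """Gauss-Jordan: normalize each pivot row, then eliminate that unknown from
    ALL other rows, reducing A to the identity; no back substitution needed."""
    n = len(b)
    a = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for k in range(n):
        # Partial pivoting to avoid a zero pivot
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        pivot = a[k][k]
        a[k] = [v / pivot for v in a[k]]           # 1. normalize the pivot row
        for i in range(n):                         # 2. eliminate from ALL other rows
            if i != k:
                factor = a[i][k]
                a[i] = [vi - factor * vk for vi, vk in zip(a[i], a[k])]
    return [a[i][n] for i in range(n)]

A = [[0.0, 2.0, 3.0], [4.0, 6.0, 7.0], [2.0, 1.0, 6.0]]
b = [8.0, 3.0, 5.0]
x = gauss_jordan_solve(A, b)
```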

25 Gauss elimination: wrap-up
One of the most fundamental methods for solving simultaneous linear algebraic equations.
Easy to automate in a computer program.
Extremely effective for the solution of many engineering problems.
Solutions may be sensitive to round-off error and scaling: use a partial pivoting strategy, and make computations using extended precision.
Readings: Chapter 9 in: S. C. Chapra and R. P. Canale, Numerical Methods for Engineers, McGraw-Hill, 5th/6th Edition.

26 Why do things differently? Idea: Modify the time-consuming elimination step so that it only involves operations on the coefficient matrix A Why? Provides an efficient means to solve equations with same coefficient matrix A, but many different right-hand-side vectors b Provides an efficient means to compute the matrix inverse A 1 Outline: 1 LU Decomposition 2 Matrix Inverse 3 Error Analysis and System Condition 26

27 LU decomposition for linear equation solving Consider A x = b, with A = [a_ij] (n x n) and b = [b_1, ..., b_n]^T. Suppose A can be decomposed into the product L U, where L is lower triangular and U is upper triangular:
L = [l_ij] with l_ij = 0 for i < j,   U = [u_ij] with u_ij = 0 for i > j
Substitution steps:
1 Forward substitution step: find y such that L y = b
2 Back substitution step: find x such that U x = y
Main difficulty: How to calculate an LU decomposition?

28 Performing LU decomposition Main variants:
Doolittle decomposition: 1's on the diagonal of L   <- our focus
Crout decomposition: 1's on the diagonal of U
The good news: Gauss elimination can be used to decompose A into L U. The upper triangular matrix U is simply the matrix produced by forward elimination:
U = | a_11  a_12  ...  a_1n       |
    |       a'_22 ...  a'_2n      |
    |              ...            |
    |                  a_nn^(n-1) |
with a_ij^(k) = a_ij^(k-1) - ( a_ik^(k-1) / a_kk^(k-1) ) a_kj^(k-1),  i, j = k+1, ..., n

29 Performing LU decomposition The good news (cont'd): Gauss elimination also produces the lower triangular matrix L. By application of the Gauss elimination procedure to the right-hand side, we have:
[ b_1 ]   [ 1                   ] [ b_1       ]
[ b_2 ] = [ l_21  1             ] [ b'_2      ]
[ ... ]   [ ...        ...      ] [ ...       ]
[ b_n ]   [ l_n1  l_n2  ...  1  ] [ b_n^(n-1) ]
(why?)

30 Performing LU decomposition (cont'd) Obtaining the lower triangular matrix L: at the first step of Gauss elimination,
b'_i = b_i - (a_i1 / a_11) b_1,  i >= 2
so that
[ b_1 ]   [ 1                     ] [ b_1  ]
[ b_2 ] = [ a_21/a_11  1          ] [ b'_2 ]
[ ... ]   [ ...            ...    ] [ ...  ]
[ b_n ]   [ a_n1/a_11          1  ] [ b'_n ]
At the second step of Gauss elimination, b''_i = b'_i - (a'_i2 / a'_22) b'_2 for i >= 3, and the second column of multipliers a'_i2 / a'_22 is appended to the matrix in the same way.

31 Performing LU decomposition (cont'd) And so on for the n-1 steps:
[ b_1 ]   [ 1                                                  ] [ b_1       ]
[ b_2 ] = [ a_21/a_11   1                                      ] [ b'_2      ]
[ ... ]   [ ...         a'_32/a'_22  ...                       ] [ ...       ]
[ b_n ]   [ a_n1/a_11   a'_n2/a'_22  ...  l_{n,n-1}        1   ] [ b_n^(n-1) ]
where l_{n,n-1} = a_{n,n-1}^(n-2) / a_{n-1,n-1}^(n-2), and the matrix of multipliers is precisely L.

32 LU decomposition: procedure At each step k = 1, ..., n-1:
1 Calculate and store the factor elements in the kth column of L:
l_ik = a_ik^(k-1) / a_kk^(k-1),  i = k+1, ..., n
2 Perform Gauss elimination in the kth column of A:
a_ij^(k) = a_ij^(k-1) - l_ik a_kj^(k-1),  i, j = k+1, ..., n
Remarks:
Compact representation: store the factor elements l_ik in the lower part of the coefficient matrix; the corresponding entries are converted to zeros anyway.
Pivoting: should be performed as in Gauss elimination to avoid division by zero and mitigate round-off errors; keep track of the row permutations by using an order vector p.
Complexity: the number of FLOPs is 2/3 n^3 + O(n^2).
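The two-step procedure above, as a minimal Python sketch (Doolittle-style, no pivoting, so nonzero pivots are assumed; the 3x3 matrix used for checking is hypothetical, and the tests verify L U = A and the residual of the solve):

```python
def lu_decompose(A):
    """Doolittle-style LU decomposition without pivoting: L has 1's on its
    diagonal and stores the Gauss elimination factors l_ik below it."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]            # step 1: factor l_ik
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]       # step 2: Gauss elimination
    return L, U

def lu_solve(L, U, b):
    """Forward substitution (L y = b), then back substitution (U x = y)."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[4.0, 3.0, -1.0], [2.0, 4.0, 1.0], [1.0, 2.0, 3.0]]
b = [5.0, 9.0, 11.0]
L, U = lu_decompose(A)
x = lu_solve(L, U, b)
```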

33 LU decomposition for linear equation solving: example Perform an LU decomposition of the matrix A (without pivoting), then find the solution to the linear equations Ax = b, where: A = 1 3 1, and b =

34 LU decomposition: application to matrix inversion Idea (do you see why?): compute the inverse A^-1 of a (nonsingular) square matrix A in a column-by-column fashion:
The 1st column of A^-1 can be obtained by solving the linear equations A x = e_1, with e_1 = [1 0 ... 0]^T
The 2nd column of A^-1 can be obtained by solving the linear equations A x = e_2, with e_2 = [0 1 0 ... 0]^T
And so on.
Procedure: perform the LU decomposition of A only once, then perform the forward- and back-substitution steps n times, once for each right-hand-side vector e_1, ..., e_n.
Complexity: the number of FLOPs for the matrix inverse is 4/3 n^3 + O(n^2).
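A sketch of this column-by-column inversion, again assuming no pivoting is needed (the 2x2 test matrix is hypothetical; its exact inverse is [[0.6, -0.2], [-0.2, 0.4]]):

```python
def invert(A):
    """Invert a nonsingular matrix column by column: factor A = L U once,
    then solve A x = e_j for each unit vector e_j."""
    n = len(A)
    # Single LU factorization (Doolittle-style, no pivoting)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    inv = [[0.0] * n for _ in range(n)]
    for col in range(n):
        # One forward/back substitution pair per unit vector e_col
        e = [1.0 if i == col else 0.0 for i in range(n)]
        y = [0.0] * n
        for i in range(n):
            y[i] = e[i] - sum(L[i][j] * y[j] for j in range(i))
        for i in range(n - 1, -1, -1):
            s = y[i] - sum(U[i][j] * inv[j][col] for j in range(i + 1, n))
            inv[i][col] = s / U[i][i]
    return inv

Ainv = invert([[2.0, 1.0], [1.0, 3.0]])
```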

35 LU decomposition for matrix inversion: example Perform an LU decomposition of the matrix A, then calculate its inverse A 1, for: A =

36 Error analysis: did everything go well? Ways to check the conditioning of the system A x = b:
Multiply A^-1 by A and assess whether the result is close to the identity: A A^-1 =? I
Invert A^-1 and assess whether the result is close to A: [A^-1]^-1 =? A
Example: for the 5 x 5 Hilbert matrix (MATLAB, single precision), the computed product A A^-1 differs visibly from the identity matrix.
Goal: obtain a single number as an indicator of ill-conditioning.

37 Vector and matrix norms A norm is a real-valued function that provides a measure of the size of vectors/matrices.
(Some) vector norms:
Euclidean norm: ||x||_2 = sqrt( sum_{k=1}^{n} x_k^2 )
Taxicab norm: ||x||_1 = sum_{k=1}^{n} |x_k|
Uniform norm: ||x||_inf = max_{1<=k<=n} |x_k|
(Some) matrix norms:
Frobenius norm: ||A||_e = sqrt( sum_{i=1}^{n} sum_{j=1}^{n} a_ij^2 )
Column-sum norm: ||A||_1 = max_{1<=j<=n} sum_{i=1}^{n} |a_ij|
Row-sum norm: ||A||_inf = max_{1<=i<=n} sum_{j=1}^{n} |a_ij|
Exercise: for a given matrix A, calculate ||A||_e, ||A||_1 and ||A||_inf.
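The three matrix norms can be sketched directly from their definitions (the 2x2 example matrix is hypothetical, since the slide's exercise matrix did not survive transcription):

```python
def norms(A):
    """Frobenius, column-sum (1-), and row-sum (infinity-) norms of a matrix."""
    frob = sum(v * v for row in A for v in row) ** 0.5
    col_sum = max(sum(abs(A[i][j]) for i in range(len(A)))
                  for j in range(len(A[0])))
    row_sum = max(sum(abs(v) for v in row) for row in A)
    return frob, col_sum, row_sum

# Hypothetical example: ||A||_e = sqrt(30), ||A||_1 = 6, ||A||_inf = 7
frob, n1, ninf = norms([[3.0, -4.0], [1.0, 2.0]])
```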

38 Matrix condition number The condition number of a matrix A, denoted cond(A), is
cond(A) = ||A|| ||A^-1|| >= 1
The larger cond(A), the more ill-conditioned the matrix. For the linear algebraic equations A x = b,
(1/cond(A)) ||Δb||/||b|| <= ||Δx||/||x|| <= cond(A) ||Δb||/||b||
Implication: if the a's are known to t-digit precision and cond(A) ≈ 10^c, then the solution x may only be valid to t - c digits!
Example: Hilbert matrix (cont'd). For the 5 x 5 Hilbert matrix A, with a_ij = 1/(i + j - 1):
A = [ 1   1/2 1/3 1/4 1/5
      1/2 1/3 1/4 1/5 1/6
      1/3 1/4 1/5 1/6 1/7
      1/4 1/5 1/6 1/7 1/8
      1/5 1/6 1/7 1/8 1/9 ]
cond(A) = ||A||_inf ||A^-1||_inf is of the order 10^5, so roughly 5 digits of precision can be lost.
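The order-of-magnitude claim for the Hilbert matrix can be checked with exact rational arithmetic, sidestepping round-off entirely; a sketch using Python's fractions module and Gauss-Jordan inversion (no pivoting, which is safe here because the Hilbert matrix is positive definite, so every pivot is nonzero):

```python
from fractions import Fraction

def inf_norm(A):
    """Row-sum (infinity) norm, exact for Fraction entries."""
    return max(sum(abs(v) for v in row) for row in A)

def invert_exact(A):
    """Gauss-Jordan inversion of [A | I] in exact rational arithmetic."""
    n = len(A)
    a = [[Fraction(v) for v in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        piv = a[k][k]
        a[k] = [v / piv for v in a[k]]             # normalize pivot row
        for i in range(n):
            if i != k:                             # eliminate from all other rows
                f = a[i][k]
                a[i] = [vi - f * vk for vi, vk in zip(a[i], a[k])]
    return [row[n:] for row in a]

# 5x5 Hilbert matrix, a_ij = 1/(i + j - 1) with 1-based indices
H = [[Fraction(1, i + j + 1) for j in range(5)] for i in range(5)]
cond = float(inf_norm(H) * inf_norm(invert_exact(H)))
```

The computed condition number lands in the 10^5 range, consistent with several digits of precision being lost when this system is solved in floating point.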

39 LU decomposition and matrix inversion: wrap-up Readings: Chapter 10 in: S C Chapra, and R P Canale, Numerical Methods for Engineers, McGraw Hill, 5th/6th Edition 39

40 Elimination vs iterative methods A x = b, with A in R^(n x n) and b in R^n.
Elimination methods (Gauss elimination, LU decomposition):
Return an estimate of the solution after a fixed number of operations: FLOPs = 2/3 n^3 + O(n^2).
Sensitive to round-off errors when cond(A) >> 1; the situation gets worse as n increases.
Iterative methods (Gauss-Seidel, Jacobi):
Start with an initial guess that is successively refined to get closer to the solution.
An estimate of the solution is obtained upon convergence, but the method may not converge.
Well suited to large-scale systems of linear equations.
Number of FLOPs not known a priori.

41 Jacobi and Gauss-Seidel methods: principles Idea of iterative methods: create a sequence of estimates x^(0), x^(1), ..., x^(k), ... converging to the solution x of the linear equations A x = b (provided it exists). Notion of algorithm:
x^(k+1) = A(x^(k)), k >= 0;  x^(0) given
1 Need to pick an initial guess x^(0)
2 Need to define the algorithm A
3 Need to specify a termination criterion
Crucial questions: Does the algorithm converge? If so, how fast is the convergence?

42 Jacobi method: algorithm statement Consider the system
a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + a_23 x_3 + ... + a_2n x_n = b_2
...
a_n1 x_1 + a_n2 x_2 + a_n3 x_3 + ... + a_nn x_n = b_n
At a given iteration k >= 0, consider the 1st equation. Principle: use the current estimates x_2^(k), x_3^(k), ..., x_n^(k) to calculate a new estimate x_1^(k+1):
x_1^(k+1) = b_1/a_11 - (a_12/a_11) x_2^(k) - (a_13/a_11) x_3^(k) - ... - (a_1n/a_11) x_n^(k)
Caution: the x_1 update requires a_11 != 0.

43 Jacobi method: algorithm statement (cont'd) Next, consider the 2nd equation. Principle: use the current estimates x_1^(k), x_3^(k), ..., x_n^(k) to calculate a new estimate x_2^(k+1):
x_2^(k+1) = b_2/a_22 - (a_21/a_22) x_1^(k) - (a_23/a_22) x_3^(k) - ... - (a_2n/a_22) x_n^(k)
Caution: the x_2 update requires a_22 != 0.
Jacobi method: update rule. For each variable x_i, i = 1, ..., n,
x_i^(k+1) = (1/a_ii) ( b_i - sum_{j=1, j != i}^{n} a_ij x_j^(k) ),  a_ii != 0
using the previous estimates x_1^(k), ..., x_n^(k).

44 Jacobi method: initialization and termination Initialization: if a good guess for the solution is not known, x_1^(0), ..., x_n^(0) can be chosen randomly, e.g. x_1^(0) = ... = x_n^(0) = 0. The initial guess plays no role in whether convergence takes place: if the procedure converges from one point, it converges from any starting point. Some initial guesses simply require fewer iterations than others.
Termination: stop when the relative change in the iterates becomes small,
| x_i^(k) - x_i^(k-1) | / | x_i^(k) | < eps_rel,  for each i = 1, ..., n
with eps_rel a user-specified tolerance; or, stop when the ratio of norms ||x^(k) - x^(k-1)|| / ||x^(k)|| is below eps_rel.

45 Jacobi method: initialization Good initial guesses for m i? 45

46 Jacobi method: example Use the Jacobi method to solve the linear algebraic equations
2x_1 + 3x_2 - 2x_3 = 7
x_1 + 2x_2 + x_3 = 9
x_1 + x_2 + 2x_3 = 8
from the starting point x_1^(0) = x_2^(0) = x_3^(0) = 0, and with termination tolerances 1%, 0.1%, 0.01%. (Table: number of iterations and final estimates x_1, x_2, x_3 for each tolerance.)

47 Jacobi method: example (cont'd) (Figure: Jacobi iterates x_1, x_2, x_3 versus iteration number.)

48 Jacobi method: algorithm statement Initialization:
maxiter: maximum number of iterations
tol: user-specified tolerance
x[:,1]: initial guess for the solution
Loop: k = 1, maxiter
  Loop: i = 1, n
    subs = 0
    Loop: j = 1, n
      Test: i ~= j?
        subs = subs + a[i,j] * x[j,k]
      End Test
    End Loop
    x[i,k+1] = ( b[i] - subs ) / a[i,i]
  End Loop
  rel_diff = norm(x[:,k+1] - x[:,k]) / norm(x[:,k+1])
  Test: rel_diff <= tol?
    Exit
  End Test
End Loop
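The pseudocode above translates into a minimal Python sketch; the diagonally dominant test system is hypothetical (not one of the slide examples), chosen so the exact solution is x = [1, 1, 1]:

```python
def jacobi(A, b, x0, tol=1e-6, maxiter=200):
    """Jacobi iteration: every component of the new estimate is computed from
    the PREVIOUS iterate only. Returns (x, number of iterations)."""
    n = len(b)
    x = x0[:]
    for k in range(maxiter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        # relative change, measured with the Euclidean norm
        diff = sum((xn - xo) ** 2 for xn, xo in zip(x_new, x)) ** 0.5
        size = sum(v * v for v in x_new) ** 0.5
        x = x_new
        if size > 0 and diff / size < tol:
            return x, k + 1
    return x, maxiter

# Hypothetical diagonally dominant system; exact solution [1, 1, 1]
x, iters = jacobi([[10.0, 1.0, 1.0], [1.0, 10.0, 1.0], [1.0, 1.0, 10.0]],
                  [12.0, 12.0, 12.0], [0.0, 0.0, 0.0])
```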

49 Gauss-Seidel method: a variant Idea of the Gauss-Seidel method: update each element x_i, always using the latest available values of the other elements x_1, ..., x_{i-1}, x_{i+1}, ..., x_n.
Jacobi vs Gauss-Seidel:
Jacobi: defer use of the new values until the next iteration.
Gauss-Seidel: use each new value immediately in the next equation.
Gauss-Seidel method: update rule. For each variable x_i, i = 1, ..., n,
x_i^(k+1) = (1/a_ii) ( b_i - sum_{j=1}^{i-1} a_ij x_j^(k+1) - sum_{j=i+1}^{n} a_ij x_j^(k) ),  a_ii != 0
using the latest estimates x_1^(k+1), ..., x_{i-1}^(k+1) and x_{i+1}^(k), ..., x_n^(k).
Using the best available estimates usually accelerates convergence: typically about 2x faster than the Jacobi method.
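A matching Gauss-Seidel sketch: the only structural change from Jacobi is that each sweep overwrites x in place, so later components see the values already updated within the same sweep (the diagonally dominant test system is hypothetical, with exact solution [1, 1, 1]):

```python
def gauss_seidel(A, b, x0, tol=1e-6, maxiter=200):
    """Gauss-Seidel iteration: each new component is used immediately in the
    updates that follow it within the same sweep."""
    n = len(b)
    x = x0[:]
    for k in range(maxiter):
        x_old = x[:]
        for i in range(n):
            # x[j] already holds the NEW values for j < i, the old ones for j > i
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        diff = sum((xn - xo) ** 2 for xn, xo in zip(x, x_old)) ** 0.5
        size = sum(v * v for v in x) ** 0.5
        if size > 0 and diff / size < tol:
            return x, k + 1
    return x, maxiter

x, iters = gauss_seidel([[10.0, 1.0, 1.0], [1.0, 10.0, 1.0], [1.0, 1.0, 10.0]],
                        [12.0, 12.0, 12.0], [0.0, 0.0, 0.0])
```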

50 Gauss-Seidel method: example 1 Use the Gauss-Seidel method to solve the linear algebraic equations
2x_1 + 3x_2 - 2x_3 = 7
x_1 + 2x_2 + x_3 = 9
x_1 + x_2 + 2x_3 = 8
from the starting point x_1^(0) = x_2^(0) = x_3^(0) = 0, and with termination tolerances 1%, 0.1%, 0.01%. (Table: number of iterations and final estimates x_1, x_2, x_3 for each tolerance.)

51 Gauss-Seidel method: example 1 (cont'd) (Figure: Gauss-Seidel iterates x_1, x_2, x_3 versus iteration number.)

52 Gauss-Seidel method: example 2 Use the Gauss-Seidel method to solve the linear algebraic equations
2x_1 - 3x_2 - 2x_3 = 7
x_1 + 2x_2 + x_3 = 9
x_1 + x_2 + 2x_3 = 8
(Figure: Gauss-Seidel iterates x_1, x_2, x_3 versus iteration number.)

53 Jacobi and Gauss-Seidel methods: convergence Difficulty: convergence of iterative methods is not guaranteed! Sufficient condition for convergence: diagonal dominance.
1 In each row, the magnitude of the diagonal entry is larger than or equal to the sum of the magnitudes of all the off-diagonal entries:
|a_ii| >= sum_{j=1, j != i}^{n} |a_ij|,  for all i = 1, ..., n
2 In at least one row, the magnitude of the diagonal entry is strictly larger than the sum of the magnitudes of all the off-diagonal entries:
|a_ii| > sum_{j=1, j != i}^{n} |a_ij|,  for at least one i = 1, ..., n
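The two conditions can be checked mechanically; a minimal sketch (both example matrices are hypothetical):

```python
def is_diagonally_dominant(A):
    """Check the sufficient convergence condition: |a_ii| >= sum of off-diagonal
    magnitudes in every row, with strict inequality in at least one row."""
    n = len(A)
    strict = False
    for i in range(n):
        off = sum(abs(A[i][j]) for j in range(n) if j != i)
        if abs(A[i][i]) < off:
            return False          # condition 1 violated in row i
        if abs(A[i][i]) > off:
            strict = True         # condition 2 satisfied in row i
    return strict

# Hypothetical examples
dominant = is_diagonally_dominant([[4.0, 1.0, 2.0], [1.0, 5.0, 3.0], [2.0, 1.0, 4.0]])
not_dominant = is_diagonally_dominant([[1.0, 3.0], [2.0, 1.0]])
```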

54 Jacobi and Gauss-Seidel methods: convergence (cont'd) Exercise: which of the given linear algebraic systems A x = b are diagonally dominant?
Important remarks:
1 The diagonal dominance condition is rather conservative, yet easy to check.
2 An iterative method may or may not converge for a non-diagonally dominant system.
3 Diagonal dominance can be obtained by simply pivoting the rows of the system.
4 Pivoting the columns can also help, although this modifies the order of the solution vector x.

55 Jacobi and Gauss-Seidel methods: relaxation Idea: reduce the adaptation step so that the algorithm is less aggressive. How? Mix a certain fraction of the full-step Jacobi/Gauss-Seidel update with the previous estimate:
x_i^(k+1) <- ω x_i^(k+1) + (1 - ω) x_i^(k)
ω = 1: full-step Jacobi/Gauss-Seidel
0 < ω < 1: under-relaxation, achieves more reliable convergence
1 < ω <= 2: over-relaxation, speeds up convergence
ω > 2: always diverges
Relaxed Gauss-Seidel method: update rule. For each variable x_i, i = 1, ..., n,
x_i^(k+1) = (1 - ω) x_i^(k) + (ω/a_ii) ( b_i - sum_{j=1}^{i-1} a_ij x_j^(k+1) - sum_{j=i+1}^{n} a_ij x_j^(k) ),  a_ii != 0
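The relaxed update rule, as a minimal sketch; the 2x2 test system and the choice ω = 0.5 are hypothetical, and the check is on the residual of the converged estimate:

```python
def gauss_seidel_relaxed(A, b, x0, omega, tol=1e-6, maxiter=500):
    """Gauss-Seidel with relaxation: blend the full Gauss-Seidel update with
    the previous value, x_i <- (1 - omega) * x_i_old + omega * x_i_gs."""
    n = len(b)
    x = x0[:]
    for k in range(maxiter):
        x_old = x[:]
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            gs = (b[i] - s) / A[i][i]                   # full Gauss-Seidel step
            x[i] = (1.0 - omega) * x_old[i] + omega * gs
        diff = sum((xn - xo) ** 2 for xn, xo in zip(x, x_old)) ** 0.5
        size = sum(v * v for v in x) ** 0.5
        if size > 0 and diff / size < tol:
            return x, k + 1
    return x, maxiter

# Under-relaxation (omega = 0.5) on a hypothetical diagonally dominant system
A = [[4.0, 1.0], [1.0, 3.0]]
b = [9.0, 7.0]
x, iters = gauss_seidel_relaxed(A, b, [0.0, 0.0], 0.5)
```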

56 Relaxed Gauss-Seidel method: example Use the relaxed Gauss-Seidel method to solve the linear equations
2x_1 - 3x_2 - 2x_3 = 7
x_1 + 2x_2 + x_3 = 9
x_1 + x_2 + 2x_3 = 8
from the starting point x_1^(0) = x_2^(0) = x_3^(0) = 0 and with a given relaxation factor ω. (Figure: relaxed Gauss-Seidel iterates x_1, x_2, x_3 versus iteration number.)

57 The final words Alternative methods for linear equation solving:
Gauss Elimination (with partial pivoting): accuracy affected by round-off error; breadth of application: general.
LU Decomposition: accuracy affected by round-off error; breadth of application: general.
Gauss-Seidel: may diverge; excellent accuracy upon convergence; breadth of application: diagonally dominant systems.
Readings: Chapter 11.2 in: S. C. Chapra and R. P. Canale, Numerical Methods for Engineers, McGraw-Hill, 5th/6th Edition.


More information

Chapter 9: Gaussian Elimination

Chapter 9: Gaussian Elimination Uchechukwu Ofoegbu Temple University Chapter 9: Gaussian Elimination Graphical Method The solution of a small set of simultaneous equations, can be obtained by graphing them and determining the location

More information

CHAPTER 6. Direct Methods for Solving Linear Systems

CHAPTER 6. Direct Methods for Solving Linear Systems CHAPTER 6 Direct Methods for Solving Linear Systems. Introduction A direct method for approximating the solution of a system of n linear equations in n unknowns is one that gives the exact solution to

More information

CS412: Lecture #17. Mridul Aanjaneya. March 19, 2015

CS412: Lecture #17. Mridul Aanjaneya. March 19, 2015 CS: Lecture #7 Mridul Aanjaneya March 9, 5 Solving linear systems of equations Consider a lower triangular matrix L: l l l L = l 3 l 3 l 33 l n l nn A procedure similar to that for upper triangular systems

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

JACOBI S ITERATION METHOD

JACOBI S ITERATION METHOD ITERATION METHODS These are methods which compute a sequence of progressively accurate iterates to approximate the solution of Ax = b. We need such methods for solving many large linear systems. Sometimes

More information

Solving Linear Systems of Equations

Solving Linear Systems of Equations 1 Solving Linear Systems of Equations Many practical problems could be reduced to solving a linear system of equations formulated as Ax = b This chapter studies the computational issues about directly

More information

Review Questions REVIEW QUESTIONS 71

Review Questions REVIEW QUESTIONS 71 REVIEW QUESTIONS 71 MATLAB, is [42]. For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of

More information

Matrix decompositions

Matrix decompositions Matrix decompositions How can we solve Ax = b? 1 Linear algebra Typical linear system of equations : x 1 x +x = x 1 +x +9x = 0 x 1 +x x = The variables x 1, x, and x only appear as linear terms (no powers

More information

Computational Economics and Finance

Computational Economics and Finance Computational Economics and Finance Part II: Linear Equations Spring 2016 Outline Back Substitution, LU and other decomposi- Direct methods: tions Error analysis and condition numbers Iterative methods:

More information

Math 471 (Numerical methods) Chapter 3 (second half). System of equations

Math 471 (Numerical methods) Chapter 3 (second half). System of equations Math 47 (Numerical methods) Chapter 3 (second half). System of equations Overlap 3.5 3.8 of Bradie 3.5 LU factorization w/o pivoting. Motivation: ( ) A I Gaussian Elimination (U L ) where U is upper triangular

More information

Solving Linear Systems of Equations

Solving Linear Systems of Equations November 6, 2013 Introduction The type of problems that we have to solve are: Solve the system: A x = B, where a 11 a 1N a 12 a 2N A =.. a 1N a NN x = x 1 x 2. x N B = b 1 b 2. b N To find A 1 (inverse

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra Decompositions, numerical aspects Gerard Sleijpen and Martin van Gijzen September 27, 2017 1 Delft University of Technology Program Lecture 2 LU-decomposition Basic algorithm Cost

More information

Program Lecture 2. Numerical Linear Algebra. Gaussian elimination (2) Gaussian elimination. Decompositions, numerical aspects

Program Lecture 2. Numerical Linear Algebra. Gaussian elimination (2) Gaussian elimination. Decompositions, numerical aspects Numerical Linear Algebra Decompositions, numerical aspects Program Lecture 2 LU-decomposition Basic algorithm Cost Stability Pivoting Cholesky decomposition Sparse matrices and reorderings Gerard Sleijpen

More information

Numerical Analysis: Solutions of System of. Linear Equation. Natasha S. Sharma, PhD

Numerical Analysis: Solutions of System of. Linear Equation. Natasha S. Sharma, PhD Mathematical Question we are interested in answering numerically How to solve the following linear system for x Ax = b? where A is an n n invertible matrix and b is vector of length n. Notation: x denote

More information

. =. a i1 x 1 + a i2 x 2 + a in x n = b i. a 11 a 12 a 1n a 21 a 22 a 1n. i1 a i2 a in

. =. a i1 x 1 + a i2 x 2 + a in x n = b i. a 11 a 12 a 1n a 21 a 22 a 1n. i1 a i2 a in Vectors and Matrices Continued Remember that our goal is to write a system of algebraic equations as a matrix equation. Suppose we have the n linear algebraic equations a x + a 2 x 2 + a n x n = b a 2

More information

Numerical Analysis: Solving Systems of Linear Equations

Numerical Analysis: Solving Systems of Linear Equations Numerical Analysis: Solving Systems of Linear Equations Mirko Navara http://cmpfelkcvutcz/ navara/ Center for Machine Perception, Department of Cybernetics, FEE, CTU Karlovo náměstí, building G, office

More information

Lecture 12 (Tue, Mar 5) Gaussian elimination and LU factorization (II)

Lecture 12 (Tue, Mar 5) Gaussian elimination and LU factorization (II) Math 59 Lecture 2 (Tue Mar 5) Gaussian elimination and LU factorization (II) 2 Gaussian elimination - LU factorization For a general n n matrix A the Gaussian elimination produces an LU factorization if

More information

MATH 3511 Lecture 1. Solving Linear Systems 1

MATH 3511 Lecture 1. Solving Linear Systems 1 MATH 3511 Lecture 1 Solving Linear Systems 1 Dmitriy Leykekhman Spring 2012 Goals Review of basic linear algebra Solution of simple linear systems Gaussian elimination D Leykekhman - MATH 3511 Introduction

More information

Direct Methods for Solving Linear Systems. Matrix Factorization

Direct Methods for Solving Linear Systems. Matrix Factorization Direct Methods for Solving Linear Systems Matrix Factorization Numerical Analysis (9th Edition) R L Burden & J D Faires Beamer Presentation Slides prepared by John Carroll Dublin City University c 2011

More information

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count

More information

2.29 Numerical Fluid Mechanics Fall 2011 Lecture 7

2.29 Numerical Fluid Mechanics Fall 2011 Lecture 7 Numerical Fluid Mechanics Fall 2011 Lecture 7 REVIEW of Lecture 6 Material covered in class: Differential forms of conservation laws Material Derivative (substantial/total derivative) Conservation of Mass

More information

Linear Systems of n equations for n unknowns

Linear Systems of n equations for n unknowns Linear Systems of n equations for n unknowns In many application problems we want to find n unknowns, and we have n linear equations Example: Find x,x,x such that the following three equations hold: x

More information

Scientific Computing: Dense Linear Systems

Scientific Computing: Dense Linear Systems Scientific Computing: Dense Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 February 9th, 2012 A. Donev (Courant Institute)

More information

Jordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS

Jordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS Jordan Journal of Mathematics and Statistics JJMS) 53), 2012, pp.169-184 A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS ADEL H. AL-RABTAH Abstract. The Jacobi and Gauss-Seidel iterative

More information

Linear Algebra. Solving Linear Systems. Copyright 2005, W.R. Winfrey

Linear Algebra. Solving Linear Systems. Copyright 2005, W.R. Winfrey Copyright 2005, W.R. Winfrey Topics Preliminaries Echelon Form of a Matrix Elementary Matrices; Finding A -1 Equivalent Matrices LU-Factorization Topics Preliminaries Echelon Form of a Matrix Elementary

More information

MTH 464: Computational Linear Algebra

MTH 464: Computational Linear Algebra MTH 464: Computational Linear Algebra Lecture Outlines Exam 2 Material Prof. M. Beauregard Department of Mathematics & Statistics Stephen F. Austin State University February 6, 2018 Linear Algebra (MTH

More information

CSE 160 Lecture 13. Numerical Linear Algebra

CSE 160 Lecture 13. Numerical Linear Algebra CSE 16 Lecture 13 Numerical Linear Algebra Announcements Section will be held on Friday as announced on Moodle Midterm Return 213 Scott B Baden / CSE 16 / Fall 213 2 Today s lecture Gaussian Elimination

More information

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6 CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6 GENE H GOLUB Issues with Floating-point Arithmetic We conclude our discussion of floating-point arithmetic by highlighting two issues that frequently

More information

NUMERICAL MATHEMATICS & COMPUTING 7th Edition

NUMERICAL MATHEMATICS & COMPUTING 7th Edition NUMERICAL MATHEMATICS & COMPUTING 7th Edition Ward Cheney/David Kincaid c UT Austin Engage Learning: Thomson-Brooks/Cole wwwengagecom wwwmautexasedu/cna/nmc6 October 16, 2011 Ward Cheney/David Kincaid

More information

3.2 Iterative Solution Methods for Solving Linear

3.2 Iterative Solution Methods for Solving Linear 22 CHAPTER 3. NUMERICAL LINEAR ALGEBRA 3.2 Iterative Solution Methods for Solving Linear Systems 3.2.1 Introduction We continue looking how to solve linear systems of the form Ax = b where A = (a ij is

More information

10.2 ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS. The Jacobi Method

10.2 ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS. The Jacobi Method 54 CHAPTER 10 NUMERICAL METHODS 10. ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS As a numerical technique, Gaussian elimination is rather unusual because it is direct. That is, a solution is obtained after

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

Matrix decompositions

Matrix decompositions Matrix decompositions How can we solve Ax = b? 1 Linear algebra Typical linear system of equations : x 1 x +x = x 1 +x +9x = 0 x 1 +x x = The variables x 1, x, and x only appear as linear terms (no powers

More information

5.7 Cramer's Rule 1. Using Determinants to Solve Systems Assumes the system of two equations in two unknowns

5.7 Cramer's Rule 1. Using Determinants to Solve Systems Assumes the system of two equations in two unknowns 5.7 Cramer's Rule 1. Using Determinants to Solve Systems Assumes the system of two equations in two unknowns (1) possesses the solution and provided that.. The numerators and denominators are recognized

More information

2.1 Gaussian Elimination

2.1 Gaussian Elimination 2. Gaussian Elimination A common problem encountered in numerical models is the one in which there are n equations and n unknowns. The following is a description of the Gaussian elimination method for

More information

SOLVING LINEAR SYSTEMS

SOLVING LINEAR SYSTEMS SOLVING LINEAR SYSTEMS We want to solve the linear system a, x + + a,n x n = b a n, x + + a n,n x n = b n This will be done by the method used in beginning algebra, by successively eliminating unknowns

More information

5 Solving Systems of Linear Equations

5 Solving Systems of Linear Equations 106 Systems of LE 5.1 Systems of Linear Equations 5 Solving Systems of Linear Equations 5.1 Systems of Linear Equations System of linear equations: a 11 x 1 + a 12 x 2 +... + a 1n x n = b 1 a 21 x 1 +

More information

AMS 209, Fall 2015 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems

AMS 209, Fall 2015 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems AMS 209, Fall 205 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems. Overview We are interested in solving a well-defined linear system given

More information

GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511)

GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511) GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511) D. ARAPURA Gaussian elimination is the go to method for all basic linear classes including this one. We go summarize the main ideas. 1.

More information

Math 5630: Iterative Methods for Systems of Equations Hung Phan, UMass Lowell March 22, 2018

Math 5630: Iterative Methods for Systems of Equations Hung Phan, UMass Lowell March 22, 2018 1 Linear Systems Math 5630: Iterative Methods for Systems of Equations Hung Phan, UMass Lowell March, 018 Consider the system 4x y + z = 7 4x 8y + z = 1 x + y + 5z = 15. We then obtain x = 1 4 (7 + y z)

More information

Review of Vectors and Matrices

Review of Vectors and Matrices A P P E N D I X D Review of Vectors and Matrices D. VECTORS D.. Definition of a Vector Let p, p, Á, p n be any n real numbers and P an ordered set of these real numbers that is, P = p, p, Á, p n Then P

More information

Pivoting. Reading: GV96 Section 3.4, Stew98 Chapter 3: 1.3

Pivoting. Reading: GV96 Section 3.4, Stew98 Chapter 3: 1.3 Pivoting Reading: GV96 Section 3.4, Stew98 Chapter 3: 1.3 In the previous discussions we have assumed that the LU factorization of A existed and the various versions could compute it in a stable manner.

More information

Introduction - Motivation. Many phenomena (physical, chemical, biological, etc.) are model by differential equations. f f(x + h) f(x) (x) = lim

Introduction - Motivation. Many phenomena (physical, chemical, biological, etc.) are model by differential equations. f f(x + h) f(x) (x) = lim Introduction - Motivation Many phenomena (physical, chemical, biological, etc.) are model by differential equations. Recall the definition of the derivative of f(x) f f(x + h) f(x) (x) = lim. h 0 h Its

More information

Lecture 5. Linear Systems. Gauss Elimination. Lecture in Numerical Methods from 24. March 2015 UVT. Lecture 5. Linear Systems. show Practical Problem.

Lecture 5. Linear Systems. Gauss Elimination. Lecture in Numerical Methods from 24. March 2015 UVT. Lecture 5. Linear Systems. show Practical Problem. Linear Systems. Gauss Elimination. Lecture in Numerical Methods from 24. March 2015 FIGURE A S UVT Agenda of today s lecture 1 Linear Systems. 2 Gauss Elimination FIGURE A S 1.2 We ve got a problem 210

More information

Solution of Linear Equations

Solution of Linear Equations Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass

More information

Chapter 4 No. 4.0 Answer True or False to the following. Give reasons for your answers.

Chapter 4 No. 4.0 Answer True or False to the following. Give reasons for your answers. MATH 434/534 Theoretical Assignment 3 Solution Chapter 4 No 40 Answer True or False to the following Give reasons for your answers If a backward stable algorithm is applied to a computational problem,

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra Direct Methods Philippe B. Laval KSU Fall 2017 Philippe B. Laval (KSU) Linear Systems: Direct Solution Methods Fall 2017 1 / 14 Introduction The solution of linear systems is one

More information

The following steps will help you to record your work and save and submit it successfully.

The following steps will help you to record your work and save and submit it successfully. MATH 22AL Lab # 4 1 Objectives In this LAB you will explore the following topics using MATLAB. Properties of invertible matrices. Inverse of a Matrix Explore LU Factorization 2 Recording and submitting

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 12: Gaussian Elimination and LU Factorization Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 10 Gaussian Elimination

More information

Example: 2x y + 3z = 1 5y 6z = 0 x + 4z = 7. Definition: Elementary Row Operations. Example: Type I swap rows 1 and 3

Example: 2x y + 3z = 1 5y 6z = 0 x + 4z = 7. Definition: Elementary Row Operations. Example: Type I swap rows 1 and 3 Linear Algebra Row Reduced Echelon Form Techniques for solving systems of linear equations lie at the heart of linear algebra. In high school we learn to solve systems with or variables using elimination

More information

Linear Equations and Matrix

Linear Equations and Matrix 1/60 Chia-Ping Chen Professor Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Gaussian Elimination 2/60 Alpha Go Linear algebra begins with a system of linear

More information

Iterative Solvers. Lab 6. Iterative Methods

Iterative Solvers. Lab 6. Iterative Methods Lab 6 Iterative Solvers Lab Objective: Many real-world problems of the form Ax = b have tens of thousands of parameters Solving such systems with Gaussian elimination or matrix factorizations could require

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Chapter 7 Iterative Techniques in Matrix Algebra

Chapter 7 Iterative Techniques in Matrix Algebra Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition

More information

Chapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations

Chapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations Chapter 1: Systems of linear equations and matrices Section 1.1: Introduction to systems of linear equations Definition: A linear equation in n variables can be expressed in the form a 1 x 1 + a 2 x 2

More information

Matrix notation. A nm : n m : size of the matrix. m : no of columns, n: no of rows. Row matrix n=1 [b 1, b 2, b 3,. b m ] Column matrix m=1

Matrix notation. A nm : n m : size of the matrix. m : no of columns, n: no of rows. Row matrix n=1 [b 1, b 2, b 3,. b m ] Column matrix m=1 Matrix notation A nm : n m : size of the matrix m : no of columns, n: no of rows Row matrix n=1 [b 1, b 2, b 3,. b m ] Column matrix m=1 n = m square matrix Symmetric matrix Upper triangular matrix: matrix

More information

MATH2210 Notebook 2 Spring 2018

MATH2210 Notebook 2 Spring 2018 MATH2210 Notebook 2 Spring 2018 prepared by Professor Jenny Baglivo c Copyright 2009 2018 by Jenny A. Baglivo. All Rights Reserved. 2 MATH2210 Notebook 2 3 2.1 Matrices and Their Operations................................

More information

Chapter 2 - Linear Equations

Chapter 2 - Linear Equations Chapter 2 - Linear Equations 2. Solving Linear Equations One of the most common problems in scientific computing is the solution of linear equations. It is a problem in its own right, but it also occurs

More information

4.2 Floating-Point Numbers

4.2 Floating-Point Numbers 101 Approximation 4.2 Floating-Point Numbers 4.2 Floating-Point Numbers The number 3.1416 in scientific notation is 0.31416 10 1 or (as computer output) -0.31416E01..31416 10 1 exponent sign mantissa base

More information

CPE 310: Numerical Analysis for Engineers

CPE 310: Numerical Analysis for Engineers CPE 310: Numerical Analysis for Engineers Chapter 2: Solving Sets of Equations Ahmed Tamrawi Copyright notice: care has been taken to use only those web images deemed by the instructor to be in the public

More information

Solving Systems of Linear Equations

Solving Systems of Linear Equations Motivation Solving Systems of Linear Equations The idea behind Googles pagerank An example from economics Gaussian elimination LU Decomposition Iterative Methods The Jacobi method Summary Motivation Systems

More information

Engineering Computation

Engineering Computation Engineering Computation Systems of Linear Equations_1 1 Learning Objectives for Lecture 1. Motivate Study of Systems of Equations and particularly Systems of Linear Equations. Review steps of Gaussian

More information

Linear Algebra Linear Algebra : Matrix decompositions Monday, February 11th Math 365 Week #4

Linear Algebra Linear Algebra : Matrix decompositions Monday, February 11th Math 365 Week #4 Linear Algebra Linear Algebra : Matrix decompositions Monday, February 11th Math Week # 1 Saturday, February 1, 1 Linear algebra Typical linear system of equations : x 1 x +x = x 1 +x +9x = 0 x 1 +x x

More information

Systems of Linear Equations

Systems of Linear Equations Systems of Linear Equations Last time, we found that solving equations such as Poisson s equation or Laplace s equation on a grid is equivalent to solving a system of linear equations. There are many other

More information

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012. Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems

More information

Introduction to PDEs and Numerical Methods Lecture 7. Solving linear systems

Introduction to PDEs and Numerical Methods Lecture 7. Solving linear systems Platzhalter für Bild, Bild auf Titelfolie hinter das Logo einsetzen Introduction to PDEs and Numerical Methods Lecture 7. Solving linear systems Dr. Noemi Friedman, 09.2.205. Reminder: Instationary heat

More information

MA3232 Numerical Analysis Week 9. James Cooley (1926-)

MA3232 Numerical Analysis Week 9. James Cooley (1926-) MA umerical Analysis Week 9 James Cooley (96-) James Cooley is an American mathematician. His most significant contribution to the world of mathematics and digital signal processing is the Fast Fourier

More information