Lecture 5. Linear Systems. Gauss Elimination. Lecture in Numerical Methods, 24 March 2015, UVT.


Agenda of today's lecture:
1. Linear Systems
2. Gauss Elimination

We've got a problem

Three bungee jumpers are connected by bungee cords. After they are released and gravity takes hold, the cords stretch (Figure 8.1: (a) unstretched, (b) stretched). Compute the displacement of each of the jumpers, treating each cord as a linear spring.

Using Newton's second law and Hooke's law, a force balance can be written for each jumper (Figure 8.2, free-body diagrams):

m_1 d²x_1/dt² = m_1 g + k_2 (x_2 - x_1) - k_1 x_1
m_2 d²x_2/dt² = m_2 g + k_3 (x_3 - x_2) + k_2 (x_1 - x_2)        (8.1)
m_3 d²x_3/dt² = m_3 g + k_3 (x_2 - x_3)

where m_i = the mass of jumper i (kg), t = time (s), k_j = the spring constant for cord j (N/m), x_i = the displacement of jumper i measured downward from its equilibrium position (m), and g = gravitational acceleration (9.81 m/s²).

Because we are interested in the steady-state solution, the second derivatives can be set to zero. Collecting terms gives

(k_1 + k_2) x_1 - k_2 x_2             = m_1 g
-k_2 x_1 + (k_2 + k_3) x_2 - k_3 x_3  = m_2 g
          -k_3 x_2 + k_3 x_3          = m_3 g

Thus, the problem reduces to solving a system of three simultaneous equations for the three unknown displacements. Because we have used a linear law for the cords, these are linear algebraic equations. We can write this as Ax = b.

How do we solve such problems?
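As a quick sketch (not part of the lecture), the steady-state system above can be assembled and handed to a library solver. The masses and spring constants below are made-up illustration values, not data from the slides:

```python
import numpy as np

# Hypothetical parameters for illustration (NOT values from the lecture):
g = 9.81                       # gravitational acceleration, m/s^2
m = [60.0, 70.0, 80.0]         # masses m1, m2, m3 in kg
k = [50.0, 100.0, 50.0]        # spring constants k1, k2, k3 in N/m

# The steady-state force balance, written as A x = b
A = np.array([[k[0] + k[1], -k[1],         0.0 ],
              [-k[1],        k[1] + k[2], -k[2]],
              [ 0.0,        -k[2],         k[2]]])
b = g * np.array(m)

x = np.linalg.solve(A, b)      # displacements x1, x2, x3 in metres
print(x)
```

The lowest jumper displaces the most, as expected, since every cord above it stretches.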

For few equations:
- substitution
- elimination of variables
- Cramer's rule
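For instance, Cramer's rule replaces each column of A in turn with the right-hand side and takes determinant ratios. A minimal Python sketch (the helper name `cramer_solve` is illustrative, not from the lecture):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule (sensible only for very small n)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("matrix is (numerically) singular")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b               # replace column i with the RHS
        x[i] = np.linalg.det(Ai) / d
    return x

# 2x2 example: x + y = 3, x - y = 1  ->  x = 2, y = 1
print(cramer_solve([[1, 1], [1, -1]], [3, 1]))
```

Each determinant costs as much as an elimination, so this only pays off for tiny systems.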

Summing up: the equations are manipulated so as to result in one equation with one unknown; consequently, that equation is solved directly and the result is substituted back into the original equations to solve for the remaining unknowns.

This is the basis for an algorithm that works for large systems:
- eliminate unknowns
- substitute back

Gauss elimination is the most basic of these schemes. This section includes the systematic techniques for forward elimination and back substitution that comprise Gauss elimination. Although these techniques are ideally suited for implementation on computers, some modifications are required to obtain a reliable algorithm. In particular, the computer program must avoid division by zero. The following method is called naive Gauss elimination because it does not avoid this problem. Section 9.3 deals with the additional features required for an effective computer program.

Naive because it does not avoid division by zero!

The approach is designed to solve a general set of n equations:

a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + a_23 x_3 + ... + a_2n x_n = b_2
...
a_n1 x_1 + a_n2 x_2 + a_n3 x_3 + ... + a_nn x_n = b_n

Elimination of unknowns: first reduce the system matrix to an upper triangular form. Primes indicate how many times an element has been modified (a double prime means modified twice); the procedure is continued using the remaining pivot equations. The final manipulation in the sequence is to use the (n-1)th equation to eliminate the x_{n-1} term from the nth equation. At this point, the system has been transformed to an upper triangular system:

a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1            (9.11a)
           a'_22 x_2 + a'_23 x_3 + ... + a'_2n x_n = b'_2        (9.11b)
                       a''_33 x_3 + ... + a''_3n x_n = b''_3     (9.11c)
                                   ...
                             a^(n-1)_nn x_n = b^(n-1)_n          (9.11d)

Substitute back: equation (9.11d) can now be solved for x_n:

x_n = b^(n-1)_n / a^(n-1)_nn

This result can be back-substituted into the (n-1)th equation to solve for x_{n-1}. The procedure, which is repeated to evaluate the remaining x's, can be represented by the following formula:

x_i = ( b^(i-1)_i - Σ_{j=i+1}^{n} a^(i-1)_ij x_j ) / a^(i-1)_ii,    i = n-1, ..., 1.
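The two phases can be sketched in a few lines of Python (the lecture's own listing is in Mathematica; `naive_gauss` is an illustrative name, and this version deliberately does no pivoting):

```python
import numpy as np

def naive_gauss(A, b):
    """Naive Gauss elimination: forward elimination to upper triangular
    form, then back substitution. No pivoting, so a zero (or tiny) pivot
    A[k, k] will fail -- exactly the weakness discussed later."""
    A = np.array(A, dtype=float)   # work on copies
    b = np.array(b, dtype=float)
    n = len(b)
    # Forward elimination
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]
    # Back substitution
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```

Each loop line maps directly onto one formula: the `factor` line is the division that creates the primed coefficients, and the last loop is formula (9.11d) plus the back-substitution sum.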

Example

Use Gauss elimination to solve:

3 x_1 - 0.1 x_2 - 0.2 x_3 = 7.85
0.1 x_1 + 7 x_2 - 0.3 x_3 = -19.3
0.3 x_1 - 0.2 x_2 + 10 x_3 = 71.4

Exact solution: x_1 = 3, x_2 = -2.5, x_3 = 7.
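A one-line numerical check of the stated exact solution, using a library solver for comparison (a sketch, not the lecture's method):

```python
import numpy as np

# The example system from the slide, in matrix form A x = b
A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])

x = np.linalg.solve(A, b)
print(x)                      # approximately [3, -2.5, 7]
assert np.allclose(x, [3.0, -2.5, 7.0])
```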

Counting the operations

Execution time depends on the number of floating-point operations (flops). Performing an addition/subtraction costs about the same as a multiplication/division. Summing up the number of flops is important in understanding:
- which are the most time-consuming parts;
- how computation time increases as systems get larger.

Gauss elimination algorithm (Mathematica, reconstructed from the garbled listing; the missing entries of A and b are filled in from the example system above):

SquareMatrixQ = MatrixQ[#] && Equal @@ Dimensions[#] &;

Gauss[A_?SquareMatrixQ, b_?ArrayQ] := Module[{nb, Aug, n, factor, aux, x, k, i},
  If[Length[A] != Length[b],
    Print["Error: dimensions of A and b do not correspond"]; Abort[]];
  n = Length[A];
  nb = n + 1;
  Aug = MapThread[Append, {A, b}];  (* or Join[A, Transpose[{b}], 2] *)
  (* forward elimination *)
  For[k = 1, k <= n - 1, k++,
    For[i = k + 1, i <= n, i++,
      factor = Aug[[i, k]]/Aug[[k, k]];
      Aug[[i, k ;; nb]] = Aug[[i, k ;; nb]] - factor Aug[[k, k ;; nb]]]];
  (* back substitution *)
  x = {Aug[[n, nb]]/Aug[[n, n]]};
  For[i = n - 1, i >= 1, i--,
    aux = (Aug[[i, nb]] - Aug[[i, i + 1 ;; n]].x)/Aug[[i, i]];
    x = Prepend[x, aux]];
  x]

A = {{3, -0.1, -0.2}, {0.1, 7, -0.3}, {0.3, -0.2, 10}};
b = {7.85, -19.3, 71.4};
Gauss[A, b]  (* yields approximately {3., -2.5, 7.} *)

Naive Gauss: counting operations

Outer loop: n - 1 iterations (k = 1, ..., n - 1).

Inner loop: on pass k it runs from i = k + 1 to n; for the first pass (k = 1) this means

Σ_{i=2}^{n} 1 = n - 2 + 1 = n - 1 iterations.    (9.15)

For every one of these iterations, there is one division to calculate the factor. The next line then performs a multiplication and a subtraction for each column element from 2 to nb. Because nb = n + 1, going from 2 to nb results in n multiplications and n subtractions. Together with the single division, this amounts to n + 1 multiplications/divisions and n additions/subtractions for every iteration of the inner loop. The total for the first pass through the outer loop is therefore (n - 1)(n + 1) multiplications/divisions and (n - 1)(n) additions/subtractions.

Similar reasoning can be used to estimate the flops for the subsequent iterations of the outer loop. These can be summarized as:

Outer Loop k | Inner Loop i | Addition/Subtraction Flops | Multiplication/Division Flops
1            | 2, n         | (n - 1)(n)                 | (n - 1)(n + 1)
2            | 3, n         | (n - 2)(n - 1)             | (n - 2)(n)
...          | ...          | ...                        | ...
k            | k + 1, n     | (n - k)(n + 1 - k)         | (n - k)(n + 2 - k)
...          | ...          | ...                        | ...
n - 1        | n, n         | (1)(2)                     | (1)(3)

Therefore, the total addition/subtraction flops for elimination can be computed as

Σ_{k=1}^{n-1} (n - k)(n + 1 - k) = Σ_{k=1}^{n-1} [n(n + 1) - k(2n + 1) + k²]    (9.16)

Naive Gauss: counting operations

Total addition/subtraction flops for elimination:

Σ_{k=1}^{n-1} (n - k)(n + 1 - k) = n³/3 + O(n).

For multiplication/division flops: n³/3 + O(n²).

Remark: as n gets large, the lower-order terms become negligible!

Thus, the total number of flops for elimination is equal to 2n³/3 plus an additional component proportional to terms of order n² and lower. The result is written in this way because, as n gets large, the O(n²) and lower terms become negligible. We are therefore justified in concluding that for large n, the effort involved in forward elimination converges on 2n³/3.

For elimination: 2n³/3 + O(n²).

Because only a single loop is used, back substitution is much simpler to evaluate: the number of addition/subtraction flops is n(n - 1)/2. Because of the extra division prior to the loop, the number of multiplication/division flops is n(n + 1)/2. These can be added to arrive at a total of n² + O(n).

For substituting back, total number of flops: n² + O(n).

Thus, the total effort in naive Gauss elimination can be represented as

2n³/3 + O(n²) [forward elimination] + n² + O(n) [back substitution] → 2n³/3 + O(n²) as n increases.

Two useful general conclusions can be drawn from this analysis:
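The counts above can be checked numerically: summing the table's per-pass flops and comparing against 2n³/3 (a small sketch; `elimination_flops` is an illustrative helper, not from the lecture):

```python
def elimination_flops(n):
    """Exact flop count of naive forward elimination, from the table:
    sum over k of (n-k)(n+1-k) add/sub plus (n-k)(n+2-k) mult/div."""
    addsub = sum((n - k) * (n + 1 - k) for k in range(1, n))
    multdiv = sum((n - k) * (n + 2 - k) for k in range(1, n))
    return addsub + multdiv

for n in (10, 100, 1000):
    print(n, elimination_flops(n), 2 * n**3 / 3)
```

The ratio between the exact count and 2n³/3 tends to 1 as n grows, and multiplying n by 10 multiplies the count by roughly 1000, i.e. three orders of magnitude per order of magnitude in n.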

Naive Gauss: counting operations

Conclusions:
- as the system gets larger, the computation time increases greatly: the amount of flops increases nearly three orders of magnitude for every order of magnitude increase in the number of equations;
- most of the effort is incurred in the elimination step, so efforts to make the method more efficient should probably focus on this step.

How does naive Gauss elimination work for

        2 x_2 + 3 x_3 = 8
4 x_1 + 6 x_2 + 7 x_3 = 3
2 x_1 - 3 x_2 + 6 x_3 = 5 ?

The first step would involve a division by the pivot a_11 = 0.

Remark: problems arise also when the pivot element (the element in Gauss elimination that we're dividing by) is close to zero: roundoff errors are introduced!
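A short sketch of the failure (assuming the zero-pivot system above; the check uses NumPy's floating-point error handling, which the lecture does not mention):

```python
import numpy as np

A = np.array([[0.0,  2.0, 3.0],
              [4.0,  6.0, 7.0],
              [2.0, -3.0, 6.0]])
b = np.array([8.0, 3.0, 5.0])

# Naive Gauss would start with factor = A[1,0] / A[0,0] -- but a11 = 0.
pivot_zero = False
with np.errstate(divide="raise"):
    try:
        factor = A[1, 0] / A[0, 0]
    except FloatingPointError:
        pivot_zero = True
print("naive first step divides by zero:", pivot_zero)

# A library solver, which pivots internally, has no trouble:
x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)
```

The matrix itself is perfectly nonsingular; only the *order* of the equations defeats the naive algorithm.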

Partial and complete pivoting

Solution: before each row is worked, determine the coefficient with the largest absolute value in the column below the pivot element; switch rows so that the largest element becomes the pivot element. This method is called partial pivoting.

If both columns and rows are searched for the largest element, which is then switched into the pivot position, the method is called complete pivoting. Complete pivoting is rarely used, because most of the improvement comes from partial pivoting alone.
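Partial pivoting adds only a search and a row swap to the naive algorithm. A minimal sketch (`gauss_pivot` is an illustrative name, not the lecture's code):

```python
import numpy as np

def gauss_pivot(A, b):
    """Gauss elimination with partial pivoting: before eliminating
    column k, swap in the row with the largest |A[i, k]| for i >= k."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    for k in range(n - 1):
        # partial pivoting: locate the largest candidate pivot in column k
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# The zero-pivot system from the previous slide is now handled:
A = [[0, 2, 3], [4, 6, 7], [2, -3, 6]]
b = [8, 3, 5]
x = gauss_pivot(A, b)
print(x)
```

The extra cost is O(n²) comparisons in total, negligible next to the O(n³) elimination work.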

An example

Use Gauss elimination to solve

0.0003 x_1 + 3.0000 x_2 = 2.0001
1.0000 x_1 + 1.0000 x_2 = 1.0000

Repeat the computation with partial pivoting. The exact solution is x_1 = 1/3, x_2 = 2/3.

Due to subtractive cancellation, the result is very sensitive to the number of significant figures carried in the computation. With naive Gauss elimination (pivot 0.0003), elimination and substitution yield x_2 = 2/3, and x_1 is then computed from the first equation as

x_1 = (2.0001 - 3(2/3)) / 0.0003    (E9.4.1)

Note how the solution for x_1 is highly dependent on the number of significant figures. This is because in Eq. (E9.4.1) we are subtracting two almost-equal numbers:

Significant Figures | x_2       | x_1       | Absolute Value of Percent Relative Error for x_1
3                   | 0.667     | -3.33     | 1099
4                   | 0.6667    | 0.0000    | 100
5                   | 0.66667   | 0.30000   | 10
6                   | 0.666667  | 0.330000  | 1
7                   | 0.6666667 | 0.3330000 | 0.1

On the other hand, if the equations are solved in reverse order, the row with the larger pivot element is normalized. The equations are

1.0000 x_1 + 1.0000 x_2 = 1.0000
0.0003 x_1 + 3.0000 x_2 = 2.0001

Elimination and substitution again yield x_2 = 2/3. For different numbers of significant figures, x_1 can be computed from the first equation, as in

x_1 = (1 - (2/3)) / 1

This case is much less sensitive to the number of significant figures in the computation:

Significant Figures | x_2       | x_1       | Absolute Value of Percent Relative Error for x_1
3                   | 0.667     | 0.333     | 0.1
4                   | 0.6667    | 0.3333    | 0.01
5                   | 0.66667   | 0.33333   | 0.001
6                   | 0.666667  | 0.333333  | 0.0001
7                   | 0.6666667 | 0.3333333 | 0.00001
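Decimal significant-figure arithmetic is awkward to emulate directly; as a rough stand-in (an assumption of this sketch, not the lecture's experiment), running the 2x2 elimination entirely in single precision shows the same cancellation effect, while the pivoted ordering stays accurate:

```python
import numpy as np

def gauss_2x2(a11, a12, b1, a21, a22, b2, dtype):
    """Eliminate x1 from the second equation, then back-substitute,
    carrying every intermediate value in the given float type."""
    a11, a12, b1 = dtype(a11), dtype(a12), dtype(b1)
    a21, a22, b2 = dtype(a21), dtype(a22), dtype(b2)
    factor = dtype(a21 / a11)
    a22p = dtype(a22 - factor * a12)
    b2p = dtype(b2 - factor * b1)
    x2 = dtype(b2p / a22p)
    x1 = dtype((b1 - a12 * x2) / a11)
    return float(x1), float(x2)

# small pivot first (naive order) vs. large pivot first (pivoted order)
naive = gauss_2x2(0.0003, 3.0, 2.0001, 1.0, 1.0, 1.0, np.float32)
pivot = gauss_2x2(1.0, 1.0, 1.0, 0.0003, 3.0, 2.0001, np.float32)
print("naive x1 error:", abs(naive[0] - 1/3))
print("pivot x1 error:", abs(pivot[0] - 1/3))
```

The naive ordering loses several digits of x_1 to cancellation, while the pivoted ordering recovers x_1 essentially to single-precision accuracy.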

Determinant evaluation with Gauss elimination

For a triangular matrix (a matrix having zeros either above or below the main diagonal),

det A = a_11 a_22 a_33 ... a_nn.

For naive Gauss elimination:

det A = a_11 a'_22 a''_33 ... a^(n-1)_nn.

For Gauss elimination with partial pivoting:

det A = (-1)^p a_11 a'_22 a''_33 ... a^(n-1)_nn,

where p is the number of times that rows are pivoted.
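The formula with pivoting can be sketched directly: run the elimination, track the row swaps, and take the signed product of the pivots (`det_gauss` is an illustrative helper, not from the lecture):

```python
import numpy as np

def det_gauss(A):
    """Determinant via Gauss elimination with partial pivoting:
    product of the pivots, times (-1)^p for p row swaps."""
    A = np.array(A, dtype=float)
    n = len(A)
    sign = 1.0
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            sign = -sign               # each row swap flips the sign
        for i in range(k + 1, n):
            A[i, k:] -= (A[i, k] / A[k, k]) * A[k, k:]
    return sign * np.prod(np.diag(A))

A = [[0, 2, 3], [4, 6, 7], [2, -3, 6]]
print(det_gauss(A), np.linalg.det(A))   # the two should agree
```

Note that the sign flips are essential: without the (-1)^p factor, the zero-pivot example above would come out with the wrong sign.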


More information

Solving Consistent Linear Systems

Solving Consistent Linear Systems Solving Consistent Linear Systems Matrix Notation An augmented matrix of a system consists of the coefficient matrix with an added column containing the constants from the right sides of the equations.

More information

EAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science

EAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science EAD 115 Numerical Solution of Engineering and Scientific Problems David M. Rocke Department of Applied Science Taylor s Theorem Can often approximate a function by a polynomial The error in the approximation

More information

The Solution of Linear Systems AX = B

The Solution of Linear Systems AX = B Chapter 2 The Solution of Linear Systems AX = B 21 Upper-triangular Linear Systems We will now develop the back-substitution algorithm, which is useful for solving a linear system of equations that has

More information

Solution of Linear systems

Solution of Linear systems Solution of Linear systems Direct Methods Indirect Methods -Elimination Methods -Inverse of a matrix -Cramer s Rule -LU Decomposition Iterative Methods 2 A x = y Works better for coefficient matrices with

More information

Chapter 4 No. 4.0 Answer True or False to the following. Give reasons for your answers.

Chapter 4 No. 4.0 Answer True or False to the following. Give reasons for your answers. MATH 434/534 Theoretical Assignment 3 Solution Chapter 4 No 40 Answer True or False to the following Give reasons for your answers If a backward stable algorithm is applied to a computational problem,

More information

The QR Factorization

The QR Factorization The QR Factorization How to Make Matrices Nicer Radu Trîmbiţaş Babeş-Bolyai University March 11, 2009 Radu Trîmbiţaş ( Babeş-Bolyai University) The QR Factorization March 11, 2009 1 / 25 Projectors A projector

More information

Gaussian Elimination and Back Substitution

Gaussian Elimination and Back Substitution Jim Lambers MAT 610 Summer Session 2009-10 Lecture 4 Notes These notes correspond to Sections 31 and 32 in the text Gaussian Elimination and Back Substitution The basic idea behind methods for solving

More information

Numerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization /36-725

Numerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization /36-725 Numerical Linear Algebra Primer Ryan Tibshirani Convex Optimization 10-725/36-725 Last time: proximal gradient descent Consider the problem min g(x) + h(x) with g, h convex, g differentiable, and h simple

More information

CHAPTER 6. Direct Methods for Solving Linear Systems

CHAPTER 6. Direct Methods for Solving Linear Systems CHAPTER 6 Direct Methods for Solving Linear Systems. Introduction A direct method for approximating the solution of a system of n linear equations in n unknowns is one that gives the exact solution to

More information

Hani Mehrpouyan, California State University, Bakersfield. Signals and Systems

Hani Mehrpouyan, California State University, Bakersfield. Signals and Systems Hani Mehrpouyan, Department of Electrical and Computer Engineering, Lecture 26 (LU Factorization) May 30 th, 2013 The material in these lectures is partly taken from the books: Elementary Numerical Analysis,

More information

Example: Current in an Electrical Circuit. Solving Linear Systems:Direct Methods. Linear Systems of Equations. Solving Linear Systems: Direct Methods

Example: Current in an Electrical Circuit. Solving Linear Systems:Direct Methods. Linear Systems of Equations. Solving Linear Systems: Direct Methods Example: Current in an Electrical Circuit Solving Linear Systems:Direct Methods A number of engineering problems or models can be formulated in terms of systems of equations Examples: Electrical Circuit

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra Direct Methods Philippe B. Laval KSU Fall 2017 Philippe B. Laval (KSU) Linear Systems: Direct Solution Methods Fall 2017 1 / 14 Introduction The solution of linear systems is one

More information

SOLVING LINEAR SYSTEMS

SOLVING LINEAR SYSTEMS SOLVING LINEAR SYSTEMS We want to solve the linear system a, x + + a,n x n = b a n, x + + a n,n x n = b n This will be done by the method used in beginning algebra, by successively eliminating unknowns

More information

Computational Methods. Systems of Linear Equations

Computational Methods. Systems of Linear Equations Computational Methods Systems of Linear Equations Manfred Huber 2010 1 Systems of Equations Often a system model contains multiple variables (parameters) and contains multiple equations Multiple equations

More information

MTH 464: Computational Linear Algebra

MTH 464: Computational Linear Algebra MTH 464: Computational Linear Algebra Lecture Outlines Exam 2 Material Prof. M. Beauregard Department of Mathematics & Statistics Stephen F. Austin State University March 2, 2018 Linear Algebra (MTH 464)

More information

Today s class. Linear Algebraic Equations LU Decomposition. Numerical Methods, Fall 2011 Lecture 8. Prof. Jinbo Bi CSE, UConn

Today s class. Linear Algebraic Equations LU Decomposition. Numerical Methods, Fall 2011 Lecture 8. Prof. Jinbo Bi CSE, UConn Today s class Linear Algebraic Equations LU Decomposition 1 Linear Algebraic Equations Gaussian Elimination works well for solving linear systems of the form: AX = B What if you have to solve the linear

More information

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education MTH 3 Linear Algebra Study Guide Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education June 3, ii Contents Table of Contents iii Matrix Algebra. Real Life

More information

22A-2 SUMMER 2014 LECTURE 5

22A-2 SUMMER 2014 LECTURE 5 A- SUMMER 0 LECTURE 5 NATHANIEL GALLUP Agenda Elimination to the identity matrix Inverse matrices LU factorization Elimination to the identity matrix Previously, we have used elimination to get a system

More information

Direct Methods for solving Linear Equation Systems

Direct Methods for solving Linear Equation Systems REVIEW Lecture 5: Systems of Linear Equations Spring 2015 Lecture 6 Direct Methods for solving Linear Equation Systems Determinants and Cramer s Rule Gauss Elimination Algorithm Forward Elimination/Reduction

More information

Math 304 (Spring 2010) - Lecture 2

Math 304 (Spring 2010) - Lecture 2 Math 304 (Spring 010) - Lecture Emre Mengi Department of Mathematics Koç University emengi@ku.edu.tr Lecture - Floating Point Operation Count p.1/10 Efficiency of an algorithm is determined by the total

More information

Linear equations The first case of a linear equation you learn is in one variable, for instance:

Linear equations The first case of a linear equation you learn is in one variable, for instance: Math 52 0 - Linear algebra, Spring Semester 2012-2013 Dan Abramovich Linear equations The first case of a linear equation you learn is in one variable, for instance: 2x = 5. We learned in school that this

More information

Lectures on Linear Algebra for IT

Lectures on Linear Algebra for IT Lectures on Linear Algebra for IT by Mgr. Tereza Kovářová, Ph.D. following content of lectures by Ing. Petr Beremlijski, Ph.D. Department of Applied Mathematics, VSB - TU Ostrava Czech Republic 2. Systems

More information

Computational Fluid Dynamics Prof. Sreenivas Jayanti Department of Computer Science and Engineering Indian Institute of Technology, Madras

Computational Fluid Dynamics Prof. Sreenivas Jayanti Department of Computer Science and Engineering Indian Institute of Technology, Madras Computational Fluid Dynamics Prof. Sreenivas Jayanti Department of Computer Science and Engineering Indian Institute of Technology, Madras Lecture 46 Tri-diagonal Matrix Algorithm: Derivation In the last

More information

JACOBI S ITERATION METHOD

JACOBI S ITERATION METHOD ITERATION METHODS These are methods which compute a sequence of progressively accurate iterates to approximate the solution of Ax = b. We need such methods for solving many large linear systems. Sometimes

More information

COURSE Numerical methods for solving linear systems. Practical solving of many problems eventually leads to solving linear systems.

COURSE Numerical methods for solving linear systems. Practical solving of many problems eventually leads to solving linear systems. COURSE 9 4 Numerical methods for solving linear systems Practical solving of many problems eventually leads to solving linear systems Classification of the methods: - direct methods - with low number of

More information

Numerical Methods Lecture 2 Simultaneous Equations

Numerical Methods Lecture 2 Simultaneous Equations Numerical Methods Lecture 2 Simultaneous Equations Topics: matrix operations solving systems of equations pages 58-62 are a repeat of matrix notes. New material begins on page 63. Matrix operations: Mathcad

More information

Solving Systems of Linear Equations

Solving Systems of Linear Equations LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how

More information

MA2501 Numerical Methods Spring 2015

MA2501 Numerical Methods Spring 2015 Norwegian University of Science and Technology Department of Mathematics MA2501 Numerical Methods Spring 2015 Solutions to exercise set 3 1 Attempt to verify experimentally the calculation from class that

More information

2.29 Numerical Fluid Mechanics Fall 2011 Lecture 7

2.29 Numerical Fluid Mechanics Fall 2011 Lecture 7 Numerical Fluid Mechanics Fall 2011 Lecture 7 REVIEW of Lecture 6 Material covered in class: Differential forms of conservation laws Material Derivative (substantial/total derivative) Conservation of Mass

More information

Algebra & Trig. I. For example, the system. x y 2 z. may be represented by the augmented matrix

Algebra & Trig. I. For example, the system. x y 2 z. may be represented by the augmented matrix Algebra & Trig. I 8.1 Matrix Solutions to Linear Systems A matrix is a rectangular array of elements. o An array is a systematic arrangement of numbers or symbols in rows and columns. Matrices (the plural

More information

The purpose of computing is insight, not numbers. Richard Wesley Hamming

The purpose of computing is insight, not numbers. Richard Wesley Hamming Systems of Linear Equations The purpose of computing is insight, not numbers. Richard Wesley Hamming Fall 2010 1 Topics to Be Discussed This is a long unit and will include the following important topics:

More information

Numerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization

Numerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization Numerical Linear Algebra Primer Ryan Tibshirani Convex Optimization 10-725 Consider Last time: proximal Newton method min x g(x) + h(x) where g, h convex, g twice differentiable, and h simple. Proximal

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 2: Direct Methods PD Dr.

More information

Solution of Matrix Eigenvalue Problem

Solution of Matrix Eigenvalue Problem Outlines October 12, 2004 Outlines Part I: Review of Previous Lecture Part II: Review of Previous Lecture Outlines Part I: Review of Previous Lecture Part II: Standard Matrix Eigenvalue Problem Other Forms

More information

Linear Algebra Linear Algebra : Matrix decompositions Monday, February 11th Math 365 Week #4

Linear Algebra Linear Algebra : Matrix decompositions Monday, February 11th Math 365 Week #4 Linear Algebra Linear Algebra : Matrix decompositions Monday, February 11th Math Week # 1 Saturday, February 1, 1 Linear algebra Typical linear system of equations : x 1 x +x = x 1 +x +9x = 0 x 1 +x x

More information

A Review of Linear Algebra

A Review of Linear Algebra A Review of Linear Algebra Gerald Recktenwald Portland State University Mechanical Engineering Department gerry@me.pdx.edu These slides are a supplement to the book Numerical Methods with Matlab: Implementations

More information

Numerical Methods I Solving Square Linear Systems: GEM and LU factorization

Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 18th,

More information

Numerical Methods I: Numerical linear algebra

Numerical Methods I: Numerical linear algebra 1/3 Numerical Methods I: Numerical linear algebra Georg Stadler Courant Institute, NYU stadler@cimsnyuedu September 1, 017 /3 We study the solution of linear systems of the form Ax = b with A R n n, x,

More information

NUMERICAL MATHEMATICS & COMPUTING 7th Edition

NUMERICAL MATHEMATICS & COMPUTING 7th Edition NUMERICAL MATHEMATICS & COMPUTING 7th Edition Ward Cheney/David Kincaid c UT Austin Engage Learning: Thomson-Brooks/Cole wwwengagecom wwwmautexasedu/cna/nmc6 October 16, 2011 Ward Cheney/David Kincaid

More information

Dense LU factorization and its error analysis

Dense LU factorization and its error analysis Dense LU factorization and its error analysis Laura Grigori INRIA and LJLL, UPMC February 2016 Plan Basis of floating point arithmetic and stability analysis Notation, results, proofs taken from [N.J.Higham,

More information

Computational Techniques Prof. Sreenivas Jayanthi. Department of Chemical Engineering Indian institute of Technology, Madras

Computational Techniques Prof. Sreenivas Jayanthi. Department of Chemical Engineering Indian institute of Technology, Madras Computational Techniques Prof. Sreenivas Jayanthi. Department of Chemical Engineering Indian institute of Technology, Madras Module No. # 05 Lecture No. # 24 Gauss-Jordan method L U decomposition method

More information

5. Direct Methods for Solving Systems of Linear Equations. They are all over the place...

5. Direct Methods for Solving Systems of Linear Equations. They are all over the place... 5 Direct Methods for Solving Systems of Linear Equations They are all over the place Miriam Mehl: 5 Direct Methods for Solving Systems of Linear Equations They are all over the place, December 13, 2012

More information

Chapter 2. Solving Systems of Equations. 2.1 Gaussian elimination

Chapter 2. Solving Systems of Equations. 2.1 Gaussian elimination Chapter 2 Solving Systems of Equations A large number of real life applications which are resolved through mathematical modeling will end up taking the form of the following very simple looking matrix

More information

LU Factorization. LU factorization is the most common way of solving linear systems! Ax = b LUx = b

LU Factorization. LU factorization is the most common way of solving linear systems! Ax = b LUx = b AM 205: lecture 7 Last time: LU factorization Today s lecture: Cholesky factorization, timing, QR factorization Reminder: assignment 1 due at 5 PM on Friday September 22 LU Factorization LU factorization

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra R. J. Renka Department of Computer Science & Engineering University of North Texas 02/03/2015 Notation and Terminology R n is the Euclidean n-dimensional linear space over the

More information

GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511)

GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511) GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511) D. ARAPURA Gaussian elimination is the go to method for all basic linear classes including this one. We go summarize the main ideas. 1.

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 7: More on Householder Reflectors; Least Squares Problems Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 15 Outline

More information

Solving Dense Linear Systems I

Solving Dense Linear Systems I Solving Dense Linear Systems I Solving Ax = b is an important numerical method Triangular system: [ l11 l 21 if l 11, l 22 0, ] [ ] [ ] x1 b1 = l 22 x 2 b 2 x 1 = b 1 /l 11 x 2 = (b 2 l 21 x 1 )/l 22 Chih-Jen

More information

Linear System of Equations

Linear System of Equations Linear System of Equations Linear systems are perhaps the most widely applied numerical procedures when real-world situation are to be simulated. Example: computing the forces in a TRUSS. F F 5. 77F F.

More information

Solving Linear Systems

Solving Linear Systems Solving Linear Systems Iterative Solutions Methods Philippe B. Laval KSU Fall 207 Philippe B. Laval (KSU) Linear Systems Fall 207 / 2 Introduction We continue looking how to solve linear systems of the

More information

LECTURE 12: SOLUTIONS TO SIMULTANEOUS LINEAR EQUATIONS. Prof. N. Harnew University of Oxford MT 2012

LECTURE 12: SOLUTIONS TO SIMULTANEOUS LINEAR EQUATIONS. Prof. N. Harnew University of Oxford MT 2012 LECTURE 12: SOLUTIONS TO SIMULTANEOUS LINEAR EQUATIONS Prof. N. Harnew University of Oxford MT 2012 1 Outline: 12. SOLUTIONS TO SIMULTANEOUS LINEAR EQUATIONS 12.1 Methods used to solve for unique solution

More information

Solving Linear Systems of Equations

Solving Linear Systems of Equations Solving Linear Systems of Equations Gerald Recktenwald Portland State University Mechanical Engineering Department gerry@me.pdx.edu These slides are a supplement to the book Numerical Methods with Matlab:

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 12: Gaussian Elimination and LU Factorization Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 10 Gaussian Elimination

More information

Systems of Linear Equations

Systems of Linear Equations Systems of Linear Equations Last time, we found that solving equations such as Poisson s equation or Laplace s equation on a grid is equivalent to solving a system of linear equations. There are many other

More information