
Lecture 5. Linear Systems. Gauss Elimination. Lecture in Numerical Methods, 24 March 2015, UVT.

Agenda of today's lecture: 1. Linear Systems 2. Gauss Elimination

We've got a problem

Three jumpers are connected by bungee cords (Figure 8.1: three individuals connected by bungee cords, (a) unstretched and (b) stretched). After they are released, gravity takes hold, and we want to compute the displacement of each jumper, treating each cord as a linear spring obeying Hooke's law.

Using Newton's second law, force balances can be written for each jumper (Figure 8.2: free-body diagrams):

m1 d^2x1/dt^2 = m1 g + k2 (x2 - x1) - k1 x1
m2 d^2x2/dt^2 = m2 g + k3 (x3 - x2) + k2 (x1 - x2)
m3 d^2x3/dt^2 = m3 g + k3 (x2 - x3)

where mi = the mass of jumper i (kg), t = time (s), kj = the spring constant for cord j (N/m), xi = the displacement of jumper i measured downward from the equilibrium position (m), and g = the gravitational acceleration (9.81 m/s^2). Because we are interested in the steady-state solution, the second derivatives can be set to zero. Collecting terms gives

(k1 + k2) x1 - k2 x2 = m1 g
-k2 x1 + (k2 + k3) x2 - k3 x3 = m2 g
-k3 x2 + k3 x3 = m3 g

Thus, the problem reduces to solving a system of three simultaneous equations for the three unknown displacements. Because we have used a linear law for the cords, these equations are linear algebraic equations. We can write this system as Ax = b. How do we solve such problems?
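The slide gives no numerical values, so the masses and spring constants below are assumed purely for illustration. A minimal sketch of setting up and solving the steady-state system in Mathematica:

{m1, m2, m3} = {60., 70., 80.};   (* assumed jumper masses, kg *)
{k1, k2, k3} = {50., 100., 50.};  (* assumed spring constants, N/m *)
g = 9.81;                         (* gravitational acceleration, m/s^2 *)
(* coefficient matrix and right-hand side of the steady-state equations *)
A = {{k1 + k2, -k2, 0}, {-k2, k2 + k3, -k3}, {0, -k3, k3}};
b = {m1 g, m2 g, m3 g};
LinearSolve[A, b]  (* downward displacements x1, x2, x3 in m *)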

For few equations one can use: substitution, elimination of variables, Cramer's rule.
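For a 2x2 system, for instance, Cramer's rule gives x1 = (b1 a22 - a12 b2)/det A and x2 = (a11 b2 - b1 a21)/det A. A minimal sketch in Mathematica (the function name and the test system are mine, chosen for illustration):

cramer2[{{a11_, a12_}, {a21_, a22_}}, {b1_, b2_}] :=
 With[{det = a11 a22 - a12 a21},
  {(b1 a22 - a12 b2)/det, (a11 b2 - b1 a21)/det}]

cramer2[{{2, 1}, {1, 3}}, {5, 10}]  (* -> {1, 3} *)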

Summing up: the equations are manipulated so as to result in one equation with one unknown; consequently, that equation is solved directly and the result is substituted back into the original equations to solve for the remaining unknowns. This is the basis for an algorithm that works for large systems: eliminate unknowns, then substitute back.

Naive Gauss elimination is the most basic of these schemes. It consists of the systematic techniques for forward elimination and back substitution that comprise Gauss elimination. Although these techniques are ideally suited for implementation on computers, some modifications are required to obtain a reliable algorithm; in particular, the computer program must avoid division by zero. The method is called naive Gauss elimination because it does not avoid this problem! The approach is designed to solve a general set of n equations:

a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = b2
...
an1 x1 + an2 x2 + an3 x3 + ... + ann xn = bn

Elimination of unknowns: first reduce the system to an upper triangular matrix. The first equation is used to eliminate x1 from the equations below it, the second to eliminate x2, and so on; the final manipulation in the sequence is to use the (n-1)th equation to eliminate x(n-1) from the nth equation. At this point, the system has been transformed to an upper triangular system:

a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
a22^(1) x2 + a23^(1) x3 + ... + a2n^(1) xn = b2^(1)
a33^(2) x3 + ... + a3n^(2) xn = b3^(2)
...
ann^(n-1) xn = bn^(n-1)

where the superscript indicates how many times the element has been modified (for instance, ^(2) means modified twice).

Substitute back: the last equation can now be solved for xn:

xn = bn^(n-1) / ann^(n-1).

This result can be back-substituted into the (n-1)th equation to solve for x(n-1). The procedure, repeated to evaluate the remaining x's, can be represented by the formula

xi = ( bi^(i-1) - sum_{j=i+1..n} aij^(i-1) xj ) / aii^(i-1),   i = n-1, ..., 1.
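The back-substitution formula translates almost line for line into Mathematica. A sketch of mine (assuming an upper triangular matrix U and right-hand side c are already given):

backSubstitute[U_, c_] :=
 Module[{n = Length[c], x, i},
  x = ConstantArray[0., n];
  x[[n]] = c[[n]]/U[[n, n]];  (* xn = bn^(n-1)/ann^(n-1) *)
  For[i = n - 1, i >= 1, i--,
   x[[i]] = (c[[i]] - U[[i, i + 1 ;; n]].x[[i + 1 ;; n]])/U[[i, i]]];
  x]

backSubstitute[{{2., 1.}, {0., 3.}}, {5., 6.}]  (* -> {1.5, 2.} *)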

Example. Use Gauss elimination to solve:

3x1 - 0.1x2 - 0.2x3 = 7.85
0.1x1 + 7x2 - 0.3x3 = -19.3
0.3x1 - 0.2x2 + 10x3 = 71.4

Exact solution: x1 = 3, x2 = -2.5, x3 = 7.
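Worked out by hand (not shown in this form on the slide; values rounded to six significant figures): subtracting 0.1/3 = 0.0333333 times row 1 from row 2, and 0.3/3 = 0.1 times row 1 from row 3, gives

3x1 - 0.1x2 - 0.2x3 = 7.85
7.00333x2 - 0.293333x3 = -19.5617
-0.19x2 + 10.02x3 = 70.615

and subtracting -0.19/7.00333 = -0.0271300 times row 2 from row 3 leaves 10.0120x3 = 70.0843. Back substitution then yields x3 = 70.0843/10.0120 = 7.00003, x2 = (-19.5617 + 0.293333 x3)/7.00333 = -2.50000, and x1 = (7.85 + 0.1 x2 + 0.2 x3)/3 = 3.00000.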

Counting the operations. Execution time depends on the number of floating-point operations (flops); an addition/subtraction takes about the same time as a multiplication/division. Summing up the number of flops is important for understanding which are the most time-consuming parts of the algorithm, and how the computation time increases as systems get larger.

Gauss elimination algorithm (Mathematica):

Quit[];

(* predicate testing for a square matrix *)
SquareMatrixQ = (MatrixQ[#] && Equal @@ Dimensions[#]) &;

Gauss[A_?SquareMatrixQ, b_?ArrayQ] :=
 Module[{nb, Aug, n, factor, aux, x, i, k},
  If[Length[A] != Length[b],
   Print["Error: dimensions of A and b do not correspond"]; Abort[]];
  n = Length[A]; nb = n + 1;
  (* augmented matrix [A | b] *)
  Aug = MapThread[Append, {A, b}];  (* or Join[A, Transpose[{b}], 2] *)
  (* forward elimination *)
  For[k = 1, k <= n - 1, k++,
   For[i = k + 1, i <= n, i++,
    factor = Aug[[i, k]]/Aug[[k, k]];
    Aug[[i, k ;; nb]] = Aug[[i, k ;; nb]] - factor Aug[[k, k ;; nb]]]];
  (* back substitution *)
  x = {Aug[[n, nb]]/Aug[[n, n]]};
  For[i = n - 1, i >= 1, i--,
   aux = (Aug[[i, nb]] - Aug[[i, i + 1 ;; n]].x)/Aug[[i, i]];
   x = Prepend[x, aux]];
  x]

A = {{3, -0.1, -0.2}, {0.1, 7, -0.3}, {0.3, -0.2, 10}};
b = {7.85, -19.3, 71.4};
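Applied to the example system from above (output up to machine roundoff):

Gauss[A, b]  (* -> {3., -2.5, 7.} *)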

Naive Gauss: counting operations. Outer loop: n - 1 iterations. Inner loop: one division; then, for every column element (from 2 to nb), one subtraction and one multiplication. In total: n subtractions and n + 1 multiplications/divisions per inner-loop iteration.

For the first pass (k = 1), the limits on the inner loop run from i = 2 to n, so the number of inner-loop iterations is

sum_{i=2..n} 1 = n - 2 + 1 = n - 1.

For every one of these iterations, there is one division to calculate the factor. The next line then performs a multiplication and a subtraction for each column element from 2 to nb. Because nb = n + 1, going from 2 to nb results in n multiplications and n subtractions. Together with the single division, this amounts to n + 1 multiplications/divisions and n additions/subtractions for every iteration of the inner loop. The total for the first pass through the outer loop is therefore (n - 1)(n + 1) multiplications/divisions and (n - 1)(n) additions/subtractions. Similar reasoning can be used to estimate the flops for the subsequent iterations of the outer loop, summarized as:

Outer loop k | Inner loop i | Addition/subtraction flops | Multiplication/division flops
1     | 2, n     | (n - 1)(n)         | (n - 1)(n + 1)
2     | 3, n     | (n - 2)(n - 1)     | (n - 2)(n)
...   | ...      | ...                | ...
k     | k + 1, n | (n - k)(n + 1 - k) | (n - k)(n + 2 - k)
...   | ...      | ...                | ...
n - 1 | n, n     | (1)(2)             | (1)(3)

Therefore, the total addition/subtraction flops for elimination are

sum_{k=1..n-1} (n - k)(n + 1 - k) = sum_{k=1..n-1} [n(n + 1) - k(2n + 1) + k^2] = n^3/3 + O(n),

and the multiplication/division flops similarly amount to n^3/3 + O(n^2). Remark: as n gets large, the lower-order terms become negligible!
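The closed form of the sum is easy to check symbolically:

Sum[(n - k) (n + 1 - k), {k, 1, n - 1}] // Factor
(* -> ((n - 1) n (n + 1))/3, i.e. n^3/3 + O(n) *)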

Thus, the total number of flops for elimination is 2n^3/3 plus a component proportional to terms of order n^2 and lower. The result is written in this way because as n gets large, the O(n^2) and lower terms become negligible; we are therefore justified in saying that for large n, the effort involved in forward elimination converges on 2n^3/3. Because only a single loop is used, back substitution is much simpler to evaluate: the number of addition/subtraction flops is n(n - 1)/2 and, because of the extra division prior to the loop, the number of multiplication/division flops is n(n + 1)/2. These can be added to arrive at a total of n^2 + O(n). Thus, the total effort in naive Gauss elimination can be represented as

2n^3/3 + O(n^2) [forward elimination] + n^2 + O(n) [back substitution] -> 2n^3/3 + O(n^2) as n increases.

Two useful general conclusions can be drawn from this analysis:

1. As the system gets larger, the computation time increases greatly: the amount of flops increases nearly three orders of magnitude for every order-of-magnitude increase in the number of equations.
2. Most of the effort is incurred in the elimination step, so efforts to make the method more efficient should focus on this step.
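A rough way to see the first conclusion with the Gauss function above (a sketch; the sizes are chosen arbitrarily, and the interpreted loops make the absolute times slow): doubling n should multiply the elimination time by roughly eight.

Table[
 Module[{a = RandomReal[{-1, 1}, {n, n}], rhs = RandomReal[{-1, 1}, n]},
  {n, First@AbsoluteTiming[Gauss[a, rhs]]}],  (* {size, seconds} *)
 {n, {100, 200, 400}}]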

How does naive Gauss elimination work for

2x2 + 3x3 = 8
4x1 + 6x2 + 7x3 = 3
2x1 - 3x2 + 6x3 = 5 ?

The first step would involve a division by the pivot a11 = 0. Remark: problems also arise when the pivot element (the element in Gauss elimination that we are dividing by) is merely close to zero, because roundoff errors are introduced!
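Indeed, feeding this system to the Gauss function above triggers division-by-zero (Power::infy) messages and returns Indeterminate entries:

Gauss[{{0, 2, 3}, {4, 6, 7}, {2, -3, 6}}, {8, 3, 5}]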

Partial and complete pivoting. Solution: before each row is worked on, determine the coefficient with the largest absolute value in the column below the pivot element, and switch rows so that this largest element becomes the pivot element. This method is called partial pivoting. If both columns and rows are searched for the largest element, which is then switched into the pivot position, the method is called complete pivoting. Complete pivoting is rarely used, because most of the significant improvement already comes from partial pivoting.
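A minimal way to add partial pivoting to the Gauss function from before (my adaptation, not code shown in the lecture): before each elimination step, find the row with the largest absolute coefficient in the pivot column and swap it into the pivot position.

GaussPivot[A_?SquareMatrixQ, b_?ArrayQ] :=
 Module[{nb, Aug, n, factor, aux, x, i, k, p},
  n = Length[A]; nb = n + 1;
  Aug = MapThread[Append, {A, b}];
  For[k = 1, k <= n - 1, k++,
   (* partial pivoting: row of the largest |coefficient| in column k, rows k..n *)
   p = k - 1 + First@Ordering[Abs[Aug[[k ;; n, k]]], -1];
   If[p != k, Aug[[{k, p}]] = Aug[[{p, k}]]];  (* swap the rows *)
   For[i = k + 1, i <= n, i++,
    factor = Aug[[i, k]]/Aug[[k, k]];
    Aug[[i, k ;; nb]] = Aug[[i, k ;; nb]] - factor Aug[[k, k ;; nb]]]];
  x = {Aug[[n, nb]]/Aug[[n, n]]};
  For[i = n - 1, i >= 1, i--,
   aux = (Aug[[i, nb]] - Aug[[i, i + 1 ;; n]].x)/Aug[[i, i]];
   x = Prepend[x, aux]];
  x]

GaussPivot[{{0, 2, 3}, {4, 6, 7}, {2, -3, 6}}, {8, 3, 5}] now succeeds where Gauss failed, because the zero pivot is swapped away in the first step.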

An example. Use Gauss elimination to solve

0.0003x1 + 3.0000x2 = 2.0001
1.0000x1 + 1.0000x2 = 1.0000.

Repeat the computation with partial pivoting. The exact solution is x1 = 1/3, x2 = 2/3.

With naive Gauss elimination, the first row is normalized and used to eliminate x1 from the second; elimination and substitution yield x2 = 2/3, and x1 is then computed from the first equation as

x1 = (2.0001 - 3(2/3)) / 0.0003.

Due to subtractive cancellation (we are subtracting two almost-equal numbers), the result is very sensitive to the number of significant figures carried in the computation; note how the solution for x1 is highly dependent on it:

Significant figures | x2 | x1 | Absolute value of percent relative error for x1
3 | 0.667     | -3.33     | 1099
4 | 0.6667    | 0.0000    | 100
5 | 0.66667   | 0.30000   | 10
6 | 0.666667  | 0.330000  | 1
7 | 0.6666667 | 0.3330000 | 0.1

On the other hand, if the equations are solved in reverse order, the row with the larger pivot element is normalized:

1.0000x1 + 1.0000x2 = 1.0000
0.0003x1 + 3.0000x2 = 2.0001

Elimination and substitution again yield x2 = 2/3; x1 is now computed from the first equation as

x1 = (1 - (2/3)) / 1.

This case is much less sensitive to the number of significant figures:

Significant figures | x2 | x1 | Absolute value of percent relative error for x1
3 | 0.667     | 0.333     | 0.1
4 | 0.6667    | 0.3333    | 0.01
5 | 0.66667   | 0.33333   | 0.001
6 | 0.666667  | 0.333333  | 0.0001
7 | 0.6666667 | 0.3333333 | 0.00001

Determinants with Gauss elimination. For a triangular matrix (a matrix having zeros either above or below the main diagonal),

det A = a11 a22 a33 ... ann.

For naive Gauss elimination, using the pivots left on the diagonal after forward elimination,

det A = a11 a22^(1) a33^(2) ... ann^(n-1).

For Gauss elimination with partial pivoting,

det A = (-1)^p a11 a22^(1) a33^(2) ... ann^(n-1),

where p is the number of times that rows are pivoted (swapped).
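As a check on the earlier 3x3 example: the pivots produced by forward elimination were 3, 7.00333, and 10.0120 (see the worked elimination above), and their product is approximately 210.35, which agrees with the built-in determinant:

Det[{{3, -0.1, -0.2}, {0.1, 7, -0.3}, {0.3, -0.2, 10}}]  (* -> 210.353 *)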