CHAPTER 10 NUMERICAL METHODS


10.1 Gaussian Elimination with Partial Pivoting
10.2 Iterative Methods for Solving Linear Systems
10.3 Power Method for Approximating Eigenvalues
10.4 Applications of Numerical Methods

Carl Gustav Jacob Jacobi (1804-1851)

Carl Gustav Jacob Jacobi was the second son of a successful banker in Potsdam, Germany. After completing his secondary schooling in Potsdam in 1821, he entered the University of Berlin. In 1825, having been granted a doctorate in mathematics, Jacobi served as a lecturer at the University of Berlin. He then accepted a position in mathematics at the University of Königsberg. Jacobi's mathematical writings encompassed a wide variety of topics, including elliptic functions, functions of a complex variable, functional determinants (called Jacobians), differential equations, and Abelian functions. Jacobi was the first to apply elliptic functions to the theory of numbers, and he was able to prove a longstanding conjecture by Fermat that every positive integer can be written as the sum of four perfect squares. (For instance, 10 = 1² + 1² + 2² + 2².) He also contributed to several branches of mathematical physics, including dynamics, celestial mechanics, and fluid dynamics. In spite of his contributions to applied mathematics, Jacobi did not believe that mathematical research needed to be justified by its applicability. He stated that the sole end of science and mathematics is "the honor of the human mind" and that "a question about numbers is worth as much as a question about the system of the world." Jacobi was such an incessant worker that in 1842 his health failed and he retired to Berlin. By the time of his death in 1851, he had become one of the most famous mathematicians in Europe.

10.1 GAUSSIAN ELIMINATION WITH PARTIAL PIVOTING

In Chapter 1, two methods for solving a system of n linear equations in n variables were discussed.
When either of these methods (Gaussian elimination and Gauss-Jordan elimination) is used with a digital computer, the computer introduces a problem that has not yet been discussed: rounding error. Digital computers store real numbers in floating point form,

±M × 10^k,

where k is an integer and the mantissa M satisfies the inequality 0.1 ≤ M < 1. For instance, the floating point forms of some real numbers are as follows.

Real Number    Floating Point Form
527            0.527 × 10³
3.81623        0.381623 × 10¹
−0.00045       −0.45 × 10⁻³
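The normalized decimal form ±M × 10^k described above can be sketched in a few lines of Python. The helper name float_form is our own, not the text's:

```python
from math import floor, log10

def float_form(x):
    """Decompose a nonzero real number as x = M * 10**k with 0.1 <= |M| < 1.

    Mirrors the text's normalized decimal floating point form; the
    function name and return convention are illustrative, not standard.
    """
    k = floor(log10(abs(x))) + 1      # decimal exponent
    M = x / 10 ** k                   # mantissa with 0.1 <= |M| < 1
    return M, k

# 527 has floating point form 0.527 x 10^3
M, k = float_form(527)
print(M, k)
```

The same helper reproduces the other rows of the table, e.g. float_form(-0.00045) gives a mantissa of −0.45 with exponent −3.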

The number of decimal places that can be stored in the mantissa depends on the computer. If n places are stored, the computer is said to store n significant digits. Additional digits are either truncated or rounded off. When a number is truncated to n significant digits, all digits after the first n significant digits are simply omitted. For instance, truncated to two significant digits, the number 0.1251 becomes 0.12. When a number is rounded to n significant digits, the last retained digit is increased by one if the discarded portion is greater than half a digit, and the last retained digit is not changed if the discarded portion is less than half a digit. For instance, rounded to two significant digits, 0.1251 becomes 0.13 and 0.1249 becomes 0.12. For the special case in which the discarded portion is precisely half a digit, round so that the last retained digit is even. So, rounded to two significant digits, 0.125 becomes 0.12 and 0.135 becomes 0.14. Whenever the computer truncates or rounds, a rounding error that can affect subsequent calculations is introduced. The result after rounding or truncating is called the stored value.

EXAMPLE 1 Finding the Stored Value of a Number

Determine the stored value of each of the following real numbers in a computer that rounds to three significant digits.

(a) 54.7  (b) 0.1134  (c) −8.2256  (d) 0.08335  (e) 0.08345

Solution

Number         Floating Point Form    Stored Value
(a) 54.7       0.547 × 10²            0.547 × 10²
(b) 0.1134     0.1134 × 10⁰           0.113 × 10⁰
(c) −8.2256    −0.82256 × 10¹         −0.823 × 10¹
(d) 0.08335    0.8335 × 10⁻¹          0.834 × 10⁻¹
(e) 0.08345    0.8345 × 10⁻¹          0.834 × 10⁻¹

Note in parts (d) and (e) that when the discarded portion of a decimal is precisely half a digit, the number is rounded so that the stored value ends in an even digit.

REMARK: Most computers store numbers in binary form (base two) rather than decimal form (base ten). Because rounding occurs in both systems, however, this discussion will be restricted to the more familiar base ten. Rounding error tends to propagate as the number of arithmetic operations increases.
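The stored-value rule just described, including round-half-to-even, can be sketched with Python's decimal module. The helper name stored_value is ours:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def stored_value(x, digits):
    """Round x to `digits` significant digits, rounding halves to even,
    as in the text's description of a computer's stored value.
    (Illustrative helper; the name is ours, not the text's.)"""
    d = Decimal(str(x))
    exponent = d.adjusted()                       # position of the leading digit
    quantum = Decimal(1).scaleb(exponent - digits + 1)
    return float(d.quantize(quantum, rounding=ROUND_HALF_EVEN))

# Parts (d) and (e) of Example 1: both stored values end in an even digit
print(stored_value(0.08335, 3))   # 0.0834
print(stored_value(0.08345, 3))   # 0.0834
```

Using decimal rather than binary floats keeps the demonstration faithful to the base-ten discussion in the text.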
This phenomenon is illustrated in the following example.

EXAMPLE 2 Propagation of Rounding Error

Evaluate the determinant of the matrix

A = [ 0.12  0.23 ]
    [ 0.12  0.12 ],

when rounding each intermediate calculation to two significant digits. Then find the exact value and compare the two results.

Solution  Rounding each intermediate calculation to two significant digits produces the following.

|A| = (0.12)(0.12) − (0.12)(0.23)
    ≈ 0.014 − 0.028          Round to two significant digits
    = −0.014

The exact value, however, is

|A| = 0.0144 − 0.0276 = −0.0132.

So, to two significant digits, the correct value is −0.013. Note that the rounded value is not correct to two significant digits, even though each arithmetic operation was performed with two significant digits of accuracy. This is what is meant when it is said that arithmetic operations tend to propagate rounding error.

TECHNOLOGY NOTE: You can see the effect of rounding on a calculator. For example, the determinant of

A = [ 2  12 ]
    [ 3   6 ]

is −24. However, the TI-86 calculates the greatest integer of the determinant of A to be −25: int(det A) = −25. Do you see what happened?

In Example 2, rounding at the intermediate steps introduced a rounding error of

−0.0132 − (−0.014) = 0.0008.          Rounding error

Although this error may seem slight, it represents a percentage error of

0.0008 / 0.0132 ≈ 0.061 = 6.1%.       Percentage error

In most practical applications, a percentage error of this magnitude would be intolerable. Keep in mind that this particular percentage error arose with only a few arithmetic steps. When the number of arithmetic steps increases, the likelihood of a large percentage error also increases.

Gaussian Elimination with Partial Pivoting

For large systems of linear equations, Gaussian elimination can involve hundreds of arithmetic computations, each of which can produce rounding error. The following straightforward example illustrates the potential magnitude of the problem.

EXAMPLE 3 Gaussian Elimination and Rounding Error

Use Gaussian elimination to solve the following system.

0.143x₁ + 0.357x₂ + 2.01x₃ = −5.173
−1.31x₁ + 0.911x₂ + 1.99x₃ = −5.458
11.2x₁ − 4.30x₂ − 0.605x₃ = 4.415

After each intermediate calculation, round the result to three significant digits.
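The two-significant-digit arithmetic of Example 2 can be reproduced in a few lines of Python. The helper round_sig is our own, not part of the text:

```python
from math import floor, log10

def round_sig(x, p):
    """Round x to p significant decimal digits; 0 stays 0."""
    if x == 0:
        return 0.0
    return round(x, -floor(log10(abs(x))) + p - 1)

# det A = (0.12)(0.12) - (0.12)(0.23), rounding every step to 2 digits
a = round_sig(0.12 * 0.12, 2)        # 0.0144 -> 0.014
b = round_sig(0.12 * 0.23, 2)        # 0.0276 -> 0.028
rounded_det = round_sig(a - b, 2)    # -0.014
exact_det = 0.12 * 0.12 - 0.12 * 0.23
print(rounded_det, round_sig(exact_det, 2))   # -0.014 versus -0.013
```

The discrepancy between −0.014 and −0.013 is exactly the propagated rounding error discussed in the text.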

Solution  Applying Gaussian elimination to the augmented matrix for this system produces the following sequence of matrices.

[ 0.143   0.357    2.01  | −5.173 ]
[ −1.31   0.911    1.99  | −5.458 ]
[ 11.2    −4.30   −0.605 |  4.415 ]

[ 1       2.50    14.1   | −36.2  ]    Dividing the first row by 0.143 produces a new first row.
[ −1.31   0.911    1.99  | −5.458 ]
[ 11.2    −4.30   −0.605 |  4.415 ]

[ 1       2.50    14.1   | −36.2  ]    Adding 1.31 times the first row to the second row
[ 0       4.19    20.5   | −52.9  ]    produces a new second row.
[ 11.2    −4.30   −0.605 |  4.415 ]

[ 1       2.50    14.1   | −36.2  ]    Adding −11.2 times the first row to the third row
[ 0       4.19    20.5   | −52.9  ]    produces a new third row.
[ 0      −32.3   −159    |  409   ]

[ 1       2.50    14.1   | −36.2  ]    Dividing the second row by 4.19 produces a new second row.
[ 0       1        4.89  | −12.6  ]
[ 0      −32.3   −159    |  409   ]

[ 1       2.50    14.1   | −36.2  ]    Adding 32.3 times the second row to the third row
[ 0       1        4.89  | −12.6  ]    produces a new third row.
[ 0       0       −1.00  |  2.00  ]

[ 1       2.50    14.1   | −36.2  ]    Multiplying the third row by −1 produces a new third row.
[ 0       1        4.89  | −12.6  ]
[ 0       0        1.00  | −2.00  ]

So x₃ = −2.00, and using back-substitution, you can obtain x₂ = −2.82 and x₁ = −0.95. Try checking this solution in the original system of equations to see that it is not correct. The correct solution is x₁ = 1, x₂ = 2, and x₃ = −3.

What went wrong with the Gaussian elimination procedure used in Example 3? Clearly, rounding error propagated to such an extent that the final solution became hopelessly inaccurate. Part of the problem is that the original augmented matrix contains entries that differ in orders of magnitude. For instance, the first column of the matrix

[ 0.143   0.357    2.01  | −5.173 ]
[ −1.31   0.911    1.99  | −5.458 ]
[ 11.2    −4.30   −0.605 |  4.415 ]

has entries that increase roughly by powers of ten as one moves down the column. In subsequent elementary row operations, the first row was multiplied by 1.31 and −11.2 and the second row was multiplied by 32.3. When floating point arithmetic is used, such large row multipliers tend to propagate rounding error. This type of error propagation can be lessened by appropriate row interchanges that produce smaller multipliers. One method for restricting the size of the multipliers is called Gaussian elimination with partial pivoting.

Gaussian Elimination with Partial Pivoting
1. Find the entry in the left column with the largest absolute value. This entry is called the pivot.
2. Perform a row interchange, if necessary, so that the pivot is in the first row.
3. Divide the first row by the pivot. (This step is unnecessary if the pivot is 1.)
4. Use elementary row operations to reduce the remaining entries in the first column to zero.

The completion of these four steps is called a pass. After performing the first pass, ignore the first row and first column and repeat the four steps on the remaining submatrix. Continue this process until the matrix is in row-echelon form.

Example 4 shows what happens when this partial pivoting technique is used on the system of linear equations given in Example 3.

EXAMPLE 4 Gaussian Elimination with Partial Pivoting

Use Gaussian elimination with partial pivoting to solve the system of linear equations given in Example 3. After each intermediate calculation, round the result to three significant digits.

Solution  As in Example 3, the augmented matrix for this system is

[ 0.143   0.357    2.01  | −5.173 ]
[ −1.31   0.911    1.99  | −5.458 ]
[ 11.2    −4.30   −0.605 |  4.415 ].

In the left column, 11.2 is the pivot because it is the entry that has the largest absolute value. So, interchange the first and third rows and apply elementary row operations as follows.

[ 11.2    −4.30   −0.605 |  4.415 ]    Interchange the first and third rows.
[ −1.31   0.911    1.99  | −5.458 ]
[ 0.143   0.357    2.01  | −5.173 ]

[ 1      −0.384   −0.0540 |  0.394 ]   Dividing the first row by 11.2 produces a new first row.
[ −1.31   0.911    1.99   | −5.458 ]
[ 0.143   0.357    2.01   | −5.173 ]

[ 1      −0.384   −0.0540 |  0.394 ]   Adding 1.31 times the first row to the second row
[ 0       0.408    1.92   | −4.94  ]   produces a new second row.
[ 0.143   0.357    2.01   | −5.173 ]

[ 1      −0.384   −0.0540 |  0.394 ]   Adding −0.143 times the first row to the third row
[ 0       0.408    1.92   | −4.94  ]   produces a new third row.
[ 0       0.412    2.02   | −5.23  ]

This completes the first pass. For the second pass, consider the submatrix formed by deleting the first row and first column. In this matrix the pivot is 0.412, which means that the second and third rows should be interchanged. Then proceed with Gaussian elimination as follows.

[ 1      −0.384   −0.0540 |  0.394 ]   Interchange the second and third rows.
[ 0       0.412    2.02   | −5.23  ]
[ 0       0.408    1.92   | −4.94  ]

[ 1      −0.384   −0.0540 |  0.394 ]   Dividing the second row by 0.412 produces a new second row.
[ 0       1        4.90   | −12.7  ]
[ 0       0.408    1.92   | −4.94  ]

[ 1      −0.384   −0.0540 |  0.394 ]   Adding −0.408 times the second row to the third row
[ 0       1        4.90   | −12.7  ]   produces a new third row.
[ 0       0       −0.0800 |  0.240 ]

This completes the second pass, and you can complete the entire procedure by dividing the third row by −0.0800 as follows.

[ 1      −0.384   −0.0540 |  0.394 ]   Dividing the third row by −0.0800 produces a new third row.
[ 0       1        4.90   | −12.7  ]
[ 0       0        1      | −3.00  ]

So x₃ = −3.00, and back-substitution produces x₂ = 2.00 and x₁ = 1.00, which agrees with the exact solution of x₁ = 1, x₂ = 2, and x₃ = −3 when rounded to three significant digits.

REMARK: Note that the row multipliers used in Example 4 are 1.31, −0.143, and −0.408, as contrasted with the multipliers of 1.31, −11.2, and 32.3 encountered in Example 3.

The term partial in partial pivoting refers to the fact that in each pivot search only entries in the left column of the matrix or submatrix are considered. This search can be extended to include every entry in the coefficient matrix or submatrix; the resulting technique is called Gaussian elimination with complete pivoting. Unfortunately, neither complete pivoting nor partial pivoting solves all problems of rounding error. Some systems of linear equations, called ill-conditioned systems, are extremely sensitive to numerical errors. For such systems, pivoting is not much help. A common type of system of linear equations that tends to be ill-conditioned is one for which the determinant of the coefficient matrix is nearly zero. The next example illustrates this problem.

EXAMPLE 5 An Ill-Conditioned System of Linear Equations

Use Gaussian elimination to solve the following system of linear equations.

x +           y = 0
x + (401/400)y = 20

Round each intermediate calculation to four significant digits.

Solution  Using Gaussian elimination with rational arithmetic, you can find the exact solution to be y = 8000 and x = −8000. But rounding 401/400 = 1.0025 to four significant digits introduces a large rounding error, as follows.

[ 1   1      |  0 ]
[ 1   1.002  | 20 ]

[ 1   1      |  0 ]
[ 0   0.002  | 20 ]

So y = 20/0.002 = 10,000, and back-substitution produces x = −y = −10,000. This solution represents a percentage error of 25% for both the x-value and the y-value. Note that this error was caused by a rounding error of only 0.0005 (when you rounded 1.0025 to 1.002).
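The procedure of Examples 3 and 4 can be sketched as a short routine. This is our own illustrative implementation, not the text's: the sig argument rounds every intermediate result to a fixed number of significant decimal digits, and pivot selects the largest entry in the working column before each pass.

```python
from math import floor, log10

def round_sig(x, p):
    """Round x to p significant decimal digits; 0 stays 0."""
    if x == 0:
        return 0.0
    return round(x, -floor(log10(abs(x))) + p - 1)

def gauss_solve(A, b, sig=None, pivot=True):
    """Gaussian elimination with optional partial pivoting and optional
    rounding of every intermediate result to `sig` significant digits.
    (Illustrative sketch; the function name and signature are ours.)"""
    r = (lambda v: round_sig(v, sig)) if sig else (lambda v: v)
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        if pivot:  # move the largest entry in column k into the pivot row
            p = max(range(k, n), key=lambda i: abs(M[i][k]))
            M[k], M[p] = M[p], M[k]
        M[k] = [r(v / M[k][k]) for v in M[k]]          # normalize pivot row
        for i in range(k + 1, n):                      # eliminate below pivot
            m = M[i][k]
            M[i] = [r(vi - r(m * vk)) for vi, vk in zip(M[i], M[k])]
    x = [0.0] * n
    for i in reversed(range(n)):                       # back-substitution
        s = M[i][n]
        for j in range(i + 1, n):
            s = r(s - r(M[i][j] * x[j]))
        x[i] = s                                       # diagonal is 1 here
    return x

A = [[0.143, 0.357, 2.01], [-1.31, 0.911, 1.99], [11.2, -4.30, -0.605]]
b = [-5.173, -5.458, 4.415]
print(gauss_solve(A, b, sig=3, pivot=True))    # close to [1, 2, -3]
print(gauss_solve(A, b, sig=3, pivot=False))   # drifts badly, as in Example 3
```

With pivoting, the three-significant-digit computation stays close to the exact solution; without it, the large multipliers let rounding error take over.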

SECTION 10.1 EXERCISES

In Exercises 1-8, express the real number in floating point form.
1. 48   2. 3.6   3. .6   4. .   5. .   6. .6   7.   8.

In Exercises 9-16, determine the stored value of the real number in a computer that rounds to (a) three significant digits and (b) four significant digits.
9. 33.   10. .4   11. 9.646   12. 6.964   13. 7/5   14.   15.   16.

In Exercises 17 and 18, evaluate the determinant of the matrix, rounding each intermediate calculation to three significant digits. Then compare the rounded value with the exact value.
17. [ .4  56. ]   18. [ 66.  . ]

In Exercises 19 and 20, use Gaussian elimination to solve the system of linear equations. After each intermediate calculation, round the result to three significant digits. Then compare this solution with the exact solution.
19. .x + 6.7y = 8.8        20. 4.4x + 7.y = 3.5
    4.66x + 64.4y = .          8.6x + 97.4y = 79.

In Exercises 21-24, use Gaussian elimination without partial pivoting to solve the system of linear equations, rounding to three significant digits after each intermediate calculation. Then use partial pivoting to solve the same system, again rounding to three significant digits after each intermediate calculation. Finally, compare both solutions with the given exact solution.
21. 6x + .4y = .4          22. 6x + 6.y = .
    .5x + 99.6y = 997.7        99.x + 449.y = 54.
    Exact: x, y                Exact: x, y
23. x + 4.y + .445z = .    24. .7x + 6.y + .93z = 6.3
    x + 4.y + .6z = .          4.8x + 5.9y + .z = .
    x + 4.5y + .5z = .385      8.4x + 6.z + .8z = 83.7
    Exact: x .49, y ., z       Exact: x, y, z

In Exercises 25 and 26, use Gaussian elimination to solve the ill-conditioned system of linear equations, rounding each intermediate calculation to three significant digits. Then compare this solution with the given exact solution.
25. x + y =        26. x + 8 y = 6
    x + 6 y =          x + y = 5
    Exact: x ,8,       Exact: x 48,,
    y ,88              y 48,6

27. Consider the ill-conditioned systems

x + y =   and   x + y =
x + .y =        x + .y = .

Calculate the solution to each system. Notice that although the systems are almost the same, their solutions differ greatly.

28. Repeat Exercise 27 for the systems

x + y =   and   x + y =
.x + y =        .x + y = .

29. The Hilbert matrix of size n × n is the n × n symmetric matrix Hₙ = [a_ij], where a_ij = 1/(i + j − 1). As n increases, the Hilbert matrix becomes more and more ill-conditioned. Use Gaussian elimination to solve the following system of linear equations, rounding to two significant digits after each intermediate calculation. Compare this solution with the exact solution x₁ = 3, x₂ = −24, and x₃ = 30.

x₁ + (1/2)x₂ + (1/3)x₃ = 1
(1/2)x₁ + (1/3)x₂ + (1/4)x₃ = 1
(1/3)x₁ + (1/4)x₂ + (1/5)x₃ = 1

30. Repeat Exercise 29 for H₄x = b, where b = (1, 1, 1, 1)ᵀ, rounding to four significant digits. Compare this solution with the exact solution x₁ = −4, x₂ = 60, x₃ = −180, and x₄ = 140.

31. The inverse of the n × n Hilbert matrix Hₙ has integer entries. Use your computer or graphing calculator to calculate the inverses of the Hilbert matrices Hₙ for n = 4, 5, 6, and 7. For what values of n do the inverses appear to be accurate?
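For Exercise 29, the exact solution can be checked with rational arithmetic, which is immune to rounding error. A minimal sketch using Python's fractions module (the helper name solve_exact is ours):

```python
from fractions import Fraction

def solve_exact(A, b):
    """Gauss-Jordan elimination in exact rational arithmetic (illustrative)."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for k in range(n):
        M[k] = [v / M[k][k] for v in M[k]]          # normalize pivot row
        for i in range(n):
            if i != k:                              # eliminate column k elsewhere
                m = M[i][k]
                M[i] = [vi - m * vk for vi, vk in zip(M[i], M[k])]
    return [M[i][n] for i in range(n)]

# The 3 x 3 Hilbert system H3 x = (1, 1, 1): entries 1/(i + j - 1)
H3 = [[Fraction(1, i + j + 1) for j in range(3)] for i in range(3)]
x = solve_exact(H3, [1, 1, 1])
print(x)   # Fractions equal to 3, -24, 30
```

Because every entry stays a Fraction, the result is exact; comparing it with the two-significant-digit computation of Exercise 29 makes the ill-conditioning visible.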

10.2 ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS

As a numerical technique, Gaussian elimination is rather unusual because it is direct. That is, a solution is obtained after a single application of Gaussian elimination. Once a solution has been obtained, Gaussian elimination offers no method of refinement. The lack of refinement can be a problem because, as the previous section shows, Gaussian elimination is sensitive to rounding error. Numerical techniques more commonly involve an iterative method. For example, in calculus you probably studied Newton's iterative method for approximating the zeros of a differentiable function. In this section you will look at two iterative methods for approximating the solution of a system of n linear equations in n variables.

The Jacobi Method

The first iterative technique is called the Jacobi method, after Carl Gustav Jacob Jacobi (1804-1851). This method makes two assumptions: (1) that the system given by

a₁₁x₁ + a₁₂x₂ + ⋯ + a₁ₙxₙ = b₁
a₂₁x₁ + a₂₂x₂ + ⋯ + a₂ₙxₙ = b₂
⋮
aₙ₁x₁ + aₙ₂x₂ + ⋯ + aₙₙxₙ = bₙ

has a unique solution and (2) that the coefficient matrix A has no zeros on its main diagonal. If any of the diagonal entries a₁₁, a₂₂, …, aₙₙ are zero, then rows or columns must be interchanged to obtain a coefficient matrix that has nonzero entries on the main diagonal.

To begin the Jacobi method, solve the first equation for x₁, the second equation for x₂, and so on, as follows.

x₁ = (1/a₁₁)(b₁ − a₁₂x₂ − a₁₃x₃ − ⋯ − a₁ₙxₙ)
x₂ = (1/a₂₂)(b₂ − a₂₁x₁ − a₂₃x₃ − ⋯ − a₂ₙxₙ)
⋮
xₙ = (1/aₙₙ)(bₙ − aₙ₁x₁ − aₙ₂x₂ − ⋯ − aₙ,ₙ₋₁xₙ₋₁)

Then make an initial approximation of the solution,

(x₁, x₂, x₃, …, xₙ),     Initial approximation

and substitute these values of xᵢ into the right-hand side of the rewritten equations to obtain the first approximation. After this procedure has been completed, one iteration has been

performed. In the same way, the second approximation is formed by substituting the first approximation's x-values into the right-hand side of the rewritten equations. By repeated iterations, you will form a sequence of approximations that often converges to the actual solution. This procedure is illustrated in Example 1.

EXAMPLE 1 Applying the Jacobi Method

Use the Jacobi method to approximate the solution of the following system of linear equations.

5x₁ − 2x₂ + 3x₃ = −1
−3x₁ + 9x₂ + x₃ = 2
2x₁ − x₂ − 7x₃ = 3

Continue the iterations until two successive approximations are identical when rounded to three significant digits.

Solution  To begin, write the system in the form

x₁ = −1/5 + (2/5)x₂ − (3/5)x₃
x₂ = 2/9 + (3/9)x₁ − (1/9)x₃
x₃ = −3/7 + (2/7)x₁ − (1/7)x₂.

Because you do not know the actual solution, choose

x₁ = 0, x₂ = 0, x₃ = 0     Initial approximation

as a convenient initial approximation. So, the first approximation is

x₁ = −1/5 + (2/5)(0) − (3/5)(0) = −0.200
x₂ = 2/9 + (3/9)(0) − (1/9)(0) ≈ 0.222
x₃ = −3/7 + (2/7)(0) − (1/7)(0) ≈ −0.429.

Continuing this procedure, you obtain the sequence of approximations shown in Table 10.1.

TABLE 10.1
n    0       1       2       3       4       5       6       7
x₁   0.000  −0.200   0.146   0.192   0.181   0.185   0.186   0.186
x₂   0.000   0.222   0.203   0.328   0.332   0.329   0.331   0.331
x₃   0.000  −0.429  −0.517  −0.416  −0.421  −0.424  −0.423  −0.423

Because the last two columns in Table 10.1 are identical, you can conclude that to three significant digits the solution is

x₁ = 0.186, x₂ = 0.331, x₃ = −0.423.

For the system of linear equations given in Example 1, the Jacobi method is said to converge. That is, repeated iterations succeed in producing an approximation that is correct to three significant digits. As is generally true for iterative methods, greater accuracy would require more iterations.

The Gauss-Seidel Method

You will now look at a modification of the Jacobi method called the Gauss-Seidel method, named after Carl Friedrich Gauss (1777-1855) and Philipp L. Seidel (1821-1896). This modification is no more difficult to use than the Jacobi method, and it often requires fewer iterations to produce the same degree of accuracy.

With the Jacobi method, the values of xᵢ obtained in the nth approximation remain unchanged until the entire (n + 1)th approximation has been calculated. With the Gauss-Seidel method, on the other hand, you use the new values of each xᵢ as soon as they are known. That is, once you have determined x₁ from the first equation, its value is then used in the second equation to obtain the new x₂. Similarly, the new x₁ and x₂ are used in the third equation to obtain the new x₃, and so on. This procedure is demonstrated in Example 2.

EXAMPLE 2 Applying the Gauss-Seidel Method

Use the Gauss-Seidel iteration method to approximate the solution to the system of equations given in Example 1.

Solution  The first computation is identical to that given in Example 1. That is, using (x₁, x₂, x₃) = (0, 0, 0) as the initial approximation, you obtain the following new value for x₁.

x₁ = −1/5 + (2/5)(0) − (3/5)(0) = −0.200

Now that you have a new value for x₁, however, use it to compute a new value for x₂. That is,

x₂ = 2/9 + (3/9)(−0.200) − (1/9)(0) ≈ 0.156.

Similarly, use x₁ = −0.200 and x₂ = 0.156 to compute a new value for x₃. That is,

x₃ = −3/7 + (2/7)(−0.200) − (1/7)(0.156) ≈ −0.508.

So the first approximation is x₁ = −0.200, x₂ = 0.156, and x₃ = −0.508. Continued iterations produce the sequence of approximations shown in Table 10.2.

TABLE 10.2
n    0       1       2       3       4       5
x₁   0.000  −0.200   0.167   0.191   0.186   0.186
x₂   0.000   0.156   0.334   0.333   0.331   0.331
x₃   0.000  −0.508  −0.429  −0.422  −0.423  −0.423

Note that after only five iterations of the Gauss-Seidel method, you achieved the same accuracy as was obtained with seven iterations of the Jacobi method in Example 1.

Neither of the iterative methods presented in this section always converges. That is, it is possible to apply the Jacobi method or the Gauss-Seidel method to a system of linear equations and obtain a divergent sequence of approximations. In such cases, it is said that the method diverges.

EXAMPLE 3 An Example of Divergence

Apply the Jacobi method to the system

x₁ − 5x₂ = −4
7x₁ − x₂ = 6,

using the initial approximation (x₁, x₂) = (0, 0), and show that the method diverges.

Solution  As usual, begin by rewriting the given system in the form

x₁ = −4 + 5x₂
x₂ = −6 + 7x₁.

Then the initial approximation (0, 0) produces

x₁ = −4 + 5(0) = −4
x₂ = −6 + 7(0) = −6

as the first approximation. Repeated iterations produce the sequence of approximations shown in Table 10.3.

TABLE 10.3
n    0   1    2     3      4       5       6         7
x₁   0  −4   −34   −174   −1224   −6124   −42,874   −214,374
x₂   0  −6   −34   −244   −1224   −8574   −42,874   −300,124

For this particular system of linear equations you can determine that the actual solution is x₁ = 1 and x₂ = 1. So you can see from Table 10.3 that the approximations given by the Jacobi method become progressively worse instead of better, and you can conclude that the method diverges.

The problem of divergence in Example 3 is not resolved by using the Gauss-Seidel method rather than the Jacobi method. In fact, for this particular system the Gauss-Seidel method diverges more rapidly, as shown in Table 10.4.

TABLE 10.4
n    0   1     2      3        4           5
x₁   0  −4    −174   −6124    −214,374    −7,503,124
x₂   0  −34   −1224  −42,874  −1,500,624  −52,521,874

With an initial approximation of (x₁, x₂) = (0, 0), neither the Jacobi method nor the Gauss-Seidel method converges to the solution of the system of linear equations given in Example 3. You will now look at a special type of coefficient matrix A, called a strictly diagonally dominant matrix, for which it is guaranteed that both methods will converge.

Definition of Strictly Diagonally Dominant Matrix

An n × n matrix A is strictly diagonally dominant if the absolute value of each entry on the main diagonal is greater than the sum of the absolute values of the other entries in the same row. That is,

|a₁₁| > |a₁₂| + |a₁₃| + ⋯ + |a₁ₙ|
|a₂₂| > |a₂₁| + |a₂₃| + ⋯ + |a₂ₙ|
⋮
|aₙₙ| > |aₙ₁| + |aₙ₂| + ⋯ + |aₙ,ₙ₋₁|.

EXAMPLE 4 Strictly Diagonally Dominant Matrices

Which of the following systems of linear equations has a strictly diagonally dominant coefficient matrix?

(a) 3x₁ − x₂ = −4
    2x₁ + 5x₂ = 2

(b) 4x₁ + 2x₂ − x₃ = −1
    x₁ + 2x₃ = −4
    3x₁ − 5x₂ + x₃ = 3

Solution

(a) The coefficient matrix

A = [ 3  −1 ]
    [ 2   5 ]

is strictly diagonally dominant because |3| > |−1| and |5| > |2|.

(b) The coefficient matrix

A = [ 4   2  −1 ]
    [ 1   0   2 ]
    [ 3  −5   1 ]

is not strictly diagonally dominant because the entries in the second and third rows do not conform to the definition. For instance, in the second row a₂₁ = 1, a₂₂ = 0, a₂₃ = 2, and it is not true that |a₂₂| > |a₂₁| + |a₂₃|. Interchanging the second and third rows in the original system of linear equations, however, produces the coefficient matrix

A = [ 4   2  −1 ]
    [ 3  −5   1 ]
    [ 1   0   2 ],

and this matrix is strictly diagonally dominant.

The following theorem, which is listed without proof, states that strict diagonal dominance is sufficient for the convergence of either the Jacobi method or the Gauss-Seidel method.

Theorem 10.1 Convergence of the Jacobi and Gauss-Seidel Methods

If A is strictly diagonally dominant, then the system of linear equations given by Ax = b has a unique solution to which the Jacobi method and the Gauss-Seidel method will converge for any initial approximation.

In Example 3 you looked at a system of linear equations for which the Jacobi and Gauss-Seidel methods diverged. In the following example you can see that by interchanging the rows of the system given in Example 3, you can obtain a coefficient matrix that is strictly diagonally dominant. After this interchange, convergence is assured.

EXAMPLE 5 Interchanging Rows to Obtain Convergence

Interchange the rows of the system

x₁ − 5x₂ = −4
7x₁ − x₂ = 6

to obtain one with a strictly diagonally dominant coefficient matrix. Then apply the Gauss-Seidel method to approximate the solution to four significant digits.

Solution  Begin by interchanging the two rows of the given system to obtain

7x₁ − x₂ = 6
x₁ − 5x₂ = −4.

Note that the coefficient matrix of this system is strictly diagonally dominant. Then solve for x₁ and x₂ as follows.

x₁ = 6/7 + (1/7)x₂
x₂ = 4/5 + (1/5)x₁

Using the initial approximation (x₁, x₂) = (0, 0), you can obtain the sequence of approximations shown in Table 10.5.

TABLE 10.5
n    0        1        2        3        4       5
x₁   0.0000   0.8571   0.9959   0.9999   1.000   1.000
x₂   0.0000   0.9714   0.9992   1.000    1.000   1.000

So you can conclude that the solution is x₁ = 1 and x₂ = 1.

Do not conclude from Theorem 10.1 that strict diagonal dominance is a necessary condition for convergence of the Jacobi or Gauss-Seidel methods. For instance, the coefficient matrix of the system

−4x₁ + 5x₂ = 1
x₁ + 2x₂ = 3

is not a strictly diagonally dominant matrix, and yet both methods converge to the solution x₁ = 1 and x₂ = 1 when you use an initial approximation of (x₁, x₂) = (0, 0). (See Exercises 21 and 22.)
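The Jacobi and Gauss-Seidel iterations described in this section can be sketched in a few lines. This is our own illustrative code, applied to the system of Example 1; the function names and stopping rule are assumptions, not the text's:

```python
def jacobi_step(A, b, x):
    """One Jacobi iteration: every x_i is updated from the OLD vector."""
    n = len(A)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel iteration: new values are used as soon as known."""
    n = len(A)
    x = x[:]
    for i in range(n):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

def iterate(step, A, b, tol=1e-10, max_iter=500):
    """Iterate from the zero vector until successive approximations agree."""
    x = [0.0] * len(A)
    for k in range(1, max_iter + 1):
        x_new = step(A, b, x)
        if max(abs(u - v) for u, v in zip(x_new, x)) < tol:
            return x_new, k
        x = x_new
    raise ArithmeticError("no convergence (the method may diverge)")

# The system of Example 1 in Section 10.2
A = [[5, -2, 3], [-3, 9, 1], [2, -1, -7]]
b = [-1, 2, 3]
xj, kj = iterate(jacobi_step, A, b)
xg, kg = iterate(gauss_seidel_step, A, b)
print([round(v, 3) for v in xg], kg, "<=", kj)   # Gauss-Seidel needs fewer steps
```

Running the same iterate helper on the divergent system of Example 3 raises the ArithmeticError, matching the behavior shown in Tables 10.3 and 10.4.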

SECTION. EXERCISES 585 SECTION. EXERCISES In Exercises 4, apply the Jacobi method to the given system of linear equations, using the initial approximation x, x,..., x n (,,..., ). Continue performing iterations until two successive approximations are identical when rounded to three significant digits.. 3x x. 4x x 6 x 4x 5 3x 5x 3. x x 4. 4x x x 3 7 x 3x x 3 x 7x x 3 x x 3x 3 6 3x 4 x 3 5. Apply the Gauss-Seidel method to Exercise. 6. Apply the Gauss-Seidel method to Exercise. 7. Apply the Gauss-Seidel method to Exercise 3. 8. Apply the Gauss-Seidel method to Exercise 4. In Exercises 9, show that the Gauss-Seidel method diverges for the given system using the initial approximation x, x,..., x n,,...,. 9. x x. x 4x x x 3 3x x. x 3x 7. x 3x x 3 5 x 3x x 3 9 3x x 5 3x x 3 3 x x 3 In Exercises 3 6, determine whether the matrix is strictly diagonally dominant. 3. 4. 3 5 5. 6. 7 6 3 6 3 5 4 3 7. Interchange the rows of the system of linear equations in Exercise 9 to obtain a system with a strictly diagonally dominant coefficient matrix. Then apply the Gauss-Seidel method to approximate the solution to two significant digits. 8. Interchange the rows of the system of linear equations in Exercise to obtain a system with a strictly diagonally dominant coefficient matrix. Then apply the Gauss-Seidel method to approximate the solution to two significant digits. 9. Interchange the rows of the system of linear equations in Exercise to obtain a system with a strictly diagonally dominant coefficient matrix. Then apply the Gauss-Seidel method to approximate the solution to two significant digits.. Interchange the rows of the system of linear equations in Exercise to obtain a system with a strictly diagonally dominant coefficient matrix. Then apply the Gauss-Seidel method to approximate the solution to two significant digits. In Exercises and, the coefficient matrix of the system of linear equations is not strictly diagonally dominant. 
Show that the Jacobi and Gauss-Seidel methods converge using an initial approximation of (x1, x2, ..., xn) = (0, 0, ..., 0).

21. -4x1 + 5x2 = 1      22. 4 x x x 3
      x1 + 2x2 = 3          x 3x x 3 7
                            3x x 4x 3 5

In Exercises 23 and 24, write a computer program that applies the Gauss-Seidel method to solve the system of linear equations.

23. 4x x   24. 4x x x 6x x x x 4x x x 3 x 3 x 4 5x 3 5x 4 x 3 x 4 x 3 x 4 x 4 x 3 x 3 4x 3 x 3 x 4 x 4 4x 4 x 4 x 5 x 5 x 5 6x 5 x 5 x 5 x 5 x 5 4x 5 x 5 x 6 x 6 5x 6 x 6 x 6 4x 6 x 6 x 7 4x 7 x 7 x 7 x 7 4x 7 x 7 x 8 x 8 x 8 5x 8 x 8 x 8 4x 8 3 6 5 8 8 4 4 6 6 3

586 CHAPTER 10 NUMERICAL METHODS

10.3 POWER METHOD FOR APPROXIMATING EIGENVALUES

In Chapter 7 you saw that the eigenvalues of an n x n matrix A are obtained by solving its characteristic equation

λ^n + c_{n-1}λ^{n-1} + c_{n-2}λ^{n-2} + ... + c_0 = 0.

For large values of n, polynomial equations like this one are difficult and time-consuming to solve. Moreover, numerical techniques for approximating roots of polynomial equations of high degree are sensitive to rounding errors. In this section you will look at an alternative method for approximating eigenvalues. As presented here, the method can be used only to find the eigenvalue of A that is largest in absolute value; this eigenvalue is called the dominant eigenvalue of A. Although this restriction may seem severe, dominant eigenvalues are of primary interest in many physical applications.

Definition of Dominant Eigenvalue and Dominant Eigenvector
Let λ1, λ2, ..., λn be the eigenvalues of an n x n matrix A. λ1 is called the dominant eigenvalue of A if
|λ1| > |λi|,  i = 2, ..., n.
The eigenvectors corresponding to λ1 are called dominant eigenvectors of A.

Not every matrix has a dominant eigenvalue. For instance, a matrix with eigenvalues λ1 = 1 and λ2 = -1 (such as A = [1 0; 0 -1]) has no dominant eigenvalue, because |λ1| = |λ2|. Similarly, a 3 x 3 matrix with eigenvalues λ1 = 2, λ2 = 2, and λ3 = 1 has no dominant eigenvalue, because the eigenvalue of largest absolute value is repeated.

EXAMPLE 1  Finding a Dominant Eigenvalue

Find the dominant eigenvalue and corresponding eigenvectors of the matrix

A = [2  -12]
    [1   -5].

Solution  From Example 4 of Section 7.1 you know that the characteristic polynomial of A is λ^2 + 3λ + 2 = (λ + 1)(λ + 2). So the eigenvalues of A are λ1 = -1 and λ2 = -2, of which the dominant one is λ2 = -2. From the same example you know that the dominant eigenvectors of A (those corresponding to λ = -2) are of the form
x = t(3, 1),  t ≠ 0.
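The dominant eigenvalue in Example 1 can be checked directly from the characteristic polynomial by the quadratic formula. A small sketch (illustrative only, not from the text):

```python
import math

# Eigenvalues of A = [[2, -12], [1, -5]] from its characteristic
# polynomial lambda^2 + 3*lambda + 2 = 0, via the quadratic formula.
b, c = 3.0, 2.0
disc = math.sqrt(b * b - 4 * c)          # discriminant sqrt(9 - 8) = 1
roots = [(-b + disc) / 2, (-b - disc) / 2]
dominant = max(roots, key=abs)           # eigenvalue largest in |.|
print(roots, dominant)
```

Both roots are real here; `max(..., key=abs)` picks the one of largest absolute value, which is what the definition of dominance requires.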

SECTION 10.3 POWER METHOD FOR APPROXIMATING EIGENVALUES 587

The Power Method

Like the Jacobi and Gauss-Seidel methods, the power method for approximating eigenvalues is iterative. First assume that the matrix A has a dominant eigenvalue with corresponding dominant eigenvectors. Then choose an initial approximation x0 of one of the dominant eigenvectors of A. This initial approximation must be a nonzero vector in R^n. Finally, form the sequence given by

x1 = Ax0
x2 = Ax1 = A(Ax0) = A^2 x0
x3 = Ax2 = A(A^2 x0) = A^3 x0
.
.
.
xk = Ax(k-1) = A(A^(k-1) x0) = A^k x0.

For large powers of k, and by properly scaling this sequence, you will see that you obtain a good approximation of the dominant eigenvector of A. This procedure is illustrated in Example 2.

EXAMPLE 2  Approximating a Dominant Eigenvector by the Power Method

Complete six iterations of the power method to approximate a dominant eigenvector of

A = [2  -12]
    [1   -5].

Solution  Begin with an initial nonzero approximation of x0 = (1, 1). Then obtain the following approximations.

Iteration                    Scaled Approximation
x1 = Ax0 = (-10, -4)         -4(2.50, 1.00)
x2 = Ax1 = (28, 10)          10(2.80, 1.00)
x3 = Ax2 = (-64, -22)       -22(2.91, 1.00)
x4 = Ax3 = (136, 46)         46(2.96, 1.00)
x5 = Ax4 = (-280, -94)      -94(2.98, 1.00)
x6 = Ax5 = (568, 190)       190(2.99, 1.00)
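The six unscaled iterations of Example 2 can be reproduced with a few lines of code. This is an illustrative sketch; the helper name `mat_vec` is not from the text.

```python
# Unscaled power method for A = [[2, -12], [1, -5]] with x0 = (1, 1).
def mat_vec(A, x):
    # matrix-vector product, one dot product per row
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[2, -12], [1, -5]]
x = [1, 1]
iterates = []
for _ in range(6):
    x = mat_vec(A, x)
    iterates.append(x)
print(iterates)   # ends with [568, 190] = 190*(2.99, 1.00)
```

Each iterate is a scalar multiple of a vector ever closer to the dominant eigenvector (3, 1), even though the raw entries grow.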

588 CHAPTER 10 NUMERICAL METHODS

Note that the approximations in Example 2 appear to be approaching scalar multiples of (3, 1), which you know from Example 1 is a dominant eigenvector of the matrix

A = [2  -12]
    [1   -5].

In Example 2 the power method was used to approximate a dominant eigenvector of the matrix A. In that example you already knew that the dominant eigenvalue of A was λ = -2. For the sake of demonstration, however, assume that you do not know the dominant eigenvalue of A. The following theorem provides a formula for determining the eigenvalue corresponding to a given eigenvector. This theorem is credited to the English physicist John William Strutt, Lord Rayleigh (1842-1919).

Theorem 10.2  Determining an Eigenvalue from an Eigenvector
If x is an eigenvector of a matrix A, then its corresponding eigenvalue is given by
λ = (Ax · x)/(x · x).
This quotient is called the Rayleigh quotient.

Proof  Because x is an eigenvector of A, you know that Ax = λx and can write
(Ax · x)/(x · x) = (λx · x)/(x · x) = λ(x · x)/(x · x) = λ.

In cases for which the power method generates a good approximation of a dominant eigenvector, the Rayleigh quotient provides a correspondingly good approximation of the dominant eigenvalue. The use of the Rayleigh quotient is demonstrated in Example 3.

EXAMPLE 3  Approximating a Dominant Eigenvalue

Use the result of Example 2 to approximate the dominant eigenvalue of the matrix

A = [2  -12]
    [1   -5].

Solution  After the sixth iteration of the power method in Example 2, you obtained
x6 = (568, 190) = 190(2.99, 1.00).
With x = (2.99, 1.00) as the approximation of a dominant eigenvector of A, use the Rayleigh quotient to obtain an approximation of the dominant eigenvalue of A. First compute the product Ax.
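The Rayleigh quotient of Theorem 10.2 is a one-line computation once dot products are available. A minimal sketch, applied to the approximate eigenvector from Example 2 (the helper name `dot` is illustrative):

```python
# Rayleigh quotient  lambda = (Ax . x) / (x . x)  for the approximate
# eigenvector x = (2.99, 1.00) of A = [[2, -12], [1, -5]].
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[2, -12], [1, -5]]
x = [2.99, 1.00]
Ax = [dot(row, x) for row in A]      # (-6.02, -2.01)
lam = dot(Ax, x) / dot(x, x)         # close to the true eigenvalue -2
print(Ax, lam)
```

The result is about -2.01, a good approximation of the dominant eigenvalue λ = -2.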

SECTION 10.3 POWER METHOD FOR APPROXIMATING EIGENVALUES 589

Then, because
Ax = [2  -12](2.99, 1.00) = (-6.02, -2.01)
     [1   -5]
you can compute the Rayleigh quotient to be
λ = (Ax · x)/(x · x) = [(-6.02)(2.99) + (-2.01)(1.00)] / [(2.99)(2.99) + (1.00)(1.00)] = -20.01/9.94 ≈ -2.01,
which is a good approximation of the dominant eigenvalue λ = -2.

From Example 2 you can see that the power method tends to produce approximations with large entries. In practice it is best to scale down each approximation before proceeding to the next iteration. One way to accomplish this scaling is to determine the component of Ax_i that has the largest absolute value and multiply the vector Ax_i by the reciprocal of this component. The resulting vector will then have components whose absolute values are less than or equal to 1. (Other scaling techniques are possible. For examples, see Exercises 27 and 28.)

EXAMPLE 4  The Power Method with Scaling

Calculate seven iterations of the power method with scaling to approximate a dominant eigenvector of the matrix

A = [ 1  2  0]
    [-2  1  2]
    [ 1  3  1].

Use x0 = (1, 1, 1) as the initial approximation.

Solution  One iteration of the power method produces
Ax0 = (3, 1, 5),
and by scaling you obtain the approximation
x1 = (1/5)(3, 1, 5) = (0.60, 0.20, 1.00).

590 CHAPTER 10 NUMERICAL METHODS

A second iteration yields
Ax1 = (1.00, 1.00, 2.20)
and
x2 = (1/2.20)(1.00, 1.00, 2.20) = (0.45, 0.45, 1.00).
Continuing this process, you obtain the sequence of approximations shown in Table 10.6.

TABLE 10.6
x0               x1               x2               x3               x4               x5               x6               x7
(1.00,1.00,1.00) (0.60,0.20,1.00) (0.45,0.45,1.00) (0.48,0.55,1.00) (0.51,0.51,1.00) (0.50,0.49,1.00) (0.50,0.50,1.00) (0.50,0.50,1.00)

From Table 10.6 you can approximate a dominant eigenvector of A to be
x = (0.50, 0.50, 1.00).
Using the Rayleigh quotient, you can approximate the dominant eigenvalue of A to be λ = 3. (For this example you can check that the approximations of x and λ are exact.)

REMARK: Note that the scaling factors used to obtain the vectors in Table 10.6,
x1: 5.00,  x2: 2.20,  x3: 2.82,  x4: 3.13,  x5: 3.02,  x6: 2.99,  x7: 3.00,
are approaching the dominant eigenvalue λ = 3.

In Example 4 the power method with scaling converges to a dominant eigenvector. The following theorem states that a sufficient condition for convergence of the power method is that the matrix A be diagonalizable (and have a dominant eigenvalue).

Theorem 10.3  Convergence of the Power Method
If A is an n x n diagonalizable matrix with a dominant eigenvalue, then there exists a nonzero vector x0 such that the sequence of vectors given by
Ax0, A^2 x0, A^3 x0, A^4 x0, ..., A^k x0, ...
approaches a multiple of the dominant eigenvector of A.
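The scaling rule described above (divide by the component of largest absolute value) turns Example 4 into a short loop. A sketch, using the same matrix and initial vector as the example:

```python
# Power method with scaling for A = [[1, 2, 0], [-2, 1, 2], [1, 3, 1]].
def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2, 0], [-2, 1, 2], [1, 3, 1]]
x = [1.0, 1.0, 1.0]                # initial approximation x0
factor = None
for _ in range(7):
    y = mat_vec(A, x)
    factor = max(y, key=abs)       # component of largest absolute value
    x = [yi / factor for yi in y]  # rescale so max |component| is 1
print([round(v, 2) for v in x], round(factor, 2))
```

After seven iterations the vector is essentially (0.50, 0.50, 1.00) and the scale factor is essentially 3, matching Table 10.6 and the remark that the scaling factors approach the dominant eigenvalue.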

SECTION 10.3 POWER METHOD FOR APPROXIMATING EIGENVALUES 591

Proof  Because A is diagonalizable, you know from Theorem 7.5 that it has n linearly independent eigenvectors x1, x2, ..., xn with corresponding eigenvalues λ1, λ2, ..., λn. Assume that these eigenvalues are ordered so that λ1 is the dominant eigenvalue (with a corresponding eigenvector of x1). Because the n eigenvectors x1, x2, ..., xn are linearly independent, they must form a basis for R^n. For the initial approximation x0, choose a nonzero vector such that the linear combination
x0 = c1 x1 + c2 x2 + ... + cn xn
has a nonzero leading coefficient c1. (If c1 = 0, the power method may not converge, and a different x0 must be used as the initial approximation. See Exercises 21 and 22.) Now, multiplying both sides of this equation by A produces

Ax0 = A(c1 x1 + c2 x2 + ... + cn xn)
    = c1(Ax1) + c2(Ax2) + ... + cn(Axn)
    = c1 λ1 x1 + c2 λ2 x2 + ... + cn λn xn.

Repeated multiplication of both sides of this equation by A produces
A^k x0 = c1 λ1^k x1 + c2 λ2^k x2 + ... + cn λn^k xn,
which implies that
A^k x0 = λ1^k [c1 x1 + c2 (λ2/λ1)^k x2 + ... + cn (λn/λ1)^k xn].

Now, from the original assumption that λ1 is larger in absolute value than the other eigenvalues, it follows that each of the fractions
λ2/λ1, λ3/λ1, ..., λn/λ1
is less than 1 in absolute value. So each of the factors
(λ2/λ1)^k, (λ3/λ1)^k, ..., (λn/λ1)^k
must approach 0 as k approaches infinity. This implies that the approximation
A^k x0 ≈ λ1^k c1 x1,  c1 ≠ 0,
improves as k increases. Because x1 is a dominant eigenvector, it follows that any scalar multiple of x1 is also a dominant eigenvector, so A^k x0 approaches a multiple of the dominant eigenvector of A.

The proof of Theorem 10.3 provides some insight into the rate of convergence of the power method. That is, if the eigenvalues of A are ordered so that
|λ1| > |λ2| ≥ |λ3| ≥ ... ≥ |λn|,

592 CHAPTER 10 NUMERICAL METHODS

then the power method will converge quickly if |λ2|/|λ1| is small, and slowly if |λ2|/|λ1| is close to 1. This principle is illustrated in Example 5.

EXAMPLE 5  The Rate of Convergence of the Power Method

(a) The matrix
A = [4  5]
    [6  5]
has eigenvalues of λ1 = 10 and λ2 = -1. So the ratio |λ2|/|λ1| is 0.1. For this matrix, only four iterations are required to obtain successive approximations that agree when rounded to three significant digits. (See Table 10.7.)

TABLE 10.7
x0             x1             x2             x3             x4
(1.000,1.000)  (0.818,1.000)  (0.835,1.000)  (0.833,1.000)  (0.833,1.000)

(b) The matrix
A = [-4  10]
    [ 7   5]
has eigenvalues of λ1 = 10 and λ2 = -9. For this matrix, the ratio |λ2|/|λ1| is 0.9, and the power method does not produce successive approximations that agree to three significant digits until sixty-eight iterations have been performed, as shown in Table 10.8.

TABLE 10.8
x0             x1             x2             ...  x66            x67            x68
(1.000,1.000)  (0.500,1.000)  (0.941,1.000)  ...  (0.715,1.000)  (0.714,1.000)  (0.714,1.000)

In this section you have seen the use of the power method to approximate the dominant eigenvalue of a matrix. This method can be modified to approximate other eigenvalues through use of a procedure called deflation. Moreover, the power method is only one of several techniques that can be used to approximate the eigenvalues of a matrix. Another popular method is called the QR algorithm. This is the method used in most computer programs and calculators for finding eigenvalues and eigenvectors. The algorithm uses the QR factorization of the matrix, as presented in Chapter 5. Discussions of the deflation method and the QR algorithm can be found in most texts on numerical methods.
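The contrast in Example 5 can be verified numerically by counting iterations until two successive scaled approximations agree to three decimal places. The stopping rule and helper names here are illustrative choices, not the text's, so the exact counts may differ slightly from the sixty-eight quoted above, but the order-of-magnitude gap is robust.

```python
# Compare convergence rates for ratios |lambda2/lambda1| = 0.1 vs 0.9.
def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def iterations_until_agreement(A, max_iter=1000):
    x = [1.0, 1.0]
    prev = None
    for k in range(1, max_iter + 1):
        y = mat_vec(A, x)
        m = max(y, key=abs)
        x = [round(yi / m, 3) for yi in y]   # scaled, 3 decimal places
        if x == prev:                        # successive iterates agree
            return k
        prev = x
    return max_iter

fast = iterations_until_agreement([[4, 5], [6, 5]])    # eigenvalues 10, -1
slow = iterations_until_agreement([[-4, 10], [7, 5]])  # eigenvalues 10, -9
print(fast, slow)
```

The first matrix settles in a handful of iterations, the second needs dozens, in line with the error decaying like (|λ2|/|λ1|)^k.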

SECTION.3 EXERCISES 593 SECTION.3 EXERCISES In Exercises 6, use the techniques presented in Chapter 7 to find the eigenvalues of the matrix A. If A has a dominant eigenvalue, find a corresponding dominant eigenvector.. A. 4 5 3. A 4. 3 5. A 6. 3 In Exercises 7, use the Rayleigh quotient to compute the eigenvalue of A corresponding to the eigenvector x. 4 5 3 7. A 8. A 4, x 3 3, x 5 9. A 5 6 6 3. A 3 4 3 3, x 3 3 9 5, x 3 In Exercises 4, use the power method with scaling to approximate a dominant eigenvector of the matrix A. Start with x, and calculate five iterations. Then use x 5 to approximate the dominant eigenvalue of A.. A. A 7 4 6 3 3. A 4. A 8 In Exercises 5 8, use the power method with scaling to approximate a dominant eigenvector of the matrix A. Start with x,, and calculate four iterations. Then use x 4 to approximate the dominant eigenvalue of A. 5. A 6. 3 8 A 3 A 4 A 5 3 4 A 7. 8. A A 6 7 5 3 7 7 6 4 3 6 3 In Exercises 9 and, the matrix A does not have a dominant eigenvalue. Apply the power method with scaling, starting with x,,, and observe the results of the first four iterations. 9. A. A 3 6. Writing (a) Find the eigenvalues and corresponding eigenvectors of (b) Calculate two iterations of the power method with scaling, starting with x,. (c) Explain why the method does not seem to converge to a dominant eigenvector.. Writing Repeat Exercise using x,,, for the matrix 3. The matrix A has a dominant eigenvalue of implies that A x x. Observe that Ax x Apply five iterations of the power method (with scaling) on A to compute the eigenvalue of A with the smallest magnitude. 4. Repeat Exercise 3 for the matrix A A 3 A 3 5 3 4. 3... 5 6 3

594 CHAPTER NUMERICAL METHODS 5. (a) Compute the eigenvalues of A (b) Apply four iterations of the power method with scaling to each matrix in part (a), starting with x,. 5. (c) Compute the ratios for A and B. For which do you expect faster convergence? 6. Use the proof of Theorem.3 to show that AA k x A k x and B 3 4. for large values of k. That is, show that the scale factors obtained in the power method approach the dominant eigenvalue. In Exercises 7 and 8, apply four iterations of the power method (with scaling) to approximate the dominant eigenvalue of the matrix. After each iteration, scale the approximation by dividing by its length so that the resulting approximation will be a unit vector. 5 6 7. 8. A 7 A 6 4 3 8 4 9 4 6 5

594 CHAPTER 10 NUMERICAL METHODS

10.4 APPLICATIONS OF NUMERICAL METHODS

Applications of Gaussian Elimination with Pivoting

In Section 2.5 you used least squares regression analysis to find linear mathematical models that best fit a set of n points in the plane. This procedure can be extended to cover polynomial models of any degree as follows.

Regression Analysis for Polynomials
The least squares regression polynomial of degree m for the points (x1, y1), (x2, y2), ..., (xn, yn) is given by
y = a_m x^m + a_{m-1} x^{m-1} + ... + a2 x^2 + a1 x + a0,
where the coefficients are determined by the following system of m + 1 linear equations.

n a0        + (Σxi) a1      + (Σxi^2) a2    + ... + (Σxi^m) a_m       = Σyi
(Σxi) a0    + (Σxi^2) a1    + (Σxi^3) a2    + ... + (Σxi^(m+1)) a_m   = Σxi yi
(Σxi^2) a0  + (Σxi^3) a1    + (Σxi^4) a2    + ... + (Σxi^(m+2)) a_m   = Σxi^2 yi
  .
  .
  .
(Σxi^m) a0  + (Σxi^(m+1)) a1 + (Σxi^(m+2)) a2 + ... + (Σxi^(2m)) a_m = Σxi^m yi

Note that if m = 1 this system of equations reduces to
n a0     + (Σxi) a1   = Σyi
(Σxi) a0 + (Σxi^2) a1 = Σxi yi,

SECTION 10.4 APPLICATIONS OF NUMERICAL METHODS 595

which has a solution of
a1 = [n Σxi yi - (Σxi)(Σyi)] / [n Σxi^2 - (Σxi)^2]   and   a0 = (Σyi)/n - a1 (Σxi)/n.

Exercise 16 asks you to show that this formula is equivalent to the matrix formula for linear regression that was presented in Section 2.5. Example 1 illustrates the use of regression analysis to find a second-degree polynomial model.

EXAMPLE 1  Least Squares Regression Analysis

The world population in billions for the years between 1965 and 2000 is shown in Table 10.9. (Source: U.S. Census Bureau)

TABLE 10.9
Year        1965  1970  1975  1980  1985  1990  1995  2000
Population  3.36  3.72  4.10  4.46  4.86  5.28  5.69  6.08

Find the second-degree least squares regression polynomial for these data and use the resulting model to predict the world population for 2005 and 2010.

Solution  Begin by letting x = -4 represent 1965, x = -3 represent 1970, and so on. So the collection of points is given by
(-4, 3.36), (-3, 3.72), (-2, 4.10), (-1, 4.46), (0, 4.86), (1, 5.28), (2, 5.69), (3, 6.08),
which yields
n = 8,  Σxi = -4,  Σxi^2 = 44,  Σxi^3 = -64,  Σxi^4 = 452,
Σyi = 37.55,  Σxi yi = -2.36,  Σxi^2 yi = 190.86.

So the system of linear equations giving the coefficients of the quadratic model y = a2 x^2 + a1 x + a0 is

 8 a0 -  4 a1 +  44 a2 = 37.55
-4 a0 + 44 a1 -  64 a2 = -2.36
44 a0 - 64 a1 + 452 a2 = 190.86.

Gaussian elimination with pivoting on the matrix

[ 8   -4    44 |  37.55]
[-4   44   -64 |  -2.36]
[44  -64   452 | 190.86]

596 CHAPTER 10 NUMERICAL METHODS

produces

[1  -1.4545  10.2727 | 4.3377]
[0   1       -0.6000 | 0.3926]
[0   0        1      | 0.0045].

So by back substitution you find the solution to be
a2 ≈ 0.0045,  a1 ≈ 0.3953,  a0 ≈ 4.8667,
and the regression quadratic is
y = 0.0045x^2 + 0.3953x + 4.8667.

[Figure 10.1: the given points and the predicted points plotted against the quadratic model.]

Figure 10.1 compares this model with the given points. To predict the world population for 2005, let x = 4, obtaining
y = 0.0045(4)^2 + 0.3953(4) + 4.8667 ≈ 6.52 billion.
Similarly, the prediction for 2010 (x = 5) is
y = 0.0045(5)^2 + 0.3953(5) + 4.8667 ≈ 6.96 billion.

EXAMPLE 2  Least Squares Regression Analysis

Find the third-degree least squares regression polynomial
y = a3 x^3 + a2 x^2 + a1 x + a0
for the points
(0, 0), (1, 2), (2, 3), (3, 2), (4, 1), (5, 2), (6, 4).

Solution  For this set of points the linear system

n a0       + (Σxi) a1   + (Σxi^2) a2 + (Σxi^3) a3 = Σyi
(Σxi) a0   + (Σxi^2) a1 + (Σxi^3) a2 + (Σxi^4) a3 = Σxi yi
(Σxi^2) a0 + (Σxi^3) a1 + (Σxi^4) a2 + (Σxi^5) a3 = Σxi^2 yi
(Σxi^3) a0 + (Σxi^4) a1 + (Σxi^5) a2 + (Σxi^6) a3 = Σxi^3 yi

becomes

  7 a0 +   21 a1 +    91 a2 +    441 a3 =   14
 21 a0 +   91 a1 +   441 a2 +   2275 a3 =   52
 91 a0 +  441 a1 +  2275 a2 + 12,201 a3 =  242
441 a0 + 2275 a1 + 12,201 a2 + 67,171 a3 = 1258.
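Normal-equation systems like the ones in Examples 1 and 2 can be assembled and solved programmatically, using exactly the technique this chapter develops: Gaussian elimination with partial pivoting. The sketch below is illustrative (the function name and structure are mine, not the text's); it is shown fitting the Example 1 population data.

```python
# Least squares regression polynomial via the normal equations,
# solved with Gaussian elimination with partial pivoting.
def poly_fit(points, degree):
    n = degree + 1
    # Build the (degree+1) x (degree+1) normal-equation system:
    # row i, column j holds sum of x^(i+j); right side holds sum of y*x^i.
    M = [[sum(x ** (i + j) for x, _ in points) for j in range(n)]
         for i in range(n)]
    rhs = [sum(y * x ** i for x, y in points) for i in range(n)]
    for i in range(n):
        M[i].append(rhs[i])
    # Forward elimination with partial pivoting.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    # Back substitution.
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        s = M[i][n] - sum(M[i][j] * coeffs[j] for j in range(i + 1, n))
        coeffs[i] = s / M[i][i]
    return coeffs   # [a0, a1, ..., am]

# World-population data from Example 1 (x = -4 is 1965, ..., x = 3 is 2000).
pts = [(-4, 3.36), (-3, 3.72), (-2, 4.10), (-1, 4.46),
       (0, 4.86), (1, 5.28), (2, 5.69), (3, 6.08)]
a0, a1, a2 = poly_fit(pts, 2)
print(round(a0, 4), round(a1, 4), round(a2, 4))
```

The computed coefficients agree with the quadratic model of Example 1; calling `poly_fit(..., 3)` on the seven points of Example 2 reproduces the cubic case.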

SECTION 10.4 APPLICATIONS OF NUMERICAL METHODS 597

Using Gaussian elimination with pivoting on the matrix

[  7    21     91     441 |   14]
[ 21    91    441    2275 |   52]
[ 91   441   2275   12201 |  242]
[441  2275  12201   67171 | 1258]

produces

[1  5.1587  27.6667  152.3152 | 2.8526]
[0  1        8.5313   58.3482 | 0.6183]
[0  0        1         9.7714 | 0.1286]
[0  0        0         1      | 0.1667],

which implies that
a3 ≈ 0.1667,  a2 ≈ -1.5000,  a1 ≈ 3.6905,  a0 ≈ -0.0714.
So the cubic model is
y = 0.1667x^3 - 1.5000x^2 + 3.6905x - 0.0714.

[Figure 10.2: the cubic model plotted with the points (0, 0), (1, 2), (2, 3), (3, 2), (4, 1), (5, 2), (6, 4).]

Figure 10.2 compares this model with the given points.

Applications of the Gauss-Seidel Method

EXAMPLE 3  An Application to Probability

Figure 10.3 is a diagram of a maze used in a laboratory experiment. The experiment begins by placing a mouse at one of the ten interior intersections of the maze. Once the mouse emerges in the outer corridor, it cannot return to the maze. When the mouse is at an interior intersection, its choice of paths is assumed to be random. What is the probability that the mouse will emerge in the food corridor when it begins at the ith intersection?

[Figure 10.3: the maze, with interior intersections numbered 1 through 10 and the food corridor along the bottom.]

Solution  Let the probability of winning (getting food) by starting at the ith intersection be represented by p_i. Then form a linear equation involving p_i and the probabilities associated with the intersections bordering the ith intersection. For instance, at the first intersection the mouse has a probability of 1/4 of choosing the upper right path and losing, a probability of 1/4 of choosing the upper left path and losing, a probability of 1/4 of choosing the lower right path (at which point it has a probability of p3 of winning), and a probability of 1/4 of choosing the lower left path (at which point it has a probability of p2 of winning). So

p1 = (1/4)(0) + (1/4)(0) + (1/4)p2 + (1/4)p3.
     upper      upper      lower      lower
     right      left       left       right

598 CHAPTER 10 NUMERICAL METHODS

Using similar reasoning, the other nine probabilities can be represented by the following equations.

p2  = (1/5)p1 + (1/5)p3 + (1/5)p4 + (1/5)p5
p3  = (1/5)p1 + (1/5)p2 + (1/5)p5 + (1/5)p6
p4  = (1/5)p2 + (1/5)p5 + (1/5)p7 + (1/5)p8
p5  = (1/6)p2 + (1/6)p3 + (1/6)p4 + (1/6)p6 + (1/6)p8 + (1/6)p9
p6  = (1/5)p3 + (1/5)p5 + (1/5)p9 + (1/5)p10
p7  = (1/4)(1) + (1/4)p4 + (1/4)p8
p8  = (1/5)(1) + (1/5)p4 + (1/5)p5 + (1/5)p7 + (1/5)p9
p9  = (1/5)(1) + (1/5)p5 + (1/5)p6 + (1/5)p8 + (1/5)p10
p10 = (1/4)(1) + (1/4)p6 + (1/4)p9

Rewriting these equations in standard form produces the following system of ten linear equations in ten variables.

 4p1 -  p2 -  p3                                              = 0
 -p1 + 5p2 -  p3 -  p4 -  p5                                  = 0
 -p1 -  p2 + 5p3       -  p5 -  p6                            = 0
       -p2       + 5p4 -  p5       -  p7 -  p8                = 0
       -p2 -  p3 -  p4 + 6p5 -  p6       -  p8 -  p9          = 0
             -p3       -  p5 + 5p6             -  p9 -  p10   = 0
                   -p4             + 4p7 -  p8                = 1
                   -p4 -  p5       -  p7 + 5p8 -  p9          = 1
                         -p5 -  p6       -  p8 + 5p9 -  p10   = 1
                               -p6             -  p9 + 4p10   = 1

The augmented matrix for this system is

[ 4 -1 -1  0  0  0  0  0  0  0 | 0]
[-1  5 -1 -1 -1  0  0  0  0  0 | 0]
[-1 -1  5  0 -1 -1  0  0  0  0 | 0]
[ 0 -1  0  5 -1  0 -1 -1  0  0 | 0]
[ 0 -1 -1 -1  6 -1  0 -1 -1  0 | 0]
[ 0  0 -1  0 -1  5  0  0 -1 -1 | 0]
[ 0  0  0 -1  0  0  4 -1  0  0 | 1]
[ 0  0  0 -1 -1  0 -1  5 -1  0 | 1]
[ 0  0  0  0 -1 -1  0 -1  5 -1 | 1]
[ 0  0  0  0  0 -1  0  0 -1  4 | 1]
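Before carrying the iteration out by hand, the system can be solved with a short Gauss-Seidel routine. The sketch below encodes the adjacency structure read off from the probability equations above (the variable names and the 50-sweep cutoff are my choices, not the text's): each intersection's probability is the average, over all of its exits, of 1 for the food corridor, 0 for a losing corridor, and the neighboring intersection's probability otherwise.

```python
# Gauss-Seidel solution of the maze system.
# neighbors[i] lists the intersections adjacent to intersection i+1,
# exits[i] is the total number of paths out of it (the diagonal
# coefficient), and food[i] is 1 when the food corridor is adjacent.
neighbors = [
    [2, 3],              # 1
    [1, 3, 4, 5],        # 2
    [1, 2, 5, 6],        # 3
    [2, 5, 7, 8],        # 4
    [2, 3, 4, 6, 8, 9],  # 5
    [3, 5, 9, 10],       # 6
    [4, 8],              # 7  (also borders the food corridor)
    [4, 5, 7, 9],        # 8  (also borders the food corridor)
    [5, 6, 8, 10],       # 9  (also borders the food corridor)
    [6, 9],              # 10 (also borders the food corridor)
]
exits = [4, 5, 5, 5, 6, 5, 4, 5, 5, 4]
food = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]

p = [0.0] * 10                        # initial approximation
for _ in range(50):                   # Gauss-Seidel sweeps
    for i in range(10):
        p[i] = (food[i] + sum(p[j - 1] for j in neighbors[i])) / exits[i]
print([round(v, 3) for v in p])
```

Because the coefficient matrix is strictly diagonally dominant, the sweeps converge; the symmetry of the maze shows up in the answer (p2 = p3, p7 = p10, and so on).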

SECTION 10.4 APPLICATIONS OF NUMERICAL METHODS 599

Using the Gauss-Seidel method with an initial approximation of p1 = p2 = ... = p10 = 0 produces (after 18 iterations) an approximation of

p1 = 0.090,  p2 = 0.180
p3 = 0.180,  p4 = 0.298
p5 = 0.333,  p6 = 0.298
p7 = 0.455,  p8 = 0.522
p9 = 0.522,  p10 = 0.455.

The structure of the probability problem described in Example 3 is related to a technique called finite element analysis, which is used in many engineering problems. Note that the matrix developed in Example 3 has mostly zero entries. Such matrices are called sparse. For solving systems of equations with sparse coefficient matrices, the Jacobi and Gauss-Seidel methods are much more efficient than Gaussian elimination.

Applications of the Power Method

Section 7.4 introduced the idea of an age transition matrix as a model for population growth. Recall that this model was developed by grouping the population into n age classes of equal duration. So for a maximum life span of L years, the age classes are given by the following intervals.

[0, L/n),  [L/n, 2L/n),  ...,  [(n-1)L/n, L]
first age   second age           nth age
class       class                class

The number of population members in each age class is then represented by the age distribution vector

x = [x1]   number in first age class
    [x2]   number in second age class
    [ .]
    [ .]
    [xn]   number in nth age class

Over a period of L/n years, the probability that a member of the ith age class will survive to become a member of the (i + 1)th age class is given by p_i, where
0 ≤ p_i ≤ 1,  i = 1, 2, ..., n - 1.
The average number of offspring produced by a member of the ith age class is given by b_i, where
0 ≤ b_i,  i = 1, 2, ..., n.
These numbers can be written in matrix form as follows.

A = [b1  b2  b3  ...  b(n-1)  bn]
    [p1  0   0   ...  0       0 ]
    [0   p2  0   ...  0       0 ]
    [.                .         ]
    [0   0   0   ...  p(n-1)  0 ]

600 CHAPTER 10 NUMERICAL METHODS

Multiplying this age transition matrix by the age distribution vector for a given period of time produces the age distribution vector for the next period of time. That is,
Ax_i = x_{i+1}.
In Section 7.4 you saw that the growth pattern for a population is stable if the same percentage of the total population is in each age class each year. That is,
Ax_i = x_{i+1} = λx_i.
For populations with many age classes, the solution to this eigenvalue problem can be found with the power method, as illustrated in Example 4.

EXAMPLE 4  A Population Growth Model

Assume that a population of human females has the following characteristics.

Age Class        Average Number of Female      Probability of Surviving
(in years)       Children During Ten-Year Period   to Next Age Class
 0 ≤ age < 10        0.000                         0.985
10 ≤ age < 20        0.174                         0.996
20 ≤ age < 30        0.782                         0.994
30 ≤ age < 40        0.263                         0.990
40 ≤ age < 50        0.022                         0.975
50 ≤ age < 60        0.000                         0.940
60 ≤ age < 70        0.000                         0.866
70 ≤ age < 80        0.000                         0.680
80 ≤ age < 90        0.000                         0.361
90 ≤ age < 100       0.000                         0.000

Find a stable age distribution for this population.

Solution  The age transition matrix for this population is the 10 x 10 matrix

A = [0.000  0.174  0.782  0.263  0.022  0  0  0  0  0]
    [0.985  0      0      ...                        0]
    [0      0.996  0      ...                        0]
    [             . . .                               ]
    [0      ...                          0.361       0],

whose first row holds the birth rates and whose subdiagonal holds the survival probabilities 0.985, 0.996, 0.994, 0.990, 0.975, 0.940, 0.866, 0.680, 0.361, with all other entries 0.

SECTION 10.4 APPLICATIONS OF NUMERICAL METHODS 601

To apply the power method with scaling to find an eigenvector for this matrix, use an initial approximation of x0 = (1, 1, 1, 1, 1, 1, 1, 1, 1, 1). The following is an approximation for an eigenvector of A, with the percentage of each age class in the total population.

Eigenvector x   Age Class        Percentage in Age Class
1.000            0 ≤ age < 10       15.27
0.925           10 ≤ age < 20       14.13
0.864           20 ≤ age < 30       13.20
0.806           30 ≤ age < 40       12.31
0.749           40 ≤ age < 50       11.44
0.686           50 ≤ age < 60       10.48
0.605           60 ≤ age < 70        9.24
0.492           70 ≤ age < 80        7.51
0.314           80 ≤ age < 90        4.80
0.106           90 ≤ age < 100       1.62

The eigenvalue corresponding to the eigenvector x in Example 4 is λ ≈ 1.065. That is,

Ax ≈ (1.065, 0.985, 0.921, 0.859, 0.798, 0.730, 0.645, 0.524, 0.334, 0.113)
   ≈ 1.065 (1.000, 0.925, 0.864, 0.806, 0.749, 0.686, 0.605, 0.492, 0.314, 0.106).

This means that the population in Example 4 increases by 6.5% every ten years.

REMARK: Should you try duplicating the results of Example 4, you would notice that the convergence of the power method for this problem is very slow. The reason is that the dominant eigenvalue of λ ≈ 1.065 is only slightly larger in absolute value than the next largest eigenvalue.
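The Example 4 computation can be sketched directly, exploiting the special shape of an age transition matrix: the first component of Ax is a dot product with the birth rates, and every other component is a single survival probability times the entry above it. The loop count of 1000 is my choice, generous because, as the remark notes, convergence is slow here.

```python
# Power method with scaling applied to the age transition matrix of
# Example 4: birth rates b on the first row, survival rates p on the
# subdiagonal, zeros elsewhere.
b = [0.000, 0.174, 0.782, 0.263, 0.022, 0.0, 0.0, 0.0, 0.0, 0.0]
p = [0.985, 0.996, 0.994, 0.990, 0.975, 0.940, 0.866, 0.680, 0.361]

x = [1.0] * 10                     # initial approximation x0
lam = 0.0
for _ in range(1000):              # convergence is slow for this matrix
    y = [sum(bi * xi for bi, xi in zip(b, x))]   # births into class 1
    y += [p[i] * x[i] for i in range(9)]         # survivors move up
    lam = max(y, key=abs)          # scale factor -> dominant eigenvalue
    x = [yi / lam for yi in y]
print(round(lam, 3), [round(v, 3) for v in x])
```

The scale factor settles near 1.065 and the scaled vector near (1.000, 0.925, 0.864, ...), matching the stable age distribution and the 6.5% growth per ten-year period.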

6 CHAPTER NUMERICAL METHODS Applications of Gaussian Elimination with Pivoting In Exercises 4, find the second-degree least squares regression polynomial for the given data. Then graphically compare the model to the given points... 3. 4. In Exercises 5 8, find the third-degree least squares regression polynomial for the given data. Then graphically compare the model to the given points. 5. 6. 7. 8. SECTION.4 9. Find the second-degree least squares regression polynomial for the points,, Then use the results to approximate cos 4. Compare the approximation with the exact value.. Find the third-degree least squares regression polynomial for the points 4,, 3,, 3, 3, EXERCISES,,,,,,,, 3,, 4,,,,, 3,, 4,, 5, 4,,,,, 6,, 3,,, 3,,,,, 3,, 4,, 5, 4,,,,, 4, 3,, 4,, 5,,,, 4, 3, 4, 5,, 6, 3, 4,,,,,,,, 5 7,, 3,,,,, 3, 4, 6,,,, Then use the result to approximate tan 6. Compare the approximation with the exact value.. The number of minutes a scuba diver can stay at a particular depth without acquiring decompression sickness is shown in the table. (Source: United States Navy s Standard Air Decompression Tables) Depth (in feet) 35 4 5 6 7 Time (in minutes) 3 6 5 Depth (in feet) 8 9 Time (in minutes) 4 3 5 3,,,. 3, 3, 4,. (a) Find the least squares regression line for these data. (b) Find the second-degree least squares regression polynomial for these data. (c) Sketch the graphs of the models found in parts (a) and (b). (d) Use the models found in parts (a) and (b) to approximate the maximum number of minutes a diver should stay at a depth of feet. ( The value given in the Navy s tables is 5 minutes.). The life expectancy for additional years of life for females in the United States as of 998 is shown in the table. (Source: U.S. Census Bureau) Current Age 3 4 Life Expectancy 7.6 6.8 5. 4.4 Current Age 5 6 7 8 Life Expectancy 3. 3.3 5.6 9. (a) Find the second-degree least squares regression polynomial for these data. 
(b) Use the result of part (a) to predict the life expectancy of a newborn female and a female of age years.
13. Total sales in billions of dollars of cellular phones in the United States from 1992 to 1999 are shown in the table. (Source: Electronic Market Data Book)
Year   1992  1993  1994  1995  1996  1997  1998  1999
Sales  .5    .6    .8    .57   .66   .75   .78   .8
(a) Find the second-degree least squares regression polynomial for the data.
(b) Use the result of part (a) to predict the total cellular phone sales in 2005 and 2010.
(c) Are your predictions from part (b) realistic? Explain.
14. Total new domestic truck unit sales in hundreds of thousands in the United States from 1993 to 2000 are shown in the table. (Source: Ward's Auto Info Bank)
Year    1993  1994  1995  1996  1997  1998  1999  2000
Trucks  5.9   6.    6.6   6.48  6.63  7.5   7.9   8.9
(a) Find the second-degree least squares regression polynomial for the data.