Chapter 2: Solutions of Equations of One Variable

2.1 Bisection Method

In this chapter we consider one of the most basic problems of numerical approximation, the root-finding problem. This process involves finding a root, or solution, of an equation of the form f(x) = 0, for a given function f. A root of this equation is also called a zero of the function f.

Bisection Technique

The first technique, based on the Intermediate Value Theorem, is called the Bisection, or Binary-search, method. Suppose f is a continuous function defined on the interval [a, b], with f(a) and f(b) of opposite sign. The Intermediate Value Theorem implies that a number p exists in (a, b) with f(p) = 0. Although the procedure will work when there is more than one root in the interval (a, b), we assume for simplicity that the root in this interval is unique.

The method calls for a repeated halving (or bisecting) of subintervals of [a, b] and, at each step, locating the half containing p. To begin, set a_1 = a and b_1 = b, and let p_1 be the midpoint of [a, b]; that is,

    p_1 = a_1 + (b_1 - a_1)/2 = (a_1 + b_1)/2.

- If f(p_1) = 0, then p = p_1, and we are done.
- If f(p_1) ≠ 0, then f(p_1) has the same sign as either f(a_1) or f(b_1).
  - If f(p_1) · f(a_1) > 0 (f(p_1) and f(a_1) have the same sign), then p ∈ (p_1, b_1). Set a_2 = p_1 and b_2 = b_1.
  - If f(p_1) · f(a_1) < 0 (f(p_1) and f(a_1) have opposite signs), then p ∈ (a_1, p_1). Set a_2 = a_1 and b_2 = p_1.

Then reapply the process to the interval [a_2, b_2]. This produces the method described in Algorithm 2.1.
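The halving process described above can be sketched in Python (a minimal sketch, not Algorithm 2.1 verbatim; the names `bisection`, `tol`, and `max_iter` are our own):

```python
def bisection(f, a, b, tol=1e-8, max_iter=100):
    """Approximate a root of f in [a, b] by repeated halving.

    Assumes f is continuous with f(a) and f(b) of opposite sign, so the
    Intermediate Value Theorem guarantees a root in (a, b).
    """
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    p = a + (b - a) / 2
    for _ in range(max_iter):
        p = a + (b - a) / 2          # midpoint of the current interval
        fp = f(p)
        if fp == 0 or (b - a) / 2 < tol:
            break                    # exact root, or interval small enough
        if fa * fp > 0:              # same sign as f(a): root lies in (p, b)
            a, fa = p, fp
        else:                        # opposite signs: root lies in (a, p)
            b = p
    return p
```

For example, `bisection(lambda x: x**3 + 4*x**2 - 10, 1, 2)` returns a value near 1.36523, the root discussed later in this section.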
Example 1: Use the Bisection method to find p_3 for f(x) = √x - cos x on [0, 1].

Solution: Using the Bisection method gives a_1 = 0 and b_1 = 1, so

    f(a_1) = f(0) = -1  and  f(b_1) = f(1) = 1 - cos(1) ≈ 0.4597.

We have p_1 = (a_1 + b_1)/2 = 1/2 and f(p_1) = f(1/2) ≈ -0.17048 < 0.

Since f(a_1) < 0 and f(p_1) < 0, we have f(a_1) · f(p_1) > 0, so set a_2 = p_1 = 0.5 and b_2 = b_1 = 1. Thus f(a_2) = f(p_1) ≈ -0.17048 < 0 and f(b_2) = f(1) ≈ 0.4597 > 0, and then p_2 = (a_2 + b_2)/2 = 0.75.

Since f(p_2) = f(0.75) ≈ 0.13434 > 0, we have f(a_2) · f(p_2) < 0, so set a_3 = a_2 = 0.5 and b_3 = p_2 = 0.75, so that

    p_3 = (0.5 + 0.75)/2 = 0.625.
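The three steps of Example 1 can be reproduced with a short loop (a sketch assuming the function is f(x) = √x - cos x, which is consistent with the numerical values f(0) = -1, f(1) ≈ 0.4597, and f(0.5) ≈ -0.17048 above):

```python
import math

def f(x):
    return math.sqrt(x) - math.cos(x)

a, b = 0.0, 1.0
midpoints = []
for n in range(1, 4):
    p = (a + b) / 2
    midpoints.append(p)
    if f(a) * f(p) > 0:   # f(a) and f(p) share a sign: root in (p, b)
        a = p
    else:                 # opposite signs: root in (a, p)
        b = p
print(midpoints)          # [0.5, 0.75, 0.625]
```

The printed midpoints p_1 = 0.5, p_2 = 0.75, p_3 = 0.625 agree with the hand computation.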
Algorithm 2.1 (Bisection):

    INPUT   endpoints a, b; tolerance TOL; maximum number of iterations N_0.
    OUTPUT  approximate solution p, or message of failure.
    Step 1  Set i = 1; FA = f(a).
    Step 2  While i ≤ N_0 do Steps 3-6.
    Step 3      Set p = a + (b - a)/2; FP = f(p).
    Step 4      If FP = 0 or (b - a)/2 < TOL then OUTPUT (p); STOP.
    Step 5      Set i = i + 1.
    Step 6      If FA · FP > 0 then set a = p and FA = FP; else set b = p.
    Step 7  OUTPUT ('Method failed after N_0 iterations'); STOP.

Other stopping procedures can be applied in Step 4 of Algorithm 2.1 or in any of the iterative techniques in this chapter. For example, we can select a tolerance ε > 0 and generate p_1, ..., p_N until one of the following conditions is met:

    |p_N - p_(N-1)| < ε,                       (2.1)
    |p_N - p_(N-1)| / |p_N| < ε,  p_N ≠ 0,     (2.2)
    |f(p_N)| < ε.                              (2.3)
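The three criteria can be written as small predicates (a sketch; the function names are our own, and the numbering follows conditions (2.1)-(2.3) referenced later in the section):

```python
def stop_absolute(p_new, p_old, eps):
    """Criterion (2.1): |p_N - p_(N-1)| < eps."""
    return abs(p_new - p_old) < eps

def stop_relative(p_new, p_old, eps):
    """Criterion (2.2): |p_N - p_(N-1)| / |p_N| < eps, requiring p_N != 0."""
    return p_new != 0 and abs(p_new - p_old) / abs(p_new) < eps

def stop_residual(f, p_new, eps):
    """Criterion (2.3): |f(p_N)| < eps."""
    return abs(f(p_new)) < eps
```

Any of these can be tested in Step 4 in place of the interval-length check; the discussion below explains why each can mislead.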
Difficulties can arise using any of these stopping criteria. For example:

1) There are sequences {p_1, p_2, ...} with the property that the differences p_n - p_(n-1) converge to zero while the sequence itself diverges. (See the following example.)

Example (Ex. 17): Define a sequence {p_n} by p_n = Σ_(k=1..n) 1/k. Show that lim_(n→∞) (p_n - p_(n-1)) = 0, even though the sequence {p_n} diverges.

Solution: Since p_n - p_(n-1) = 1/n, we have lim_(n→∞) (p_n - p_(n-1)) = lim_(n→∞) 1/n = 0. However, p_n is the nth partial sum of the divergent harmonic series Σ_(k=1..∞) 1/k.

The harmonic series is the classic example of a series whose terms go to zero, but not rapidly enough to produce a convergent series. There are many proofs of the divergence of this series; any calculus text should give at least two. One simply analyzes the partial sums of the series, and another is based on the Integral Test.

The point of this problem is not the fact that this particular sequence diverges; it is that a test for an approximate solution to a root based on the condition that |p_n - p_(n-1)| is small should always be suspect. Consecutive terms of a sequence might be close to each other, but not sufficiently close to the actual solution you are seeking.

2) It is also possible for f(p_n) to be close to zero while p_n differs significantly from p. (See the following example.)

Example (Ex. 16): Let f(x) = (x - 1)^10, p = 1, and p_n = 1 + 1/n. Show that |f(p_n)| < 10^-3 whenever n > 1, but that |p - p_n| < 10^-3 requires n > 1000.

Solution: For n > 1,

    |f(p_n)| = |1 + 1/n - 1|^10 = 1/n^10 ≤ 1/2^10 ≈ 0.00098 < 10^-3,

but |p - p_n| = 1/n < 10^-3 only when n > 1000.

Without additional knowledge about f or p, Inequality (2.2) is the best stopping criterion to apply because it comes closest to testing relative error.

When using a computer to generate approximations, it is good practice to set an upper bound on the number of iterations. This eliminates the possibility of entering an infinite loop, a situation that can arise when the sequence diverges (and also when the program is incorrectly coded). This was done in Step 2 of Algorithm 2.1, where the bound N_0 was set and the procedure terminated if i > N_0.

Note that to start the Bisection Algorithm, an interval [a, b] must be found with f(a) · f(b) < 0. At each step the length of the interval known to contain a zero of f is reduced by a factor of 2; hence it is advantageous to choose the initial interval [a, b] as small as possible. For example, if f(x) = 2x^3 - x^2 + x - 1, we have both f(-4) · f(4) < 0 and f(0) · f(1) < 0, so the Bisection Algorithm could be used on [-4, 4] or on [0, 1]. Starting the Bisection Algorithm on [0, 1] instead of [-4, 4] will reduce by 3 the number of iterations required to achieve a specified accuracy.
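Both failure modes above are easy to demonstrate numerically (a sketch of Exercises 17 and 16):

```python
# Failure mode 1 (Ex. 17): the differences p_n - p_(n-1) = 1/n shrink to
# zero, yet the harmonic partial sums grow without bound.
p = 0.0
for n in range(1, 10001):
    p += 1.0 / n                 # p_n = sum of 1/k for k = 1..n
last_diff = 1.0 / 10000          # p_10000 - p_9999
print(last_diff, p)              # difference is tiny, but p keeps growing

# Failure mode 2 (Ex. 16): |f(p_n)| is small long before p_n is close to p.
f = lambda x: (x - 1) ** 10      # root p = 1
p2 = 1 + 1 / 2                   # p_n with n = 2
print(abs(f(p2)) < 1e-3)         # True: (1/2)**10 ≈ 0.00098 < 10**-3
print(abs(p2 - 1) < 1e-3)        # False: 1/n < 10**-3 only for n > 1000
```

The first loop shows consecutive terms within 10^-4 of each other even though the sequence diverges; the second shows the residual test (2.3) passing while the error is still 0.5.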
The following example illustrates the Bisection Algorithm. The iteration in this example is terminated when a bound for the relative error is less than 0.0001. This is ensured by having

    |p - p_n| / |p| < (b_n - a_n) / min{|a_n|, |b_n|} < 10^-4.

Example: (Example 1, p. 50) Show that f(x) = x^3 + 4x^2 - 10 = 0 has a root in [1, 2], and use the Bisection method to determine an approximation to the root that is accurate to at least within 10^-4.

Solution: Since f(x) = x^3 + 4x^2 - 10 is continuous on [1, 2], and f(1) = -5 < 0 and f(2) = 14 > 0, the Intermediate Value Theorem ensures that this function has a root in [1, 2].

Using the Bisection method: For the first iteration, a_1 = 1 and b_1 = 2; set p_1 = (1 + 2)/2 = 1.5, so f(p_1) = f(1.5) = 2.375 > 0. Since f(a_1) · f(p_1) < 0, set b_2 = p_1 = 1.5; this indicates that we should select the interval [1, 1.5] for the second iteration.

Set p_2 = (1 + 1.5)/2 = 1.25, so f(p_2) = f(1.25) = -1.796875 < 0. Since f(a_2) · f(p_2) > 0, set a_3 = p_2 = 1.25; this indicates that we should select the interval [1.25, 1.5] for the third iteration. Set p_3 = (1.25 + 1.5)/2 = 1.375.

Continuing in this manner gives the values in Table 2.1. After 13 iterations, p_13 = 1.365112305 approximates the root p with an error

    |p - p_13| < |b_14 - a_14| = |1.365234375 - 1.365112305| = 0.000122070.

Since |a_14| < |p|, we have

    |p - p_13| / |p| < |b_14 - a_14| / |a_14| ≤ 9.0 × 10^-5,

so the approximation is correct to at least within 10^-4. The correct value of p to nine decimal places is p = 1.365230013. Note that p_9 is closer to p than is the final approximation p_13. You might suspect this is true because |f(p_9)| < |f(p_13)|, but we cannot be sure of this unless the true answer is known.

Table 2.1: (iteration values n, a_n, b_n, p_n, and f(p_n) for n = 1, ..., 13; not reproduced here)

-------------------------------------------------------------------------------------------------------
The following is not needed:
This is because |p - p_13| ≤ |b_14 - a_14|. For the 14th iteration, f(p_13) ≈ -0.00194 < 0, so f(a_13) · f(p_13) > 0; then set a_14 = p_13 = 1.365112305 and let b_14 = b_13 = 1.365234375.
-------------------------------------------------------------------------------------------------------
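The thirteen iterations can be reproduced directly (a sketch assuming the function in Example 1, p. 50 is f(x) = x^3 + 4x^2 - 10, which is consistent with p_13 = 1.365112305 above):

```python
f = lambda x: x**3 + 4*x**2 - 10

a, b = 1.0, 2.0
for n in range(1, 14):           # 13 bisection steps on [1, 2]
    p = a + (b - a) / 2
    if f(a) * f(p) < 0:          # opposite signs: root in (a, p)
        b = p
    else:                        # same sign: root in (p, b)
        a = p
print(p)                         # p_13 ≈ 1.365112305
```

The final midpoint agrees with p_13, and its distance from the true root 1.365230013 is about 1.2 × 10^-4, matching the error bound |b_14 - a_14| above.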
The following theorem gives a bound for the approximation error (this bound might be quite conservative).

Theorem 2.1: Suppose that f ∈ C[a, b] and f(a) · f(b) < 0. The Bisection method generates a sequence {p_n} approximating a zero p of f with

    |p_n - p| ≤ (b - a)/2^n,  for n ≥ 1.

In the previous example, this bound applied to the problem ensures only that

    |p - p_9| ≤ (2 - 1)/2^9 ≈ 2 × 10^-3,

but the actual error is much smaller.

Example: (Example 2, p. 52) Determine the number of iterations necessary to solve f(x) = x^3 + 4x^2 - 10 = 0 with accuracy 10^-3, using a_1 = 1 and b_1 = 2.

Solution: We will use logarithms to find an integer N that satisfies

    |p_N - p| ≤ (b - a)/2^N = 2^-N < 10^-3.
Use base-10 logarithms because the tolerance is given as a power of 10 (any base will work). Since 2^-N < 10^-3 implies -N log_10 2 < -3, we have

    N > 3 / log_10 2 ≈ 9.96.

Hence, ten iterations will ensure an approximation accurate to within 10^-3. Note: the actual error is much smaller.

Using Maple: To use Maple with the Bisection method, load the NumericalAnalysis package with the command

    with(Student[NumericalAnalysis])

which gives access to the procedures in the package. Define the function with

    f := x^3 + 4*x^2 - 10

and use

    Bisection(f, x = [1, 2], tolerance = 10^(-2))

Maple returns

    1.363281250

Note that the value that is output is the same as p_8 in Table 2.1. The sequence of bisection intervals can be output with the command

    Bisection(f, x = [1, 2], tolerance = 10^(-2), output = sequence)

and Maple returns the intervals containing the solution together with the solution.
The stopping criterion can also be based on relative error by choosing the option

    stoppingcriterion = relative

Now Maple returns the corresponding approximation. The option output = plot, given in

    Bisection(f, x = [1, 2], tolerance = 10^(-2), output = plot)

produces a plot of f on the bisection intervals. (Figure: plot of the bisection iterations on [1, 2]; not reproduced here.)
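Returning to Example 2, the required iteration count can be computed for any interval and tolerance (a sketch; `steps_needed` is our own name, and it uses base-2 logarithms where the text uses base 10, which is equivalent):

```python
import math

def steps_needed(a, b, tol):
    """Smallest N with (b - a)/2**N < tol, from the Theorem 2.1 bound."""
    return math.floor(math.log2((b - a) / tol)) + 1

print(steps_needed(1, 2, 1e-3))   # 10, as in Example 2
```

The same function confirms the earlier remark about f(x) = 2x^3 - x^2 + x - 1: starting on [0, 1] instead of [-4, 4] saves exactly 3 iterations, since the interval is 2^3 times shorter.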