Chapter 3: Root Finding. September 26, 2005


2 Outline
1 Root Finding
    The Bisection Method
    Newton's Method: Derivation and Examples
    How To Stop Newton's Method
    Application: Division Using Newton's Method
    The Newton Error Formula
    The Secant Method: Derivation and Examples
    Fixed-Point Iteration

3 Calculating the roots of an arbitrary equation f(x) = 0 is a common problem in applied mathematics. We will explore some simple numerical methods (algorithms) for solving this equation and possible difficulties that may arise.

4 Example. If we borrow L dollars at an annual interest rate of r for a period of m years, the size of the monthly payment M is given by the equation L = (12M/r)(1 − (1 + r/12)^(−12m)). Suppose we want to take out a mortgage of $150,000, we can afford a monthly payment of $1,000, and the amortization period is 20 years; what is the maximum interest rate we can afford? Then 150,000 = (12 · 1,000/r)(1 − (1 + r/12)^(−240)), and we need to solve this equation for r. For that, we need a numerical method.

5 This is a very simple method, based on the continuity of the function and the Intermediate Value Theorem. If f(x) is continuous on [a, b] and f(a)f(b) < 0, then f(x) changes sign on [a, b], so it must be the case that f(x) = 0 for at least one x ∈ [a, b]. The simplest numerical procedure for finding a root is to repeatedly halve the interval [a, b], keeping the half on which f(x) changes sign.

6 Suppose c is the midpoint of [a, b]: c = (a + b)/2. Then we have three possibilities for the product f(a)f(c):
1. f(a)f(c) < 0. Then f changes sign between a and c, and there is a root in that interval.
2. f(a)f(c) = 0. In that case, since we are assuming f(a) ≠ 0, we must have f(c) = 0 and we have found a root.
3. f(a)f(c) > 0, so a root must be in the other half, [c, b].

7 Example. Find the largest root of the equation x^6 − x − 1 = 0, accurate to within ε = 0.001. [Figure: f(x) = x^6 − x − 1.]

8 So, the root we are looking for is in the interval [1, 2]: a = 1, b = 2, f(1) = −1, f(2) = 61. The midpoint is c = (1 + 2)/2 = 3/2 = 1.5, and f(1.5) = 8.8906. Since f(c) > 0, we know that the root is between a and c. Halve the interval by taking a := 1, b := c = 1.5. Now c = (1 + 1.5)/2 = 1.25, and f(1.25) = 1.5647.

9 Since f(a)f(c) < 0, we halve the interval in the following way: a := 1, b := c = 1.25. Again, c = (1 + 1.25)/2 = 1.125, and f(c) = f(1.125) = −0.0977. Since f(a)f(c) > 0, the half of the interval we use in the next step is a := c = 1.125, b := 1.25. We continue in the same fashion...

10 [Table: bisection iterates a, b, c, b − c and f(c), continued until b − c < ε.] We will take x = 1.1337890625 as the approximation of the root.

11 How did we know when to stop? Answer: If the half-length b − c of the interval [a, b] in which the approximation lies is less than ε, then c is within distance ε of the actual root, so we can take x ≈ c.

12 Bisection Method (Rough Algorithm). Given an initial interval [a_0, b_0] = [a, b], set k = 0 and proceed as follows:
1. Compute c_{k+1} = a_k + (1/2)(b_k − a_k).
2. If f(c_{k+1})f(a_k) < 0, set a_{k+1} = a_k, b_{k+1} = c_{k+1}.
3. If f(c_{k+1})f(b_k) < 0, set b_{k+1} = b_k, a_{k+1} = c_{k+1}.
4. Update k and go to Step 1.

13 Theorem (Convergence and Error). Let [a_0, b_0] = [a, b] be the initial interval, with f(a)f(b) < 0. Define the approximate root as x_n = c_n = (a_{n−1} + b_{n−1})/2. Then there exists a root α ∈ [a, b] such that |α − x_n| ≤ (1/2)^n (b − a). Moreover, to achieve the accuracy |α − x_n| ≤ ε, it suffices to take n ≥ (log(b − a) − log ε)/log 2.

14 Obviously, b_n − a_n = (1/2)(b_{n−1} − a_{n−1}), so by induction b_n − a_n = (1/2)^n (b_0 − a_0) = (1/2)^n (b − a). Then |α − x_n| ≤ (1/2)(b_{n−1} − a_{n−1}) = (1/2)(1/2)^{n−1}(b_0 − a_0) = (1/2)^n (b_0 − a_0).

15 Since we want |α − x_n| ≤ ε and we know |α − x_n| ≤ (1/2)^n (b − a), it suffices to require (1/2)^n (b − a) ≤ ε. If we solve this latter inequality for n, we get n ≥ (log(b − a) − log ε)/log 2.

16 Example. In our previous example, a = 1, b = 2, ε = 0.001. Then we need the following number of steps to reach the desired accuracy: n ≥ (log(2 − 1) − log(0.001))/log 2 = −log(0.001)/log 2 ≈ 9.97, so we take n = 10.

17-18 Bisection Method (pseudocode)
input a, b, eps
external f
fa = f(a)
fb = f(b)
if fa*fb > 0 then stop
n = fix((log(b-a) - log(eps))/log(2)) + 1
for i = 1 to n do
    c = a + 0.5*(b - a)
    fc = f(c)
    if fa*fc < 0 then
        b = c
        fb = fc
    else if fa*fc > 0 then
        a = c
        fa = fc
    else
        alpha = c
        return
    endif
endfor
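The pseudocode above translates almost line for line into Python. The following sketch is illustrative (the function name bisect and the closing example are not part of the original notes); it reproduces the example from slides 7-10:

import math

def bisect(f, a, b, eps):
    # Bisection Method: assumes f is continuous on [a, b] and f(a)*f(b) < 0.
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    # number of halvings needed so that the half-length drops below eps
    n = int((math.log(b - a) - math.log(eps)) / math.log(2)) + 1
    for _ in range(n):
        c = a + 0.5 * (b - a)
        fc = f(c)
        if fa * fc < 0:          # the root lies in [a, c]
            b, fb = c, fc
        elif fa * fc > 0:        # the root lies in [c, b]
            a, fa = c, fc
        else:                    # f(c) = 0: c is an exact root
            return c
    return c

# Largest root of x^6 - x - 1 = 0 on [1, 2] with eps = 0.001:
print(bisect(lambda x: x**6 - x - 1, 1.0, 2.0, 1e-3))   # 1.1337890625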

19 The Bisection Method is a global method: it always converges to a root, no matter how far from the root we start (assuming f(a)f(b) < 0). Disadvantages:
1. it is a relatively slow method (compared to the other methods we will study);
2. it cannot determine roots at which the function is tangent to the x-axis without changing sign (example: the solution of x^2 = 0).

20 Newton's Method: the classic algorithm for solving equations of the form f(x) = 0. It is much faster than the Bisection Method, but it has its own limitations (we will investigate them later).

21 Motivation. Let us start by considering the solution of x² − 2 = 0. In that case, f(x) = x² − 2. Clearly, the solution is α = √2. We will try to construct a sequence x_n of numbers such that lim_{n→∞} x_n = α = √2.

22 [Figure: y = x² − 2.] For x_0 we can take an arbitrary value (a good idea is to pick something not too far from the actual solution). Next, find where the tangent line at (x_0, f(x_0)) intersects the x-axis, and call that point x_1.

23 If we take, e.g., x_0 = 2, then the tangent line at that point has slope f'(2) = 4 and satisfies f'(2) = (y − f(2))/(x − 2), which yields 4 = (y − 2)/(x − 2). To find the intersection with the x-axis, set y = 0: 4 = −2/(x − 2). We get x = 3/2, so we set x_1 = 3/2. In the next iteration, we construct the tangent line at (x_1, f(x_1)); its intersection with the x-axis will be x_2, and so on.

24 We want a general formula for x_{n+1} in terms of x_n. The equation of the tangent line at (x_n, f(x_n)) is f'(x_n) = (y − f(x_n))/(x − x_n). To find x_{n+1}, we set y = 0, since x_{n+1} is the x-intercept of that tangent line: f'(x_n) = −f(x_n)/(x − x_n). Solving this equation for x, we get x_{n+1} := x = x_n − f(x_n)/f'(x_n). Therefore, the formula for Newton's Method is x_{n+1} = x_n − f(x_n)/f'(x_n).

25 It is also possible to derive this formula using Taylor's Theorem. Expand f around x_n: f(x) = f(x_n) + (x − x_n)f'(x_n) + (1/2)(x − x_n)² f''(ξ). Set f(x) = 0: 0 = f(x_n) + (x − x_n)f'(x_n) + (1/2)(x − x_n)² f''(ξ). Assuming that the remainder term is very small when we are sufficiently close to the solution, we have 0 ≈ f(x_n) + (x − x_n)f'(x_n), and solving for x we get x = x_n − f(x_n)/f'(x_n).

26 Example. Using Newton's Method, solve the equation x^6 − x − 1 = 0. First, f'(x) = 6x^5 − 1. We use the recursive formula for Newton's Method to generate the first few approximations, starting with x_0 = 1.5: x_{n+1} = x_n − (x_n^6 − x_n − 1)/(6x_n^5 − 1). For example, x_1 = 1.5 − (1.5^6 − 1.5 − 1)/(6 · 1.5^5 − 1) = 1.30049, and x_2 = 1.30049 − (1.30049^6 − 1.30049 − 1)/(6 · 1.30049^5 − 1) = 1.18148.

27 We get the table of iterates. [Table: n, x_n, f(x_n), x_n − x_{n−1} and α − x_n for n = 0, 1, ..., 6.] The actual solution is α = 1.134724138, and x_6 equals α to nine significant digits. Notice that Newton's Method may converge slowly at first, but as the iterates come closer to the root, the convergence accelerates significantly.

28 Newton's Method is not a global method; there are examples for which convergence is poor, or even examples for which we cannot achieve convergence at all. Sometimes this can be rectified by changing the initial approximation x_0, but we may need to take x_0 very close to the actual solution α.

29 Example. Consider the function f(x) = (20x − 1)/(19x), [Figure: y = (20x − 1)/(19x)] which has a root at α = 1/20 = 0.05.

30 Except for initial values x_0 very close to α, the first step moves away from the actual solution instead of toward it. For this f, the Newton iteration reduces to x_{n+1} = x_n(2 − 20x_n), so, for example, x_0 = 1 gives x_1 = −18, x_0 = 1/4 gives x_1 = −0.75, and x_0 = 1/8 gives x_1 = −0.0625. Among such choices, we need to use x_0 = 1/16 in order to get x_1 = 0.046875 > 0.

31 Example. If f(x) = arctan x, then arctan x = 0 has α = 0 as its only root. Suppose x_0 ≈ 1.3917, the value for which (1 + x_0²) arctan x_0 = 2x_0. Then x_1 = −x_0 ≈ −1.3917 and x_2 = x_0 ≈ 1.3917, which means that x_0 = x_2: the iterates oscillate between these two values and do not converge to α.

32 General Fact (which will be explained later): If f, f', and f'' are all continuous near the root, and if f' does not equal zero at the root, then Newton's Method will converge whenever the initial approximation is sufficiently close to the root. Moreover, this convergence is very fast, with the number of correct digits roughly doubling in each iteration.

33 For the Bisection Method, we were able to determine, before starting the algorithm, how many iterations were necessary to achieve a given desired accuracy ε. For Newton's Method, this is generally not possible. Question: When do we stop the iteration?

34 We want the error α − x_n to be less than ε. Problem: Generally, we do not know the exact value of the root α. If f' is not zero near the root, we will still be able to control the size of the error.

35 Error Estimate for Newton's Method. By the Mean Value Theorem, f(α) − f(x_{n−1}) = f'(c_n)(α − x_{n−1}), where c_n is some value between α and x_{n−1}. Since f(α) = 0, solving for α − x_{n−1} gives
α − x_{n−1} = −f(x_{n−1})/f'(c_n) = (−f(x_{n−1})/f'(x_{n−1})) · (f'(x_{n−1})/f'(c_n)) = (x_n − x_{n−1}) · f'(x_{n−1})/f'(c_n).
Here, in the last step, we have used the formula for x_n in terms of x_{n−1}.

36 Now, α − x_{n−1} = C_n (x_n − x_{n−1}) with C_n = f'(x_{n−1})/f'(c_n). So the error in x_{n−1} is a multiple of the computable quantity x_n − x_{n−1}. Assuming that the method converges and that f'(α) ≠ 0, we have lim_{n→∞} C_n = 1, since x_{n−1} → α and c_n → α imply that both f'(x_{n−1}) and f'(c_n) tend to f'(α). Finally, lim_{n→∞} (α − x_{n−1})/(x_n − x_{n−1}) = 1; and since |α − x_n| is smaller still (much smaller once the iterates are close to the root), |x_n − x_{n−1}| is a safe estimate of the error in the newest iterate x_n.

37 Thus we can use the computable quantity x_n − x_{n−1} to measure convergence. In practice, this means that we can stop the algorithm when some relatively small multiple of this quantity is smaller than ε, for instance when 3|x_n − x_{n−1}| ≤ ε. Another possibility is to stop the algorithm when |f(x_n)| becomes sufficiently small (close to zero). This is explained in Exercise 2 in 3.3.
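As an illustration, here is a minimal Python sketch of Newton's Method using this stopping test (the function name newton and the tolerance in the sample call are our choices, not from the notes):

def newton(f, df, x0, eps, max_iter=50):
    # Newton's Method; stop once successive iterates agree to within eps.
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        # |x_new - x| estimates the error in x, and the error in x_new is
        # smaller still, so a small multiple of it is a safe stopping test.
        if 3 * abs(x_new - x) <= eps:
            return x_new
        x = x_new
    return x

# f(x) = x^6 - x - 1, f'(x) = 6x^5 - 1, starting from x0 = 1.5:
print(newton(lambda x: x**6 - x - 1, lambda x: 6*x**5 - 1, 1.5, 1e-9))
# prints approximately 1.134724138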

38 Division Problem. Consider the function f(x) = a − 1/x. Obviously, if f(α) = 0, then α = 1/a. Therefore, if we want to divide, say, c by a, we can use c/a = c · (1/a), where 1/a has been computed as the root of f(x). By changing the value of a (a ≠ 0), we are able to divide any two numbers.

39 If we apply Newton's Method to f(x), we get x_{n+1} = x_n − (a − 1/x_n)/(1/x_n²) = x_n(2 − a x_n). Question: When does the iteration converge, and how fast? What initial guesses x_0 will work?

40 We can write a = b · 2^p for some integer p, where b ∈ [1/2, 1]. Therefore, we only need to compute reciprocals for numbers in [1/2, 1]. We define the residual by r_n = 1 − a x_n. Then the error is e_n = α − x_n = r_n/a = 1/a − x_n.

41 In that case, we can get a recursive formula for r_n:
r_{n+1} = 1 − a x_{n+1} = 1 − a[x_n(2 − a x_n)] = 1 − a[x_n(1 + r_n)] = 1 − (1 − r_n)(1 + r_n) = r_n².
From this, by induction, r_n = r_0^{2^n}. If |r_0| < 1, then r_n will rapidly approach zero. The error in the n-th step satisfies e_{n+1} = a e_n², and the relative error satisfies e_{n+1}/α = (e_n/α)², so that e_n/α = (e_0/α)^{2^n} = r_0^{2^n}.

42 So, we get convergence as long as |r_0| < 1, which is equivalent to 0 < x_0 < 2/a = 2α.

43 Question: How do we get this initial approximation x_0? One way is to use linear interpolation: approximate the function y = 1/x by the straight line connecting the endpoints of the curve on [1/2, 1], namely (1/2, 2) and (1, 1). The equation of the line passing through these two points is y = p_1(x) = 3 − 2x. We can then take x_0 = p_1(a) = 3 − 2a. It can be shown that |1/a − p_1(a)| ≤ 1/2 for all a ∈ [1/2, 1].

44 So, |x_0 − α| ≤ 1/2. Then |r_0| = a|e_0| ≤ a/2 ≤ 1/2. In terms of the relative error, we will have |e_0/α| = |r_0| ≤ 1/2, and hence |e_n/α| ≤ (1/2)^{2^n}. Therefore, only six iterations will produce relative accuracy of (1/2)^{64} ≈ 5 × 10^{-20}.

45 Example. We will test this method on a = 0.8, i.e., we are trying to approximate (0.8)^{-1} = 1.25. The initial guess is x_0 = p_1(0.8) = 3 − 2(0.8) = 1.4. Newton's Method produces
x_1 = x_0(2 − (0.8)x_0) = 1.232
x_2 = x_1(2 − (0.8)x_1) = 1.2497408
x_3 = x_2(2 − (0.8)x_2) = 1.24999995
x_4 = x_3(2 − (0.8)x_3) = 1.25
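The same computation takes only a few lines of Python. The helper name reciprocal below is illustrative; the iteration x_{n+1} = x_n(2 − a x_n) and the starting guess p_1(a) = 3 − 2a are exactly the ones described above:

def reciprocal(a, n_iter=6):
    # Approximate 1/a for a in [1/2, 1] using only multiplication and subtraction.
    x = 3.0 - 2.0 * a            # initial guess from the linear interpolant p1
    for _ in range(n_iter):
        x = x * (2.0 - a * x)    # Newton step for f(x) = a - 1/x
    return x

print(reciprocal(0.8))   # approximately 1.25, i.e. 1/0.8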

46 Example. Instead of a linear interpolant, we can use a quadratic interpolant to generate the initial approximation x_0. For example, we can use p_2(x) = (1/3)(8x² − 18x + 13). This parabola still passes through the endpoints (1/2, 2) and (1, 1) and is another approximation of the curve y = 1/x on [1/2, 1].

47 How can we estimate the error if we use this p_2(x) in place of p_1(x)? Define the error E(x) = 1/x − p_2(x). Then E'(x) = −1/x² − (1/3)(16x − 18). We want to maximize |E(x)|, so let's find the critical points: 0 = −1/x² − (1/3)(16x − 18). Multiplying by −3x², this becomes 16x³ − 18x² + 3 = 0.

48 The easiest way to solve this cubic equation is to use a computer algebra system (e.g. Maple); the solutions are x_1 ≈ 0.886, x_2 ≈ 0.595, x_3 ≈ −0.356. Only x_1 and x_2 are in the interval [1/2, 1], so we only need to check those in addition to the endpoints: E(x_1) ≈ 0.0180, E(x_2) ≈ −0.0267, E(1/2) = 0, E(1) = 0.

49 Therefore, max_{x∈[1/2,1]} |E(x)| ≈ 0.0267 is an upper bound for the interpolation error on [1/2, 1]. Thus the initial error satisfies |e_0| = |1/a − p_2(a)| ≤ 0.0267, which is better than what we get by generating the initial approximation with the linear interpolant. We now reach relative accuracy of roughly 10^{-25} in only four iterations.

50 We saw that the error at each step of Newton's Method is related to the square of the previous error. In other words, the number of correct significant digits roughly doubles at each step. Theorem (Newton Error Formula). If f has continuous derivatives f' and f'' on some interval I, f(α) = 0 for some α ∈ I, and x_{n+1} = x_n − f(x_n)/f'(x_n), then there exists a point ξ_n between α and x_n such that α − x_{n+1} = −(1/2)(α − x_n)² f''(ξ_n)/f'(x_n).

51 Proof. Expand f in a Taylor polynomial of degree one around x_n: f(x) = f(x_n) + (x − x_n)f'(x_n) + (1/2)(x − x_n)² f''(ξ_n), where ξ_n is between x_n and α. Substitute x = α into the expansion: 0 = f(x_n) + (α − x_n)f'(x_n) + (1/2)(α − x_n)² f''(ξ_n). Dividing both sides by f'(x_n) and rearranging gives (x_n − α) − f(x_n)/f'(x_n) = (1/2)(α − x_n)² f''(ξ_n)/f'(x_n). Since the left-hand side equals x_{n+1} − α, we get x_{n+1} − α = (1/2)(α − x_n)² f''(ξ_n)/f'(x_n).

52 Example. Suppose f is such that |f''(x)| ≤ 3 for all x and |f'(x)| ≥ 1 for all x, and the initial error in Newton's Method is less than 1/2. What is an upper bound on the error at each of the first three steps? Solution: We are given that |α − x_0| < 1/2, |f''(x)| ≤ 3, and |f'(x)| ≥ 1. Then, for every n, |f''(ξ_n)|/|f'(x_n)| ≤ 3/1 = 3.

53 Now, |α − x_1| = (1/2)|α − x_0|² |f''(ξ_0)|/|f'(x_0)| < (1/2)(1/2)² · 3 = 3/8 = 0.375. Next, we estimate the error of the second iteration: |α − x_2| = (1/2)|α − x_1|² |f''(ξ_1)|/|f'(x_1)| < (1/2)(3/8)² · 3 = 27/128 ≈ 0.211.

54 Finally, we give an upper bound for the error of the third iteration: |α − x_3| = (1/2)|α − x_2|² |f''(ξ_2)|/|f'(x_2)| < (1/2)(27/128)² · 3 = 2187/32768 ≈ 0.0667.
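These three bounds are easy to reproduce numerically; the following short Python check is illustrative and not part of the original notes:

bound = 0.5                          # |alpha - x_0| < 1/2
for k in range(1, 4):
    bound = 0.5 * bound**2 * 3       # |alpha - x_k| < (1/2)|alpha - x_{k-1}|^2 * 3
    print(k, bound)                  # 0.375, 0.2109375, 0.066741943...

Note how quickly the bound collapses once it is below 1: this is the quadratic convergence at work.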

55 The error formula shows that the error roughly squares (and thus decreases quadratically) at each step. If the method converges, then lim_{n→∞} x_n = α, so f'(x_n) ≈ f'(α) and f''(ξ_n) ≈ f''(α). Then |α − x_{n+1}| ≈ C|α − x_n|², where C = |f''(α)|/(2|f'(α)|). This last approximation can help us decide how to choose the initial approximation x_0 so as to guarantee that Newton's Method converges to α.

56 A fact from 3.6, which will not be proved (but whose justification involves the facts from the previous slide): Theorem. Suppose f has continuous derivatives f' and f'' and f(α) = 0. Define the ratio M = max_{x∈R} |f''(x)| / (2 min_{x∈R} |f'(x)|) (< ∞). Then, for any initial approximation x_0 such that |α − x_0| < 1/M, Newton's Method converges to α.

57 Example. It is easy to show (by graphing the function, say) that the equation 4x − cos x = 0 has a solution in the interval [−2, 2]. How should we choose the initial approximation relative to the actual solution α in order to have convergence of Newton's Method? We consider the first two derivatives of f(x) = 4x − cos x: f'(x) = 4 + sin x, f''(x) = cos x. Then max |f''(x)| = max |cos x| = 1 and min |f'(x)| = min (4 + sin x) = 4 + (−1) = 3.

58 Therefore, M = 1/(2 · 3) = 1/6, and in order to have convergence it suffices to take |α − x_0| < 1/M = 6.
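A quick numerical check (an illustrative Python sketch; the particular starting points are our choice) confirms that Newton's Method converges for starting values well inside this interval:

import math

f  = lambda x: 4 * x - math.cos(x)
df = lambda x: 4 + math.sin(x)

# The root is alpha = 0.2426747...; convergence is guaranteed for |alpha - x0| < 6.
for x0 in (-2.0, 0.0, 2.0):
    x = x0
    for _ in range(20):
        x = x - f(x) / df(x)     # Newton step
    print(x0, "->", x)           # each run ends near 0.2426747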

59 We have seen that |α − x_{n+1}| ≈ C|α − x_n|², where C = |f''(α)|/(2|f'(α)|). If we do have convergence, then lim_{n→∞} |α − x_{n+1}|/|α − x_n|² = |f''(α)|/(2|f'(α)|) = C, which shows that the errors decrease quadratically.

60 There is a more precise way to state this mathematically. Definition (Order of Convergence for Sequences). If x_n is a sequence converging to α and lim_{n→∞} |α − x_{n+1}|/|α − x_n|^p = C for some nonzero constant C and some p ≥ 1, then p is called the order of convergence of the sequence.

61 Secant Method. Newton's Method was based on approximating the graph of y = f(x) by its tangent line and then using the root of that line to approximate the root of f(x). The Secant Method is a variation of this idea: instead of starting with one initial approximation x_0 and constructing the tangent line at that point, we start with two initial approximations x_0 and x_1 and construct the line through the points (x_0, f(x_0)) and (x_1, f(x_1)), called a secant line. The intersection of this line with the x-axis gives the next approximation x_2.

62 The equation of the line through (x_0, f(x_0)) and (x_1, f(x_1)) is y = f(x_1) + (x − x_1)·(f(x_1) − f(x_0))/(x_1 − x_0). The intersection of this line with the x-axis is x_2 := x = x_1 − f(x_1)·(x_1 − x_0)/(f(x_1) − f(x_0)). Now construct a secant line through (x_1, f(x_1)) and (x_2, f(x_2)); its intersection with the x-axis will be the next iterate x_3, and so on.

63 Repeating this process, we get the general formula for the Secant Method, a recursive formula for x_{n+1} in terms of x_n and x_{n−1}: x_{n+1} = x_n − f(x_n)·(x_n − x_{n−1})/(f(x_n) − f(x_{n−1})).

64 There is an alternative way of deriving this formula, by modifying the formula for Newton's Method, x_{n+1} = x_n − f(x_n)/f'(x_n). We approximate the derivative in the denominator by the backward difference f'(x_n) ≈ (f(x_n) − f(x_{n−1}))/(x_n − x_{n−1}). Then we get x_{n+1} = x_n − f(x_n)·(x_n − x_{n−1})/(f(x_n) − f(x_{n−1})).

65 Example. Solve the equation x^6 − x − 1 = 0 using the Secant Method. The Secant Method applied to the function f(x) = x^6 − x − 1 gives x_{n+1} = x_n − (x_n^6 − x_n − 1)·(x_n − x_{n−1})/((x_n^6 − x_n − 1) − (x_{n−1}^6 − x_{n−1} − 1)). We can choose the two initial approximations to be, e.g., x_0 = 2 and x_1 = 1.

66 [Table: secant iterates x_n for x^6 − x − 1 = 0, with f(x_n), x_n − x_{n−1} and α − x_n.] As with Newton's Method, the initial iterates do not converge rapidly, but as the iterates become closer to α, the speed of convergence increases.

67 Error Estimate. Using techniques from calculus, one can show that α − x_{n+1} = −(α − x_n)(α − x_{n−1}) · f''(ξ_n)/(2f'(ζ_n)), where ζ_n is between x_n and x_{n−1}, and ξ_n is between the largest and the smallest of the numbers α, x_n and x_{n−1}. One can also show that lim_{n→∞} |α − x_{n+1}|/|α − x_n|^r = |f''(α)/(2f'(α))|^{r−1} = C, where r = (1 + √5)/2 ≈ 1.62. So, the Secant Method is slower than Newton's Method, since its order of convergence is about 1.62 (it is superlinear), while Newton's Method converges quadratically.

68 Secant Method: Algorithm
input x0, x1, tol, n
external f
f0 = f(x0)
f1 = f(x1)
for i = 1 to n do
    x = x1 - f1*(x1 - x0)/(f1 - f0)
    fx = f(x)
    x0 = x1
    x1 = x
    f0 = f1
    f1 = fx
    if abs(x1 - x0) < tol then
        root = x1
        stop
    endif
endfor
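A runnable Python version of the same algorithm (the function name secant and the closing example are illustrative, not from the notes):

def secant(f, x0, x1, tol, n_iter=50):
    # Secant Method: two starting values, no derivative needed.
    f0, f1 = f(x0), f(x1)
    for _ in range(n_iter):
        x = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0 = x1, f1
        x1, f1 = x, f(x)
        if abs(x1 - x0) < tol:
            break
    return x1

# x^6 - x - 1 = 0 with x0 = 2, x1 = 1:
print(secant(lambda x: x**6 - x - 1, 2.0, 1.0, 1e-9))   # about 1.134724138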

69 Fixed-Point Iteration: a general framework for one-point iteration formulas (such as Newton's Method, for example). Example. As our motivating example, consider the equation x² − 5 = 0, which has the root α = √5 = 2.2360679...

70 Let's consider these four iteration methods for solving this equation:
1. x_{n+1} = 5 + x_n − x_n²   (I1)
2. x_{n+1} = 5/x_n   (I2)
3. x_{n+1} = 1 + x_n − (1/5)x_n²   (I3)
4. x_{n+1} = (1/2)(x_n + 5/x_n)   (I4)
All four iterations have the property that if the sequence x_n converges, it converges to α.

71 [Table: iterates x_n produced by (I1)-(I4), starting from x_0 = 2.5.] In all four iterative methods (I1)-(I4), we used the same initial approximation x_0 = 2.5. So, we see that (I1) and (I2) do not converge, while the methods (I3) and (I4) converge to α.
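The behaviour summarized in the table is easy to reproduce; the Python sketch below (the dictionary is just an illustrative way to package the update rules (I1)-(I4)) runs a few steps of each iteration from x_0 = 2.5:

updates = {
    "I1": lambda x: 5 + x - x**2,
    "I2": lambda x: 5 / x,
    "I3": lambda x: 1 + x - x**2 / 5,
    "I4": lambda x: 0.5 * (x + 5 / x),
}

for name, g in updates.items():
    x = 2.5
    for _ in range(6):
        x = g(x)
    # (I1) blows up, (I2) oscillates between 2.0 and 2.5,
    # (I3) and (I4) approach sqrt(5) = 2.2360679...
    print(name, x)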

72 All four iteration methods have the form x_{n+1} = g(x_n) for some continuous function g(x). If the iterates converge to the root α, then lim_{n→∞} x_{n+1} = lim_{n→∞} g(x_n), which gives α = g(α). Thus α is a solution of the equation x = g(x), and it is called a fixed point of the function g(x).

73 Theorem (Fixed-point existence and convergence). Suppose g is a continuous function on [a, b] such that a ≤ g(x) ≤ b for all x ∈ [a, b]. Then:
1. g has at least one fixed point α in [a, b].
2. If, moreover, g is differentiable and γ = max_{x∈[a,b]} |g'(x)| < 1 (so that |g(x) − g(y)| ≤ γ|x − y| for all x, y ∈ [a, b]), then α is unique and the iteration x_{n+1} = g(x_n) converges to α for any initial approximation x_0 in [a, b].

74 Theorem (continued...). Also:
3. The error estimate is |α − x_n| ≤ (γ^n/(1 − γ)) |x_1 − x_0|.
4. The convergence is at least linear: lim_{n→∞} |α − x_{n+1}|/|α − x_n| = |g'(α)|.

75 Sketch of proof:
1. Define the function f(x) = x − g(x). It is continuous on [a, b], and f(a) = a − g(a) ≤ 0 while f(b) = b − g(b) ≥ 0, so it must have a root α ∈ [a, b] by the Intermediate Value Theorem.
2. Suppose there are two solutions, α and β, in [a, b]. By the Mean Value Theorem, g(α) − g(β) = g'(ξ)(α − β) for some ξ between α and β, and since |g'(x)| ≤ γ < 1, this gives |g(α) − g(β)| < |α − β|. On the other hand, since α and β are both fixed points, |α − β| = |g(α) − g(β)|, which is a contradiction.

76 The derivation of the error estimate in (3) is slightly more involved, so we will omit it. Problem: In general, this theorem is rarely used directly. Reason: It is difficult to find an interval [a, b] on which the values of g stay inside that same interval. Theorem (Local Convergence for Fixed-Point Methods). If g' is continuous around α and |g'(α)| < 1, then the fixed-point method converges, with all the conclusions (1)-(3) of the previous theorem, on some interval [a, b] around α. In other words, x_0 has to be chosen sufficiently close to α in order to get convergence of the fixed-point method.

77 Let us go back to the four methods from the beginning of this section and see why some of them converged while the others did not.
(I1) g(x) = 5 + x − x², g'(x) = 1 − 2x, g'(α) = 1 − 2√5 ≈ −3.47, so |g'(α)| > 1. So, this method will not converge to √5.
(I2) g(x) = 5/x, g'(x) = −5/x², g'(α) = −1, so |g'(α)| = 1. The theorem does not tell us what happens in this case, but from the table of results we saw that the method did not converge (it oscillates).

78 (I3) g(x) = 1 + x − (1/5)x², g'(x) = 1 − (2/5)x, g'(α) = 1 − (2/5)√5 ≈ 0.106 < 1. The iteration will converge, and |α − x_{n+1}| ≈ 0.106 |α − x_n| when x_n is close to α.
(I4) g(x) = (1/2)(x + 5/x), g'(x) = (1/2)(1 − 5/x²), g'(α) = 0. Thus the conditions for convergence are met; this is, in fact, Newton's Method for this equation.

79 General Problem: Given an equation f(x) = 0, how should we rewrite it as x = g(x) in order to get a convergent fixed-point method? Answer: Try to set up g(x) in such a way that |g'(x)| is smaller than 1 in a neighbourhood of the expected root.

80 Example. Given the function g(x) = 1 + e^{−x}, find an interval [a, b] such that g([a, b]) ⊆ [a, b]. Use this to solve the fixed-point problem x = 1 + e^{−x}. Solution: [Figure: y = 1 + e^{−x}.]

81 So, one such interval is [0, 2], for example: for x ∈ [0, 2] we have 1 < 1 + e^{−x} ≤ 2, so g maps [0, 2] into [1, 2] ⊂ [0, 2]. The derivative is g'(x) = −e^{−x}; on [1, 2] we have e^{−2} ≤ |g'(x)| ≤ e^{−1}, so |g'(x)| < 1 on that interval. Therefore, the fixed-point problem has a solution in [1, 2], and the iteration converges to it. We can take the initial approximation x_0 = 1.5 (for example).

82 x_{n+1} = 1 + e^{−x_n}, x_0 = 1.5. [Table: the first few iterates x_n.] And we can continue calculating successive iterates, until we reach a satisfactory accuracy...
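Carrying this out in code (an illustrative Python sketch; the stopping tolerance 1e-8 is our choice):

import math

x = 1.5
for n in range(1, 50):
    x_next = 1 + math.exp(-x)
    if abs(x_next - x) < 1e-8:
        x = x_next
        break
    x = x_next
print(n, x)   # converges to about 1.27846454 in roughly 15 iterations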

83 We see that various single-point iteration methods, including Newton's Method, are special cases of this general fixed-point framework. We mentioned that fixed-point methods are at least linear in convergence, while we have seen that Newton's Method is quadratically convergent. Theorem. Consider the fixed-point iteration x_{n+1} = g(x_n), where g has p continuous derivatives and g(α) = α. If g'(α) = g''(α) = ... = g^{(p−1)}(α) = 0 but g^{(p)}(α) ≠ 0, then the fixed-point method converges with order p for x_0 sufficiently close to α.
