Math 371 - Introduction to Numerical Methods - Winter 2011. Homework 2. Assigned: Friday, January 14; Due: Thursday, January 27.


Include a cover page. You do not need to hand in a problem sheet. Write the problems in order and indicate clearly where to look to see any code or output you have attached. Always clearly label all plots (title, x-label, y-label, and legend). Use the subplot command from MATLAB when comparing 2 or more plots, to make comparisons easier and to save paper. Include all of your script: a printout of your script should be included in the write-up you submit, and any m-files should be emailed to me as well. Place a comment at the top of each function or script that you submit which includes the name of the function or script, your name, the date, and MATH 371.

For this assignment you will need to write three MATLAB programs: one for the Bisection Method, one for Newton's Method, and one for the Secant Method. You will use these programs to carry out several parts of the problems below (Bisection on 1 and 5, Newton's on 4 and 5, and Secant on 5 and 8). I will give you code for Newton's Method and a shell of a program for the Bisection Method. You may also write code for other problems if you like (Problem 3 part b, Problem 10), or you may do those by hand with the assistance of a calculator.

Problem 1

Write a MATLAB routine that carries out the Bisection Method for finding a root. A shell for your MATLAB code is included below. Then use that code to find the roots of the following functions using the specified error tolerance. Discuss briefly whether the error results agree with the predicted order of convergence discussed in class.

(a) x^4 - x - 1 = 0. Include evidence that you have found all the roots (a plot or a theoretical verification).

For this problem there are 2 roots. I know this by looking at the graph of f(x) = x^4 - x - 1. From the plot we can see that there are two roots, one positive and one negative. To find the first root I ran the bisection method with the initial interval [1, 1.5]; 15 steps were required to reach the desired accuracy. [Iteration table (n, c_n, error estimate) omitted; the error estimate decreases to roughly 10^-6 by step 15.]

To find the second root I used an interval of [-1, -0.5], which again took 15 iterations to achieve the desired accuracy. [Iteration table (n, c_n, error estimate) omitted; the error estimate again decreases to roughly 10^-6.]

(b) Find the smallest nonzero positive root of x = tan(x). Include a graph that demonstrates the location of the root. (This will help in choosing a and b.)

For this problem we begin by looking at the graphs of y = x and y = tan(x). I have included two of these graphs, one near x = 4 and the other near x = 100, so that you can see the roots closest to these points. For the first root a suitable interval [a, b] is [3.8, 4.5]. You must be careful to note that the graphs may appear to have other roots, because the asymptotes of tan(x) are drawn as near-vertical lines. However, the first root is close to 4. [Iteration table (n, c_n, error estimate) omitted; the final error estimate is roughly 10^-5. Figure: graphs of y = x and y = tan(x).]

(c) Find the root of x = tan(x) which is closest to 100. Include a graph that demonstrates the location of the root. (This will help in choosing a and b.)

For the second root a suitable interval [a, b] is [98.9, 98.96]. [Iteration table (n, c_n, error estimate) omitted; the final error estimate is roughly 10^-5. Figure: graphs of y = x and y = tan(x) near x = 100.]
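As a quick cross-check of parts (b) and (c), the same bracketing idea can be sketched in a few lines (Python rather than MATLAB, purely for illustration):

```python
import math

# Bisection for f(x) = tan(x) - x on [3.8, 4.5]; this interval avoids the
# asymptote of tan(x) at 3*pi/2 and brackets the smallest positive root.
f = lambda x: math.tan(x) - x

a, b = 3.8, 4.5
while b - a > 1e-10:
    c = (a + b) / 2
    if f(a) * f(c) <= 0:   # root lies in [a, c]
        b = c
    else:                  # root lies in [c, b]
        a = c

root = (a + b) / 2
print(root)  # close to 4.4934
```

The same loop with the interval [98.9, 98.96] finds the root closest to 100.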

Problem 2

Let α be the unique solution of x = 3/(1 + x^4). Write a function that has a root equal to α. Find an interval [a, b] containing α and for which the bisection method will converge to α. Then estimate the number of iterates needed to find α within an accuracy of 5 x 10^-8. Do not actually carry out the Bisection method for this problem.

For this problem we will consider the function f(x) = x - 3/(1 + x^4). First we find an interval [a, b] for which f(a) and f(b) have opposite signs. One possible choice would be [0, 2], as f(0) = -3 and f(2) = 31/17. To estimate the number of iterates needed for the bisection method to produce an approximation c_n such that |c_n - α| < 5 x 10^-8, we use the fact that |c_n - α| <= (b - a)/2^(n+1). Thus we need to choose n so that

(b - a)/2^(n+1) = 2/2^(n+1) = 2^(-n) <= 5 x 10^-8.

Thus we need 10^8/5 <= 2^n, or n >= (8 ln(10) - ln(5))/ln(2). Computing this we see that n must be at least 25 for this to be satisfied.
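This estimate is easy to verify numerically; here is a short sketch (Python used for illustration; f and [0, 2] are as above):

```python
import math

# f(x) = x - 3/(1 + x^4) has a root at the fixed point alpha.
f = lambda x: x - 3 / (1 + x**4)

# Number of bisection steps guaranteeing |c_n - alpha| <= 2^(-n) <= 5e-8:
n_bound = math.ceil((8 * math.log(10) - math.log(5)) / math.log(2))
print(n_bound)  # 25

# Run that many bisection steps on [0, 2] and check the midpoint.
a, b = 0.0, 2.0
for _ in range(n_bound + 1):
    c = (a + b) / 2
    if f(a) * f(c) <= 0:
        b = c
    else:
        a = c
mid = (a + b) / 2
print(abs(f(mid)))  # tiny: the midpoint is well within the tolerance
```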

Problem 3

(a) Show that Newton's Method for f(x) = x^m - a reduces to the iterative scheme

x_(n+1) = (1/m) ( a / x_n^(m-1) + (m-1) x_n ).

(b) Using the iterative scheme and an initial guess of 1, compute the m-th root of 2 for m = 3, 4, 5, 6, 7 and 8, each to within the prescribed accuracy. You may do this either by writing MATLAB code or with any other computing device you desire. Include the iterates for each of the six calculations.

If we use the function f(x) = x^m - a and construct the Newton's Method scheme, we obtain the iteration formula

x_(n+1) = x_n - f(x_n)/f'(x_n) = x_n - (x_n^m - a)/(m x_n^(m-1)) = (m x_n^m - x_n^m + a)/(m x_n^(m-1)) = (1/m) ( a / x_n^(m-1) + (m-1) x_n ).

We now perform Newton's Method six times, each with a different value of m. In each case we set a = 2 and use 1 as the starting value. [Iteration tables for m = 3, 4, 5, 6, 7, 8 omitted; in each case the final error estimates are roughly 10^-9 to 10^-10.]
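The scheme derived in (a) is easy to run; a minimal sketch (Python for illustration, with a = 2 and starting value 1 as above):

```python
def mth_root(a, m, x0=1.0, tol=1e-12, max_iter=100):
    """Newton's method for f(x) = x^m - a, in the simplified form above."""
    x = x0
    for _ in range(max_iter):
        x_new = (a / x**(m - 1) + (m - 1) * x) / m
        if abs(x_new - x) < tol:   # stop when successive iterates agree
            return x_new
        x = x_new
    return x

for m in range(3, 9):
    print(m, mth_root(2, m))
```

Each run takes only a handful of iterations, consistent with the quadratic convergence of Newton's Method at these simple roots.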

Problem 4

Bradie, p. 105, #11: The function f(x) = 27x^4 + 162x^3 - 180x^2 + 62x - 7 has a zero at x = 1/3. Perform ten iterations of Newton's method on this function, starting with p_0 = 0. What is the apparent order of convergence of the sequence of approximations? What is the multiplicity of the zero at x = 1/3? Would the sequence generated by the bisection method converge faster?

[Iteration table (n, p(n), e(n), ln e(n)/ln e(n-1), e(n)/e(n-1)) omitted.] The apparent order of convergence is linear, with asymptotic rate constant 2/3, so the zero at x = 1/3 has multiplicity 3. The Bisection Method converges faster: it cuts the error by a factor of 1/2 per step, while Newton's Method at a zero of multiplicity m = 3 only cuts it by the larger factor (m - 1)/m = 2/3.

Problem 5

Before doing this problem you should write programs that implement Newton's Method, the Bisection Method, and the Secant Method. Your code should be properly commented and the m-files should be emailed to me as part of your assignment.

(a) Suppose f(x) is differentiable on [a, b]. Discuss how you might use a rootfinding method to identify a local extremum of f(x) inside [a, b].

Roots of the derivative give possible locations of local extrema.

(b) Let f(x) = log(x) - sin(x). Prove that f(x) has a unique maximum in the interval [4, 6]. (Note that log means natural logarithm.)

f'(4) > 0, f'(6) < 0, and the continuity of f'(x) give the existence of at least one maximum. f''(x) < 0 for x in [4, 6] shows the maximum is unique.

(c) Approximate this local maximum using six iterations of the Bisection Method with starting interval [4, 6]. (see table)

(d) Approximate this local maximum using six iterations of the two fixed-point methods (Secant and Newton). For Newton's Method, use p_0 = 4. For the Secant Method, use p_0 = 6 and p_1 = 4.
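For parts (c) and (d), the rootfinding method is applied to f'(x) = 1/x - cos(x). A hedged sketch of the Newton case (Python for illustration; six iterations from p_0 = 4 as in part (d)):

```python
import math

# Newton's method on f'(x), the derivative of f(x) = log(x) - sin(x),
# to locate the maximizer of f in [4, 6].
df = lambda x: 1 / x - math.cos(x)        # f'(x)
d2f = lambda x: -1 / x**2 + math.sin(x)   # f''(x), negative on [4, 6]

p = 4.0
for _ in range(6):
    p = p - df(p) / d2f(p)
print(p)  # location of the maximum, near 4.92
```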

(see table)

(e) What is your best estimate for p, the location of the maximum?

(f) Compare the three algorithms using the following two tables. Use your best estimate for p when comparing the error for the second table. Please do not write the tables by hand.

[Table 1: approximations p_n for the Bisection, Secant, and Newton methods versus iteration number n; Table 2: absolute error |p_n - p| versus iteration number n for the three methods. Numerical values omitted.]

(g) On the same set of axes, plot the log of the absolute error against the iteration number n for all three methods. Use n as the independent variable and log(e_n) as the dependent variable, and plot each method in a different color. What does this indicate about the convergence in this case?

[Figure: plot of the log of the absolute value of the error of each method (Bisection, Secant, Newton) against the iteration number n.]

(h) What happens if you attempt to approximate the maximum by starting Newton's Method with p_0 = 6?

The method converges to a root outside [4, 6].

Problem 6

Do Problem 1 from section 2.3 and use the results to answer Problem 4 from the same section.

(a) Given that the sequence {p_n} converges linearly to the fixed point p, we know that g(p) = p and g(p_n) = p_(n+1). By the Mean Value Theorem we have

(p_(n+1) - p)/(p_n - p) = (g(p_n) - g(p))/(p_n - p) = g'(c_n),

where c_n is between p_n and p. Since p_n -> p as n -> infinity, c_n -> p as well. Thus, writing e_n = p_n - p, for n large we have e_n/e_(n-1) ~ g'(p). Now,

(p_n - p_(n-1))/(p_(n-1) - p_(n-2)) = ((p_n - p) - (p_(n-1) - p))/((p_(n-1) - p) - (p_(n-2) - p)) = (e_n - e_(n-1))/(e_(n-1) - e_(n-2)).

Dividing the numerator and the denominator by e_(n-1) gives

(e_n/e_(n-1) - 1)/(1 - e_(n-2)/e_(n-1)) ~ (g'(p) - 1)/(1 - g'(p)^(-1)) = g'(p),

where we have used that e_n/e_(n-1) -> g'(p) and e_(n-2)/e_(n-1) -> g'(p)^(-1). This proves the desired result.

(b) Recall that when a fixed point iteration scheme converges linearly, it has the asymptotic error constant λ = g'(p). Thus e_n ~ g'(p) e_(n-1), or e_(n-1) ~ e_n/g'(p). Now,

e_n = p_n - p = (p_n - p_(n-1)) + (p_(n-1) - p) = (p_n - p_(n-1)) + e_(n-1).

Substituting e_(n-1) ~ e_n/g'(p) into this last expression, solving for e_n, and taking absolute values gives

|e_n| ~ |g'(p)/(g'(p) - 1)| |p_n - p_(n-1)|.

This completes the proof.

Problem 7

For each of the following, decide if the indicated iteration scheme will converge to the indicated α, provided x_0 is chosen sufficiently close to α. If it does converge, determine the convergence order.

(a) x_(n+1) = (15 x_n^2 - 24 x_n + 13)/(4 x_n), α = 1.

We consider the function g(x) = (15x^2 - 24x + 13)/(4x). It is easy to check that g(1) = (15 - 24 + 13)/4 = 1, so 1 is indeed a fixed point. Also,

g'(x) = 15/4 - 13/(4x^2),

and 0 < g'(1) = 1/2 < 1. Thus the iteration scheme converges linearly. Remember, it is important that g'(1) is not only non-zero but also less than 1 in absolute value. The asymptotic rate constant is then λ = g'(1) = 1/2.

(b) x_(n+1) = (x_n^3 + 6 x_n)/(3 x_n^2 + 2), α = sqrt(2).

We consider the function g(x) = (x^3 + 6x)/(3x^2 + 2), and g(sqrt(2)) = (2 sqrt(2) + 6 sqrt(2))/(6 + 2) = sqrt(2), so sqrt(2) is indeed a fixed point. Moreover,

g'(x) = [ (3x^2 + 2)(3x^2 + 6) - (x^3 + 6x)(6x) ] / (3x^2 + 2)^2 = 3(x^2 - 2)^2 / (3x^2 + 2)^2,

so g'(sqrt(2)) = 0. Thus we know that the sequence will converge, since |g'(sqrt(2))| = 0 < 1, and the order of convergence is at least 2. Next,

g''(x) = [ (3x^2 + 2)^2 (6)(x^2 - 2)(2x) - 3(x^2 - 2)^2 (2)(3x^2 + 2)(6x) ] / (3x^2 + 2)^4 = 96x(x^2 - 2) / (3x^2 + 2)^3,

so g''(sqrt(2)) = 0, and the order of convergence is at least 3. Finally,

g'''(x) = -96(9x^4 - 36x^2 + 4) / (3x^2 + 2)^4, with g'''(sqrt(2)) = 3/4,

so the order of convergence is exactly 3. Moreover, the asymptotic error constant is λ = |g'''(sqrt(2))|/3! = (3/4)/6 = 1/8.

Problem 8

(a) Show that if x_n converges with order α, then

lim_(n -> infinity) log|e_n| / log|e_(n-1)| = α.

Let

|e_n| / |e_(n-1)|^α = L_n.

This implies that

log( |e_n| / |e_(n-1)|^α ) = log(L_n),

or

log|e_n| - α log|e_(n-1)| = log(L_n),

or

log|e_n| - log(L_n) = α log|e_(n-1)|,

or

log|e_n| / log|e_(n-1)| - log(L_n) / log|e_(n-1)| = α.

Now, by assumption lim L_n = λ, and since lim |e_n| = 0 we have lim log|e_(n-1)| = -infinity, and thus lim log(L_n)/log|e_(n-1)| = 0. Consequently, we have

lim_(n -> infinity) ( log|e_n| / log|e_(n-1)| - log(L_n) / log|e_(n-1)| ) = α,

so

lim_(n -> infinity) log|e_n| / log|e_(n-1)| = α + 0 = α.

(b) Carry out seven iterations of the Secant Method for the function f(x) = 3 - 1/x and include a table with the following columns: n, x_n, e_n = 1/3 - x_n, and log|e_n| / log|e_(n-1)|.

[Iteration table omitted; the first ratio entry is NaN.]
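The iteration in part (b), together with the ratio from part (a), can be sketched as follows (Python for illustration; the initial iterates 0.2 and 0.5 are an assumption, since the originals did not survive transcription):

```python
import math

# Secant Method on f(x) = 3 - 1/x (root 1/3), tracking
# log|e_n| / log|e_(n-1)|, which by part (a) estimates the order.
f = lambda x: 3 - 1 / x
x0, x1, root = 0.2, 0.5, 1 / 3

orders = []
for _ in range(7):
    x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
    e_prev, e = abs(x1 - root), abs(x2 - root)
    orders.append(math.log(e) / math.log(e_prev))
    x0, x1 = x1, x2
print(x1, orders)
```

The ratios climb toward (1 + sqrt(5))/2 ≈ 1.618, but machine precision is reached before they settle there.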

What can you say about the order of convergence of the Secant Method in this case based on this table and the calculation from (a)?

Based on the table one can conclude that, at least for this example, the Secant Method is converging with an order of convergence somewhere between 1 and 2. The number should be (1 + sqrt(5))/2 ≈ 1.618; however, what occurs here is that you achieve 1/3 to all digits of accuracy storable by the computer before the ratio reaches this value.

Problem 9

Problem 11 from Section 2.3 (Problems 12 and 13 are also quite similar). Consider the function g(x) = e^(-x^2).

(a) Prove that g has a unique fixed point on the interval [0, 1].

Let g(x) = e^(-x^2). We will proceed by showing that g is continuous on [0, 1], maps [0, 1] into [0, 1], and that there exists a k < 1 such that |g'(x)| <= k for all x in [0, 1]. First, note that g is the composition of the functions e^x and -x^2, both of which are continuous on [0, 1]. Consequently, g(x) = e^(-x^2) is continuous on [0, 1]. Next, we see that g'(x) = -2x e^(-x^2) < 0 for all x in (0, 1). Hence, g is decreasing on (0, 1). Combining this fact with g(0) = 1 and g(1) = e^(-1) ≈ 0.3679, it then follows that for x in [0, 1], g(x) lies in [e^(-1), 1], a subset of [0, 1]. Finally, we find g''(x) = (4x^2 - 2) e^(-x^2) = 0 when x = sqrt(2)/2. Because g'(0) = 0, |g'(sqrt(2)/2)| = sqrt(2) e^(-1/2) ≈ 0.8578 and |g'(1)| = 2 e^(-1) ≈ 0.7358, we find that |g'(x)| <= sqrt(2) e^(-1/2) for all x in [0, 1]. Thus, we take k = sqrt(2) e^(-1/2). Having established that g is continuous on [0, 1], maps [0, 1] into [0, 1], and that there exists a k < 1 such that |g'(x)| <= k for all x in [0, 1], we conclude that g has a unique fixed point on the interval [0, 1].

(b) With a starting approximation of p_0 = 0, use the iteration scheme p_n = e^(-p_(n-1)^2) to approximate the fixed point on [0, 1] to within 5 x 10^-10.

With a starting approximation of p_0 = 0 and a convergence tolerance of ε = 5 x 10^-10, fixed point iteration using g(x) = e^(-x^2) gives p_1 = 1 and eventually p_128 ≈ 0.652919, with an error estimate below the tolerance. This was using the value p ≈ 0.652919, calculated in Mathematica, as the actual root. If you use the error estimate |p_n - p_(n-1)| to approximate the error, then you will reach an error estimate less than the tolerance at p_133. [Intermediate iterates p_10 and p_20 omitted.]

(c) Use the theoretical error bound |p_n - p| <= (k^n/(1 - k)) |p_1 - p_0| to obtain a theoretical bound on the number of iterations needed to approximate the fixed point to within 5 x 10^-10. How does the number of iterations performed in part (b) compare with the theoretical bound?

In part (a) we found k = sqrt(2) e^(-1/2). With p_0 = 0, it follows that p_1 = g(p_0) = e^0 = 1. Solving the inequality (k^n/(1 - k)) |p_1 - p_0| <= 5 x 10^-10 for n yields n >= 152.3, or, since n must be an integer, n >= 153. As these calculations were carried out using k = max over [0, 1] of |g'(x)|, we see that the upper bound on the number of iterations needed to guarantee an absolute error less than 5 x 10^-10 is n = 153. In part (b), we found that only 128 iterations were needed to achieve the prescribed level of accuracy, confirming the theoretical upper bound.

(d) Accelerate the convergence of the sequence obtained in part (b) using Aitken's Δ^2 method. By how much has Aitken's Δ^2 method reduced the asymptotic error constant?
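A compact numerical check of part (b) (Python for illustration; the stopping rule here compares successive iterates):

```python
import math

# Fixed-point iteration p_n = exp(-p_(n-1)^2) from p_0 = 0.
g = lambda x: math.exp(-x * x)

p = 0.0
for n in range(1, 300):
    p_new = g(p)
    if abs(p_new - p) < 1e-12:   # successive iterates agree
        p = p_new
        break
    p = p_new
print(n, p)  # well over 100 iterations: linear convergence with rate ~0.85 is slow
```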

When carrying out the iteration above, it appears that the method converges linearly with an asymptotic rate constant of λ ≈ 0.85 (since e^(-p^2) = p at the fixed point, |g'(p)| = 2p^2 ≈ 0.8526). If we perform Aitken's Δ^2 method, then we get an accelerated sequence whose last row corresponds to n = 53; the rightmost column of its table indicates that the asymptotic rate constant has decreased substantially. [Table (n, p̂_n, e_n, λ estimate) omitted; the first ratio entry is NaN.]

Newton's Method Code

function newtonsimple(x0, error_bd, max_iterate)
%%%%%% x0 is the initial guess.
%%%%%% error_bd is the accuracy we seek; we stop the iterations if the
%%%%%% error is < error_bd.
%%%%%% max_iterate is the maximum number of times we will carry out the
%%%%%% iterations. If the error bound is not reached in this many
%%%%%% iterations the program will halt and display an error message.

format long g
error = 1; d = 1; lerror = 1; it_count = 0;
%%%%%% Above we have initialized the variable "error", which will be
%%%%%% updated with the error at each step of the loop. We have also
%%%%%% initialized the variable lerror, which will be the error from the
%%%%%% previous step.

while abs(error) > error_bd && it_count < max_iterate
    % Stops the while loop if the error bound is met or the number of
    % iterates (counted by it_count) is more than the prescribed amount.
    fx = f(x0);
    dfx = deriv_f(x0);
    % Sets fx and dfx equal to f and f' evaluated at the current x value.
    if dfx == 0
        disp('The derivative is zero. Stop')
        % Displays an error if the derivative is 0.
        return
    end
    x1 = x0 - fx/dfx;
    %error = x1 - x0
    error = actualroot - x0
%%%%%%%% When the actual root is not known you should use the first error
%%%%%%%% line; in that case the error e_n = actual root - x(n) is
%%%%%%%% approximated by x(n+1) - x(n). The second error line should be used
%%%%%%%% when the actual root is known ahead of time. The error is e_n =
%%%%%%%% actual root - x(n).

    Table(it_count+1,1) = it_count;
    Table(it_count+1,2) = x0;
    Table(it_count+1,3) = fx;
    Table(it_count+1,4) = error;
    Table(it_count+1,5) = error/d;
    Table(it_count+1,6) = error/d^2;
    Table(it_count+1,7) = error/d^3;
%%%%%%%% The first column of the matrix Table is n. The second column is
%%%%%%%% x_n, the third column is f(x_n), the fourth column is e_n, the
%%%%%%%% fifth is e_n/e_(n-1), the sixth is e_n/(e_(n-1))^2 and the final
%%%%%%%% column is e_n/(e_(n-1))^3. These columns are useful in
%%%%%%%% identifying the order of convergence of the sequence produced.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    d = error;
    x0 = x1;
    it_count = it_count + 1;
%%%%%% This saves the error value for use during the next iteration,
%%%%%% updates x0 to x1, and updates the counter as well.
end

if it_count >= max_iterate
    disp('The number of iterates calculated exceeded')
    disp('max_iterate. An accurate root was not')
    disp('calculated.')
%%%%%%% This displays an error message if the error bound is not reached
%%%%%%% in the maximum allowed iterations.
else
    format long
    root = x1
%%%%%%%% Displays the root.
end

format long g
Table = Table(:, :)
%%%%%%%% Displays the table.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function value = f(x)
value = 3 - x^2;
%%%% Defines the function f to be used.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function actualroot = actualroot
actualroot = sqrt(3);

%%%%% Defines the actual root; this part can be commented out if not using
%%%%% a function where you know the root.

%%%%%%%%%%%%%%%%%%%%%%%%%%%
function value = deriv_f(x)
value = -2.*x;
%%%%% Defines the derivative of the function f(x) used above.

Secant Method Code

function secant(x0, x1, error_bd, max_iterate)
%%%%%% x0 and x1 are the initial iterates.
%%%%%% error_bd is the accuracy we seek; we stop the iterations if the
%%%%%% error is < error_bd.
%%%%%% max_iterate is the maximum number of times we will carry out the
%%%%%% iterations. If the error bound is not reached in this many
%%%%%% iterations the program will halt and display an error message.

format long g
error = 1; d = 1; it_count = 0; fx0 = f(x0);
%%%%%% Above we have initialized the variable "error", which will be
%%%%%% updated with the error at each step of the loop. We have also
%%%%%% defined fx0 to be f evaluated at x0.

while abs(error) > error_bd && it_count < max_iterate
    % Stops the while loop if the error bound is met or the number of
    % iterates (counted by it_count) is more than the prescribed amount.
    fx1 = f(x1);
    % Sets fx1 equal to f evaluated at the current x value.
    if (fx0 - fx1) == 0
        disp('The denominator f(x1) - f(x0) is zero. Stop')
        % Displays an error if the secant denominator is 0.
        return
    end
    x2 = x1 - fx1*((x1 - x0)./(fx1 - fx0));
    %error = x1 - x0;
    error = actualroot - x0;
%%%%%%%% When the actual root is not known you should use the first error
%%%%%%%% line; in that case the error e_n = actual root - x(n) is
%%%%%%%% approximated by x(n+1) - x(n). The second error line should be used
%%%%%%%% when the actual root is known ahead of time. The error is e_n =

%%%%%%%% actual root - x(n).

    Table(it_count+1,1) = it_count;
    Table(it_count+1,2) = x0;
    Table(it_count+1,3) = fx0;
    Table(it_count+1,4) = abs(error);
    Table(it_count+1,5) = abs(error)/abs(d);
    Table(it_count+1,6) = abs(error)/(abs(d)^(1.62));
    Table(it_count+1,7) = abs(error)/d^2;
%%%%%%%% The first column of the matrix Table is n. The second column is
%%%%%%%% x_n, the third column is f(x_n), the fourth column is |e_n|, the
%%%%%%%% fifth is |e_n|/|e_(n-1)|, the sixth is |e_n|/|e_(n-1)|^(1.62) and
%%%%%%%% the final column is |e_n|/(e_(n-1))^2. These columns are useful in
%%%%%%%% identifying the order of convergence of the sequence produced.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    d = error;
    x0 = x1;
    x1 = x2;
    fx0 = fx1;
    it_count = it_count + 1;
%%%%%% This saves the error value for use during the next iteration,
%%%%%% updates x0 to x1 and x1 to x2, and updates the counter as well. Also
%%%%%% fx0 is reassigned the value from fx1, which avoids recomputing f.
end

if it_count >= max_iterate
    disp('The number of iterates calculated exceeded')
    disp('max_iterate. An accurate root was not')
    disp('calculated.')
%%%%%%% This displays an error message if the error bound is not reached
%%%%%%% in the maximum allowed iterations.
else
    format long
    root = x1
%%%%%%%% Displays the root.
end

format long g
Table = Table(:, :)
%%%%%%%% Displays the table.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function value = f(x)
value = 3 - 1./x;

%%%% Defines the function f to be used.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function actualroot = actualroot
actualroot = 1./3;
%%%%% Defines the actual root; these two lines can be deleted or commented
%%%%% out if the actual root is not known.

function A = bisection(a, b, error_bound, max_it)
%%%%%%%%%%% The variables a and b describe an interval [a, b] in which f(x)
%%%%%%%%%%% has a root. The function f(x) is referred to as myfun2(x); it
%%%%%%%%%%% is defined below and you can change it for the problem you are
%%%%%%%%%%% working on. The variable error_bound is the error tolerance you
%%%%%%%%%%% wish to specify for your root, and max_it is the maximum number
%%%%%%%%%%% of times the bisection method will run.

if (sign(myfun2(b))*sign(myfun2(a)) > 0)
    error('The signs of f(a) and f(b) are the same');
end
%%%%%%%%%% This returns an error if f(a) and f(b) are not of opposite signs.

A(1,1) = 0; A(1,2) = a; A(1,3) = b;
%%%%%%%%%% This fills in the first three columns of row 1 of the matrix A
%%%%%%%%%% with 0, a0, and b0.
c = (a + b)./2;
A(1,4) = c; A(1,5) = myfun2(c);
%%%%%%%%%% This computes c0 and inserts c0 and f(c0).
er = b - c;
A(1,6) = er;
%%%%%%%%% This computes the error bound as the difference of b0 and c0 and
%%%%%%%%% puts this value in the 6th column of the first row of the matrix A.
iter = 1;
%%%%%%%%% This initializes our counter for our loop.

while (er > error_bound && iter <= max_it)
    if sign(myfun2(b))*sign(myfun2(c)) <= 0
        a = c;
    else
        b = c;
    end
    %%%%%%%% This if statement assigns the new a and b.
    c = (a + b)/2;
    %%%%%%%% Assigns the new c value.
    er = b - c;
    %%%%%%%% Assigns the new error.
    A(iter+1,1) = iter; A(iter+1,2) = a; A(iter+1,3) = b;
    A(iter+1,4) = c; A(iter+1,5) = myfun2(c); A(iter+1,6) = er;

    iter = iter + 1;
end

if (iter >= max_it)
    disp('Maximum iterations reached at or before desired accuracy');
end

%%%%%%%%%%%%% function to find the root of %%%%%%%%%%%%%%%%%
function value = myfun2(x)
%value = x.^4 - x - 1;
value = tan(x) - x;

Ex Problem 1

1(a) Use the Secant Method with initial guesses x_0 = 0 and x_1 = 2 to find the root of f(x) = x^3 - x^2 - x - 1, using an error tolerance of 10^-6.

For this problem I applied the Secant Method starting with initial iterates x_0 = 0 and x_1 = 2 (I know a root of f(x) = x^3 - x^2 - x - 1 lies between these two numbers). The results reach the desired accuracy of 10^-6. [Iteration table (n, x_n, error estimate) omitted; the final error estimates are roughly 10^-9 to 10^-10.]

1(d) Use the Secant Method with initial guesses x_0 = 0 and x_1 = 1 to find the root of f(x) = x e^x - 1, using an error tolerance of 10^-6.

For this problem I applied the Secant Method starting with initial iterates x_0 = 0 and x_1 = 1 (I know a root of f(x) = x e^x - 1 lies between these two numbers). The results reach the desired accuracy of 10^-6. [Iteration table omitted; the final error estimate is roughly 10^-8.]

Ex Problem 2

Perform Newton's Method on the function f(x) = 3 - 1/x to find the root x = 1/3. Use an initial guess of p_0 = 0.2. Carry out five iterations and include a table with the following columns: n, x_n, e_n, e_n/e_(n-1), e_n/e_(n-1)^2, e_n/e_(n-1)^3.

[Iteration table omitted.]

What can you say about the order of convergence of Newton's Method in this case based on this table?

By looking at this table we can conclude that the order of convergence is 2 and that the asymptotic rate constant is approximately 3.

Ex Problem 3

Write a MATLAB code that implements the Secant Method for the function f(x) = x^4 - (5.4)x^3 + (10.56)x^2 - (8.954)x + (2.7951). Find the root α located in the interval [1, 1.2] to an accuracy of at least 10^-6, and display all iterates. Once you have your table of iterates and error estimates, discuss in complete sentences the convergence of the algorithm, whether it agrees with what we discussed in class, and why.

For this problem we perform the Secant Method for the function f(x) = x^4 - (5.4)x^3 + (10.56)x^2 - (8.954)x + (2.7951). Starting with initial iterates of [1, 1.2], it takes 29 iterates to reach the desired level of accuracy. The reason that the Secant Method takes so long to converge is that the root is of multiplicity 3. In fact, f(x) = (x - 1.1)^3 (x - 2.1).
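The slow convergence at this triple root is easy to reproduce; a sketch (Python for illustration; the loose tolerance 10^-4 on |x_n - 1.1| avoids the floating-point noise that dominates f very close to a multiple root):

```python
# Secant Method on f(x) = (x - 1.1)^3 (x - 2.1), written out as the quartic.
f = lambda x: x**4 - 5.4 * x**3 + 10.56 * x**2 - 8.954 * x + 2.7951

x0, x1, n = 1.0, 1.2, 0
while abs(x1 - 1.1) >= 1e-4 and n < 200:
    if f(x1) == f(x0):   # guard: the secant denominator would vanish
        break
    x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
    x0, x1 = x1, x2
    n += 1
print(n, x1)  # many more iterations than at a simple root
```

At a zero of multiplicity 3 the Secant Method is only linearly convergent, which is why so many iterates are needed.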

[Iteration table for Ex Problem 3 (n, x_n, error estimate) omitted; the error estimates decrease to roughly 10^-7 after 29 iterates.]

Ex Problem 4

Bradie, p. 125, #14: The Secant Method appears to be converging with order α ≈ 1.6. The lack of accuracy reflects the fact that the root has higher multiplicity. [Table (n, p(n), e(n), ln e(n)/ln e(n-1)) omitted.]

28 ============================================================= The Secant Method appears to be converging to the root. The data on the order of convergence are not really conclusive. ============================================================= n p(n) e(n) ln e(n)/ln e(n-1) ============================================================= ============================================================= The Secant Method appears to be converging toward the root with superlinear order α 1.6. Again, the data are not really conclusive. ============================================================= n p(n) e(n) ln e(n)/ln e(n-1) ============================================================= ============================================================= Ex Problem 5 Bradie, p. 125, #15 Choosing initial iterates p 0 = 0.3 and p 1 = 1.2, the Secant Method appears to converge to the root, but the apparent order is about α 1. This problem is very sensitive to initial guesses. ============================================================ n p(n) e(n) ln e(n)/ln e(n-1) ============================================================

29 ============================================================ The Secant Method converges to the root. The order of convergence cannot be determined from this data. ============================================================= n p(n) e(n) ln e(n)/ln e(n-1) ============================================================= ============================================================= Convergence to the root initially is superlinear, although we cannot really estimate the order. Later, the order becomes roughly linear. ============================================================= n p(n) e(n) ln e(n)/ln e(n-1) ============================================================= ============================================================= 29

Ex Problem 6

Consider the function f(x) = 2cos(x) - 2e^(-x) + 1 on the interval [-2, 2]. When you plot the function, you will see that it has two roots on this interval.

(a) Write down an explicit fixed point method for finding each root, including a reasonable starting point. The iteration function may be the same for both roots. The methods should have linear-order convergence.

A reasonable choice for the negative root is g_-(x) = x - 0.2 f(x) with starting point p_0 = -0.5. A reasonable choice for the positive root is g_+(x) = x + 0.5 f(x) with starting point p_0 = 2.

(b) Use the theorem on page 87 to prove that both methods converge for the starting points you specify.

The function g_- increases on [-0.5, -0.2] and maps the endpoints to approximately -0.39 and -0.30. On the same interval, its derivative increases from about 0.15 to about 0.43. The theorem applies.

The function g_+ is increasing on [1.8, 2] and maps the endpoints to approximately 1.91 and 1.95. On the same interval, its derivative increases from about 0.19 to about 0.23. The theorem applies.

(c) Write a program that uses this fixed point method to find the positive root of f. Use the stopping condition on page 92 with the prescribed tolerance.

function p = FixPoint( g, p1, tol )
p(1) = p1;
p(2) = g(p1);
n = 2;
errorest = Inf;
while (errorest > tol)
    n = n + 1;
    p(n) = g( p(n-1) );
    dg = (p(n) - p(n-1))/(p(n-1) - p(n-2));
    errorest = abs( dg*(p(n) - p(n-1)) / ( dg - 1 ) );
end

(d) Verify from the numerical output of your algorithm that the method converges linearly, and estimate the asymptotic error constant. The following formulae are relevant:

α ≈ log|e_n| / log|e_(n-1)| and λ ≈ |e_n| / |e_(n-1)|.

[Table (n, p(n), e(n), alpha, lambda) omitted; alpha stays near 1 and lambda settles near a constant, confirming linear convergence.]

(e) Write a second program that uses Aitken's Δ^2 method from Section 2.6 to accelerate your fixed-point iteration. Apply the stopping criterion from page 92 to the accelerated sequence with the same tolerance.

function phat = AitkenDelta( g, p1, tol )
p(1) = p1;
p(2) = g( p1 );
n = 2;
errorest = Inf;
while (errorest > tol)
    n = n + 1;
    p(n) = g( p(n-1) );
    phat(n) = p(n) - (p(n) - p(n-1))^2 / (p(n) - 2*p(n-1) + p(n-2));
    if (n >= 5)

        dg = (phat(n) - phat(n-1)) / (phat(n-1) - phat(n-2));
        errorest = abs( dg*(phat(n) - phat(n-1)) / ( dg - 1 ) );
    end
end

(f) Use the numerical output of your second program to estimate the order of convergence and asymptotic rate constant of the output. How do they compare with the non-accelerated method?

[Table (n, phat(n), e(n), alpha, lambda) omitted; the accelerated sequence converges with a noticeably smaller rate constant than the non-accelerated one.]

Ex Problem 7

Problem 5 from section 2.6: The sequence listed below was obtained from fixed point iteration applied to the function g(x) = 10/(2 + x), which has the unique fixed point -1 + sqrt(11) ≈ 2.3166.

(a) Apply Aitken's Δ^2 method to the given sequence. (You may do this by hand or by writing a small script that you can reuse.) Give all of the p̂_k's.

Using the first three terms in the fixed point iteration sequence, we calculate

p̂_3 = p_3 - (p_3 - p_2)^2 / (p_3 + p_1 - 2p_2).

Next, using p_2, p_3 and p_4, we calculate

p̂_4 = p_4 - (p_4 - p_3)^2 / (p_4 + p_2 - 2p_3).

Continuing in this manner, we find p̂_5, p̂_6, and p̂_7. [Numerical values omitted.]
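The whole calculation can be scripted; a sketch (Python for illustration — the starting value p_0 = 2 is an assumption, since the original sequence is not recoverable here):

```python
import math

# Fixed-point iterates of g(x) = 10/(2 + x), then Aitken's delta^2 in the
# same backward form used above: phat_n = p_n - (p_n - p_(n-1))^2 / (p_n - 2 p_(n-1) + p_(n-2)).
g = lambda x: 10 / (2 + x)

p = [2.0]                       # assumed p_0
for _ in range(8):
    p.append(g(p[-1]))

phat = [p[n] - (p[n] - p[n - 1]) ** 2 / (p[n] - 2 * p[n - 1] + p[n - 2])
        for n in range(2, len(p))]

fp = -1 + math.sqrt(11)         # exact fixed point of g
print(abs(p[-1] - fp), abs(phat[-1] - fp))  # the accelerated error is far smaller
```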

(b) To ten digits, the fixed point of g is x = 2.316624790. Use this to show that both the original sequence and the output from Aitken's Δ^2 method are linearly convergent, and estimate the corresponding asymptotic error constants. By how much has Aitken's Δ^2 method reduced the asymptotic error constant?

The ratios |e_n|/|e_(n-1)| for the two sequences confirm that both the original fixed point sequence and the accelerated sequence converge linearly. The asymptotic error constant for the fixed point sequence is roughly 0.23, while the asymptotic error constant for the accelerated sequence is roughly 0.05: Aitken's Δ^2 method has reduced the error constant by nearly 80%. [Comparison table (p_n and |e_n|/|e_(n-1)| for the Fixed Point and Aitken Δ^2 sequences) omitted.]

Ex Problem 8

Problem 8 from section 2.6:

(a) Perform ten iterations to approximate the fixed point of g(x) = ln(4 + x - x^2) using p_0 = 2. Verify numerically that the sequence converges linearly and estimate the asymptotic error constant. The fixed point is x ≈ 1.2887.

Let g(x) = ln(4 + x - x^2) and take p_0 = 2. [Iteration table (n, p_n, |e_n|/|e_(n-1)|) omitted.] The ratios |e_n|/|e_(n-1)| confirm the linear convergence of the sequence, with an asymptotic error constant of roughly 0.43.

(b) Accelerate the convergence of the sequence obtained in part (a) using Aitken's Δ^2 method. By how much has Aitken's Δ^2 method reduced the asymptotic error constant?

Applying Aitken's Δ^2 method to the sequence obtained in part (a) produces a new sequence (table omitted). The ratios |e_n|/|e_(n-1)| confirm the linear convergence of this sequence, with an asymptotic error constant of roughly 0.19, more than 50% lower than the error constant for the original sequence.

(c) Apply Steffensen's method to g(x) = ln(4 + x - x^2) using the same starting approximation specified in part (a). Perform four iterations, and verify that convergence is quadratic.

Let g(x) = ln(4 + x - x^2) and take p_0 = 2. Steffensen's method produces a rapidly converging sequence (table omitted). The ratios |e_n|/|e_(n-1)|^2 confirm quadratic convergence of the sequence.
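Steffensen's method from part (c) takes only a few lines (Python for illustration):

```python
import math

# Steffensen's method for g(x) = ln(4 + x - x^2), starting from p_0 = 2:
# each step runs two fixed-point updates and applies the delta^2 formula.
g = lambda x: math.log(4 + x - x * x)

p = 2.0
for _ in range(4):
    p1 = g(p)
    p2 = g(p1)
    p = p - (p1 - p) ** 2 / (p2 - 2 * p1 + p)
print(p)  # fixed point, near 1.2887
```

Four iterations already agree with g(p) to near machine precision, consistent with quadratic convergence.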


CS 450 Numerical Analysis. Chapter 5: Nonlinear Equations Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80

More information