
550.386 Homework No. 2 Solutions    TA: Theo Drivas

Problem 1. (a) Consider the initial-value problems for the following two differential equations:

    (i)  dy/dt = y,  y(0) = y_0;        (ii)  dy/dt = -y,  y(0) = y_0.

Use separation of variables (or any other technique) to solve these two problems exactly as explicit functions y(t; y_0) of time t and initial condition y_0. Are these problems well-posed? Explain your answer.

(b) For each of the two problems in (a), calculate both the absolute condition number and the relative condition number

    Abs.cond(y(t; y_0)) := | ∂y(t; y_0)/∂y_0 |,
    Rel.cond(y(t; y_0)) := | y_0 / y(t; y_0) | · | ∂y(t; y_0)/∂y_0 |.

For the specific time t = 100, is each problem absolutely well-conditioned? Relatively well-conditioned? Explain your answers.

(c) Later in the course we shall study the Euler method to approximately solve problem (ii) in part (a):

    ( ŷ(t_{n+1}) - ŷ(t_n) ) / h = -ŷ(t_n),    t_n = nh,  n = 0, 1, 2, ...;    ŷ(0) = y_0,

which gives the approximation ŷ(t_n) ≈ y(t_n) by solving a simple first-order linear difference equation. Find the explicit expression for the approximate solution ŷ(t_n; y_0, h) as a function of t_n, y_0, and h. Also calculate the absolute and relative condition numbers of the approximation, using definitions similar to those in part (b). For what values of h is the absolute condition number always less than 1?

(Sol a) The general solution of dy/dt = ±y is

    y(t; y_0) = y_0 e^{±t}.

These problems are well posed because (i) a solution exists, (ii) it is unique, and (iii) it depends continuously on the data (here, y_0).

(Sol b) The absolute condition numbers are

    Abs.cond(y(t; y_0)) := | ∂y(t; y_0)/∂y_0 | = e^t     for dy/dt = y,
                                                = e^{-t}  for dy/dt = -y.

The relative condition numbers are identical for both problems, because ∂y(t; y_0)/∂y_0 = y(t; y_0)/y_0:

    Rel.cond(y(t; y_0)) := | y_0 / y(t; y_0) | · | ∂y(t; y_0)/∂y_0 | = 1.

Clearly both problems are relatively well-conditioned, since both relative condition numbers are unity for all time. Because

    Abs.cond(y(100; y_0)) = e^{100}            for dy/dt = y,
                          = e^{-100} ≈ 0       for dy/dt = -y,

the first problem is not absolutely well-conditioned at time t = 100, but the second is absolutely well-conditioned at t = 100.
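As a quick numerical illustration (not part of the original solution), the difference in absolute conditioning at t = 100 can be seen directly in MATLAB; the amplification factors exp(±100) are the ones derived above, and the perturbation size 1e-10 is an arbitrary choice for this sketch:

dy0 = 1e-10;          % perturbation of the initial data y_0
exp(100)*dy0          % ~2.7e33: problem (i) amplifies the perturbation enormously
exp(-100)*dy0         % ~3.7e-54: problem (ii) damps it essentially to zero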

(Sol c) For t_n = nh, n = 0, 1, 2, ..., the Euler scheme

    ( ŷ(t_{n+1}) - ŷ(t_n) ) / h = -ŷ(t_n),    i.e.    ŷ(t_{n+1}) = (1 - h) ŷ(t_n),

has the explicit solution

    ŷ(t_n) = (1 - h)^{t_n/h} y_0.

The condition numbers are

    Abs.cond(ŷ(t_n; y_0)) := | ∂ŷ(t_n; y_0)/∂y_0 | = |1 - h|^{t_n/h},
    Rel.cond(ŷ(t_n; y_0)) := | y_0 / ŷ(t_n; y_0) | · | ∂ŷ(t_n; y_0)/∂y_0 | = 1.

For h satisfying -1 < 1 - h < 1, i.e. 0 < h < 2, the absolute condition number is necessarily less than 1.
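A short check of this closed form (not in the original write-up): iterating the Euler recursion for problem (ii) and comparing with (1-h)^{t_n/h} y_0 and with the exact solution e^{-t_n} y_0; the step size h = 0.1 and horizon N = 50 are arbitrary choices for the sketch.

y0 = 1; h = 0.1; N = 50;
yhat = y0;
for n = 1:N
    yhat = yhat - h*yhat;            % one Euler step: yhat_{n+1} = (1-h)*yhat_n
end
[yhat  (1-h)^N*y0  exp(-N*h)*y0]     % iterate, closed form, exact solution at t_N = N*h

The first two entries agree to rounding error; the third differs only by the discretization error of the Euler method.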

Problem 2. (a) Use the bisection method to find the roots of

    f(x) = sin(x) - ln(x^4 + 1/2) - x

with tolerance TOL = 10^{-15}. Make certain that you find all of the roots of the given function. Record the final endpoints of the bracketing interval, the achieved tolerance, the number of iterations, and the computational time for each root. (You may use the prepared MATLAB script bisect.m, if you wish.)

(b) Repeat part (a) using instead the Newton-Fourier method. This method combines the Newton iteration for x_n with another iteration

    y_{n+1} = y_n - f(y_n)/f'(x_n).

Assuming that f is increasing and convex on the interval [x_0, y_0], i.e. that

    f'(x) > 0,  f''(x) > 0    for all x in [x_0, y_0],        (*)

and that x_0, y_0 bracket a root, i.e. f(x_0) < 0 and f(y_0) > 0, then the iterates [x_n, y_n] converge quadratically and always bracket the root. Write your own MATLAB script newtfour.m to implement this algorithm, e.g. by modifying newton.m.

(c) If the condition (*) is not satisfied, but f' and f'' do not change sign on an interval around the root, then one of the following modified functions

    f_2(x) = -f(x),    f_3(x) = f(-x),    f_4(x) = -f(-x)

will satisfy (*). Use the Newton-Fourier method to find all of the roots of the function in part (a) to tolerance TOL = 10^{-15}. For each root use the same initial interval [x_0, y_0] as you did for the bisection method in part (a). Note that the function may not satisfy the condition (*) for each of the roots; in that case, you will need to use one of the transformed functions discussed above.

(d) Compare the results of the bisection and Newton-Fourier methods (final endpoints of the bracketing interval, achieved tolerance, number of iterations, and computational time). Which method is preferable and why?

(Sol a)

>> f=inline('sin(x)-log(x.^4+1/2)-x')
>> tol=10^(-15)

>> figure; fplot(@(x) [f(x), 0*x], [-10,10])
>> pause

Figure 1: The function f(x) = sin(x) - ln(x^4 + 1/2) - x.

>> format long

>> % BISECTION

>> a=0;b=1;
>> bisect

k = 50
ans = 0.8047087931477   0.8047087931478   0.000000000000001
Elapsed time is 0.099088 seconds.
>> pause

>> a=-1;b=0;
>> bisect

k = 50
ans = -0.88670835485078   -0.88670835485077   0.000000000000001
Elapsed time is 0.19877 seconds.

>> pause

>> a=-10;b=-9;
>> bisect

k = 47
ans = -9.134515916915710   -9.134515916915703   0.000000000000007
Elapsed time is 0.078596 seconds.
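The prepared script bisect.m itself is not reproduced in the transcription. For reference, here is a minimal sketch of a bisection script consistent with how it is called above (it assumes f, a, b and tol are already defined in the workspace, that the initial interval [a,b] brackets a root, and it reports the final bracket [a b b-a], the iteration count k, and the elapsed time); the actual course script may differ in its details:

tic
k = 0;
fa = f(a);
while (b - a) > tol*max(abs(b), 1.0)
    k = k + 1;
    c = (a + b)/2;              % midpoint of the current bracket
    if sign(f(c)) == sign(fa)
        a = c; fa = f(c);       % root lies in the right half
    else
        b = c;                  % root lies in the left half
    end
end
k
[a b b-a]
toc

With tol = 10^{-15} this stopping rule halves the bracket until its width is below about 10^{-15}·max(|b|,1), which is consistent with the iteration counts k = 50, 50, 47 reported above.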

(Sol b) Here is a version of newtfour.m:

tic

itmax=100;

if sign(f(x))==sign(f(y))
    'failure to bracket root'
    return
end

k=0
[x y y-x]

while abs(x-y)>tol*max(abs(y),1.0)
    if k+1>itmax
        break
    end
    xold=x;
    yold=y;
    k=k+1
    DF=Df(y);        % the derivative is evaluated once per step and re-used below
    x=x-f(x)/DF;
    y=y-f(y)/DF;
    [x y y-x]
end

toc

(Sol c) We now apply this algorithm:

% FOURIER-NEWTON

>> fold=f;
>> figure; fplot(@(x) [fold(x), 0*x], [0,1])
>> pause
>> % f(x) = -f(x), i.e. the transformed function f_2
>> f=inline('-(sin(x)-log(x.^4+1/2)-x)')
>> figure; fplot(@(x) [f(x), 0*x], [0,1])
>> Df=inline('-cos(x)+4*x.^3./(x.^4+1/2)+1')

>> x=0;y=1;
>> newtfour

k = 7
ans = 0.8047087931477   0.8047087931477   0
Elapsed time is 0.03609 seconds.

>> pause

>> figure; fplot(@(x) [fold(x), 0*x], [-1,0])
>> pause
>> % f(x) = -f(-x), i.e. the transformed function f_4
>> f=inline('sin(x)+log(x.^4+1/2)-x')
>> figure; fplot(@(x) [f(x), 0*x], [0,1])
>> Df=inline('cos(x)+4*x.^3./(x.^4+1/2)-1')

>> x=0;y=1;
>> newtfour

k = 7
ans = 0.88670835485078   0.88670835485078   0
Elapsed time is 0.01574 seconds.

>> pause

>> figure; fplot(@(x) [fold(x), 0*x], [-10,-9])
>> pause
>> % f(x) = f(-x), i.e. the transformed function f_3
>> f=inline('-sin(x)-log(x.^4+1/2)+x')
>> figure; fplot(@(x) [f(x), 0*x], [9,10])
>> Df=inline('-cos(x)-4*x.^3./(x.^4+1/2)+1')

>> x=9;y=10;
>> newtfour

k = 4
ans = 9.134515916915706   9.134515916915708   0.00000000000000
Elapsed time is 0.008470 seconds.

Figure 2: The transformed function f_2(x) = -sin(x) + ln(x^4 + 1/2) + x (left) and its derivative f_2'(x) = -cos(x) + 4x^3/(x^4 + 1/2) + 1 (right) on the interval x in [0, 1]. Similar plots are generated by the code for the other modified functions.

Note that for the transformed functions the roots reported by newtfour are those of f_4 or f_3; the corresponding roots of the original f are obtained by flipping the sign, namely x ≈ -0.8867 and x ≈ -9.1345, in agreement with the bisection results of part (a).

(Sol d) The Newton-Fourier method is preferable: it requires less computational time and fewer iterations than bisection, while achieving approximately the same bracketing interval and tolerance.

Problem 3. (a) Use Newton's method to find the smallest positive root of

    f(x) = sin(x^3) - x/2,

again with tolerance TOL = 10^{-15}. Record the final approximate root, the achieved tolerance, the number of iterations, and the computational time.

(b) Repeat part (a) using the secant method. Use the same initial x_0 as you did for Newton's method in (a) and make one Newton iteration to generate x_1.

(c) Repeat part (a) using the method of inverse quadratic interpolation (IQI). Use the same initial x_0, x_1 as you did for the secant method in (b), and make one secant iteration to generate x_2.

(d) Compare the results of the three methods (final endpoints of the bracketing interval, achieved tolerance, number of iterations, and computational time). Which method is preferable and why?

(Sol a)

Figure 3: The function f(x) = sin(x^3) - x/2.

>> Ntry=1000;
>> f=inline('sin(x.^3)-x/2')
>> Df=inline('3*x.^2.*cos(x.^3)-1/2')
>> figure; fplot(@(x) [f(x), 0*x], [0,1])
>> pause

>> timen=0;
>> for nt=1:Ntry
>>    x=1;
>>    tic
>>    newton
>>    timen=timen+toc;
>> end
>> timen=timen/Ntry;
>> k=k
>> [x abs(x-xold)]
>> disp(['Elapsed time is ', num2str(timen), ' seconds'])

k = 6
ans = 0.71506383438961   0.000000000000000
Elapsed time is 0.00555 seconds
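The script newton.m invoked above is not reproduced in the transcription. A minimal sketch consistent with how it is used (it expects f, Df, x and tol in the workspace and leaves the last iterate in x, the previous one in xold and the iteration count in k) could look like the following; it is essentially the loop written out explicitly in the solution of Problem 5 below:

itmax=100;
k=0;
xold=Inf;
while abs(x-xold)>tol*max(abs(x),1.0)
    if k+1>itmax
        break
    end
    xold=x;
    k=k+1;
    x=x-f(x)/Df(x);      % Newton update
end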

(Sol b)

>> times=0;
>> for nt=1:Ntry
>>    a=1;
>>    clear b
>>    tic
>>    secant
>>    times=times+toc;
>> end
>> times=times/Ntry;
>> k=k
>> [b abs(b-a)]
>> disp(['Elapsed time is ', num2str(times), ' seconds'])

k = 7
ans = 0.71506383438961   0.000000000000001
Elapsed time is 0.00891 seconds

(Sol c)

>> timei=0;
>> for nt=1:Ntry
>>    a=1;
>>    clear b
>>    clear c
>>    tic
>>    iquadi
>>    timei=timei+toc;
>> end
>> timei=timei/Ntry;
>> k=k
>> [c abs(c-b)]
>> disp(['Elapsed time is ', num2str(timei), ' seconds'])

k = 7
ans = 0.71506383438961   0.000000000000000
Elapsed time is 0.008756 seconds

(Sol d) Newton's method is preferable here, since it requires less computational time than the secant and IQI methods.

Note: these three methods give roughly the same final approximate root and achieved tolerance. The number of iterations in Newton's method is the smallest, but every Newton iteration requires the computer to evaluate f'(x). For some problems this can make each Newton iteration slower than a secant or IQI iteration, since those methods do not need that extra derivative evaluation.
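The scripts secant.m and iquadi.m used above are likewise not reproduced. For reference, the core update of an inverse-quadratic-interpolation step, with a, b, c the three most recent iterates, might be sketched as follows (it interpolates x as a quadratic function of y through the three points and evaluates that quadratic at y = 0):

fa=f(a); fb=f(b); fc=f(c);
d = a*fb*fc/((fa-fb)*(fa-fc)) ...
  + b*fa*fc/((fb-fa)*(fb-fc)) ...
  + c*fa*fb/((fc-fa)*(fc-fb));     % next iterate
a=b; b=c; c=d;                     % shift the three-term history

The corresponding secant update uses only the last two iterates: b_new = b - f(b)*(b-a)/(f(b)-f(a)).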

Problem 4. Derive the error formula for the secant method,

    x* - x_{n+1} = -(x* - x_n)(x* - x_{n-1}) · f[x_{n-1}, x_n, x*] / f[x_{n-1}, x_n],

in terms of divided differences.

By direct computation, starting from the secant update x_{n+1} = x_n - f(x_n)(x_n - x_{n-1})/(f(x_n) - f(x_{n-1})),

    x* - x_{n+1} = x* - x_n + f(x_n) (x_n - x_{n-1}) / ( f(x_n) - f(x_{n-1}) )

                 = [ (f(x_n) - f(x_{n-1})) (x* - x_n)/(x_n - x_{n-1}) + f(x_n) ] · (x_n - x_{n-1}) / ( f(x_n) - f(x_{n-1}) )

                 = (x* - x_n) [ (f(x_n) - f(x_{n-1}))/(x_n - x_{n-1}) + f(x_n)/(x* - x_n) ] · (x_n - x_{n-1}) / ( f(x_n) - f(x_{n-1}) )

                 = (x* - x_n) [ (f(x_n) - f(x_{n-1}))/(x_n - x_{n-1}) - (f(x*) - f(x_n))/(x* - x_n) ] · (x_n - x_{n-1}) / ( f(x_n) - f(x_{n-1}) ),

where we have made use of the fact that f(x*) = 0. Continuing on, in divided-difference notation,

    x* - x_{n+1} = (x* - x_n) ( f[x_{n-1}, x_n] - f[x_n, x*] ) / f[x_{n-1}, x_n]

                 = -(x* - x_n)(x* - x_{n-1}) · [ ( f[x_n, x*] - f[x_{n-1}, x_n] ) / (x* - x_{n-1}) ] / f[x_{n-1}, x_n]

                 = -(x* - x_n)(x* - x_{n-1}) · f[x_{n-1}, x_n, x*] / f[x_{n-1}, x_n].
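Since every step in this computation is an algebraic identity (given f(x*) = 0), the formula can be checked numerically to rounding error. A small sketch of such a check, with the test function f(x) = x^3 - 2 and the points x_{n-1} = 1.1, x_n = 1.3 chosen arbitrarily for the example:

f  = inline('x.^3-2');
xs = 2^(1/3);                                % root x*
xm = 1.1;  xn = 1.3;                         % x_{n-1}, x_n
xp = xn - f(xn)*(xn-xm)/(f(xn)-f(xm));       % secant step x_{n+1}
d1 = (f(xn)-f(xm))/(xn-xm);                  % f[x_{n-1}, x_n]
d2 = ((f(xs)-f(xn))/(xs-xn) - d1)/(xs-xm);   % f[x_{n-1}, x_n, x*]
[xs-xp,  -(xs-xn)*(xs-xm)*d2/d1]             % the two entries agree to rounding error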

Problem 5. Numerically calculate the Newton iterates for solving x^2 - 1 = 0, using x_0 = 2^{25}. Identify and explain the resulting speed of convergence.

(Sol)

>> f=inline('x^2-1');
>> Df=inline('2*x');
>> tol=eps;

>> MM=25;
>> x=2^MM;

>> k=0;
>> xold=0;
>> itmax=50;
>> xx(k+1)=x;
>> kk(k+1)=k;

>> while abs(x-xold)>tol*max(abs(x),1.0)
>>    if k+1>itmax
>>       break
>>    end
>>    xold=x;
>>    k=k+1
>>    x=x-f(x)/Df(x);
>>    xx(k+1)=x;
>>    kk(k+1)=k;
>>    [x abs(x-xold)]
>> end

k = 31
ans = 1   0

>> plot(kk,log2(xx),'-b',kk,MM-kk,'-r',kk,0*kk,'-k')
>> axis([0 k MM-k MM])
>> xlabel('$k$','fontsize',15,'interpreter','latex')
>> ylabel('$\log_2 (x_k)$','fontsize',15,'interpreter','latex')
>> title('Newton iterates $x_k$ for $f(x)=x^2-1$ and ...
   $x_0=2^{25}$','FontSize',18,'Interpreter','latex')
>> legend('x_k','2^{25-k}','1')

We can understand the rate of convergence as follows. The Newton map is

    x_new = x - f(x)/f'(x) = x - (x^2 - 1)/(2x) = (1/2)(x + 1/x) ≈ x/2    for x >> 1,

so while the iterates are still large the error is only cut roughly in half at each step, i.e. the convergence is initially linear with rate 1/2 (this is the phase x_k ≈ 2^{25-k} visible in the plot); only once x_k gets close to 1 does the usual quadratic convergence of Newton's method take over.

Problem 6. Define an iteration formula by

    z_{n+1} = x_n - f(x_n)/f'(x_n),
    x_{n+1} = z_{n+1} - f(z_{n+1})/f'(x_n).

This is like two Newton steps, with the second re-using the derivative from the first.

(a) Show that the order of convergence of x_n to x* is at least 3. Hint: exploit the theorem proved in class on the order of convergence of fixed-point methods (generalizing Theorem 2.9 in Burden & Faires), and let

    h(x) = x - f(x)/f'(x),        g(x) = h(x) - f(h(x))/f'(x).

(b) Compare this method to Newton's method in terms of the number of function and derivative evaluations. If τ_f is the time required to evaluate f and τ_{f'} is the additional time required to evaluate f', how large must τ_{f'}/τ_f be in order for the new iteration to converge faster than Newton?

(Sol a) We first calculate the derivatives of h(x) = x - f(x)/f'(x):

    h'(x)  = f(x) f''(x) / (f'(x))^2,
    h''(x) = f''(x)/f'(x) + f(x) · d/dx [ f''(x)/(f'(x))^2 ],

so h'(x*) = 0 and h''(x*) = f''(x*)/f'(x*). Now we calculate the derivatives of g(x) = h(x) - f(h(x))/f'(x):

    g'(x)  = h'(x) ( 1 - f'(h(x))/f'(x) ) + f(h(x)) f''(x)/(f'(x))^2,

    g''(x) = h''(x) ( 1 - f'(h(x))/f'(x) ) + h'(x) · d/dx ( 1 - f'(h(x))/f'(x) )
             + f'(h(x)) h'(x) f''(x)/(f'(x))^2 + f(h(x)) · d/dx ( f''(x)/(f'(x))^2 ).

At x = x* we have h(x*) = x*, f(x*) = 0 and h'(x*) = 0, so g'(x*) = 0, and since every term of g''(x*) contains one of the vanishing factors h'(x*), f(h(x*)) or ( 1 - f'(h(x*))/f'(x*) ), also g''(x*) = 0.

Since g(x*) = x* and g'(x*) = g''(x*) = 0, we can apply the theorem proved in class (see the fixed-point convergence theorem in Ch. 2 of the notes) to conclude that the order of convergence is at least 3.

(Sol b) Each step of the new method costs two evaluations of f and one of f', against one of each for Newton, while the number of iterations needed for a given accuracy scales roughly like 1/log p for a method of order p. The total convergence times for Newton and for the third-order method here are therefore

    τ_newt = (τ_f + τ_{f'}) · log K / log 2,        τ_3rd = (2 τ_f + τ_{f'}) · log K / log 3.

Thus

    τ_3rd / τ_newt = ( (2 τ_f + τ_{f'}) / (τ_f + τ_{f'}) ) · ( log 2 / log 3 ) < 1

iff

    (2 τ_f + τ_{f'}) / (τ_f + τ_{f'}) < log 3 / log 2 = log_2 3

iff

    τ_{f'} / τ_f > (2 - log_2 3) / (log_2 3 - 1) ≈ 0.709511935.
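A short numerical illustration of the cubic convergence (not part of the original solution), using the test function f(x) = x^3 - 2 with root x* = 2^{1/3} and the arbitrary starting guess x_0 = 1.5; the error should roughly cube at every step until rounding error is reached:

f  = inline('x.^3-2');
Df = inline('3*x.^2');
xs = 2^(1/3);
x  = 1.5;
for n = 1:5
    dfx = Df(x);            % one derivative evaluation per step
    z   = x - f(x)/dfx;     % first half-step (Newton)
    x   = z - f(z)/dfx;     % second half-step, re-using the same derivative
    disp([n  abs(x-xs)])
end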

Problem 7. Given below is a table of iterates from a linearly convergent iteration x_{n+1} = g(x_n). Estimate (a) the rate of linear convergence, (b) the fixed point x*, and (c) the error x_7 - x*.

    n    x_n
    0    1.0000000000
    1    1.398157033
    2    1.50079904
    3    1.555937987
    4    1.56644433
    5    1.56951650
    6    1.57049831
    7    1.5706869770

(d) Use x_7 and the error estimate in (c) to make an improved estimate of the fixed point value x* to which the sequence converges.

(Sol)

% g=inline('x+cos(x/2)-sin(x/2)')
>> xi(1)=1;
>> xi(2)=1.398157033;
>> xi(3)=1.50079904;
>> xi(4)=1.555937987;
>> xi(5)=1.56644433;
>> xi(6)=1.56951650;
>> xi(7)=1.57049831;
>> xi(8)=1.5706869770;

>> x=xi(3:8);
>> xp=xi(2:7);
>> xpp=xi(1:6);

(Sol a) We estimate the rate of convergence from the ratios of successive differences,

    λ_n ≈ (x_n - x_{n-1}) / (x_{n-1} - x_{n-2}),        λ_7 ≈ 0.2989336666786.

>> lamest=(x-xp)./(xp-xpp);

lamest = 0.3061808435913   0.29410463638369   0.29997704311184   0.2990187784581   0.29893986498991   0.2989336666786

>> lamest=lamest(6)

lamest = 0.2989336666786

(Sol b) The naive estimate of the fixed point is simply x_7 = 1.5706869770.

(Sol c) We estimate the error from the linear-convergence model,

    x* - x_n ≈ λ_n (x_n - x_{n-1}) / (1 - λ_n),

which gives x* - x_7 ≈ 1.093 × 10^{-4}.

>> errest=lamest.*(x-xp)./(lamest-1);

errest = -0.05381377578360   -0.014939930701155   -0.00435406706010   -0.001747656488   -0.000373344949688   -0.000109349931617

>> errest=errest(6)

errest = -0.000109349931617

(Sol d) Exploiting our estimate of the error, we use Aitken extrapolation to obtain an improved estimate:

    x̂_n = x_n + ( λ_n / (1 - λ_n) ) (x_n - x_{n-1})
        = x_n - (x_n - x_{n-1})^2 / ( x_n - 2 x_{n-1} + x_{n-2} ),

which for n = 7 gives x̂_7 ≈ 1.570796326931617.

>> xx=x-errest;

xx = 1.573893631778360   1.570877917901155   1.570798385906010   1.57079637856488   1.57079638049688   1.570796326931617

>> fixest=xx(6)

fixest = 1.570796326931617

You may recognize this number x* as π/2 = 1.570796326794897! Clearly the Aitken extrapolate x̂_7 gives a much more accurate estimate (relative error ≈ 8.70 × 10^{-11}) than does the naive guess x_7 (relative error ≈ 6.96 × 10^{-5}).
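For reference (not in the original), the same extrapolation can be written in one vectorized line directly from the Δ² form above; applied to the tabulated iterates it should reproduce the improved estimates computed by hand:

>> xhat = xi(3:8) - (xi(3:8)-xi(2:7)).^2 ./ (xi(3:8) - 2*xi(2:7) + xi(1:6));
>> xhat(end)      % improved estimate of the fixed point built from x_5, x_6, x_7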