FIXED POINT ITERATION

The idea of fixed point iteration methods is to first reformulate an equation as an equivalent fixed point problem:
$$f(x) = 0 \quad\Longleftrightarrow\quad x = g(x)$$
and then to use the iteration: with an initial guess $x_0$ chosen, compute the sequence
$$x_{n+1} = g(x_n), \qquad n \ge 0$$
in the hope that $x_n \to \alpha$. There are infinitely many ways to introduce an equivalent fixed point problem for a given equation.
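As a minimal illustration (our own sketch, not part of the original notes; the function name, tolerance, and iteration cap are illustrative choices), the basic iteration in Python:

def fixed_point_iteration(g, x0, tol=1e-12, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until successive iterates agree to tol."""
    x = x0
    for n in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:   # stop when the update is negligible
            return x_new, n + 1
        x = x_new
    return x, max_iter             # may not have converged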

Example. We begin with an example. Consider solving the two equations
E1: $x = 1 + 0.5\sin x$
E2: $x = 3 + 2\sin x$
[Figure: two plots, one for each equation, showing the line $y = x$ intersecting the curves $y = 1 + 0.5\sin x$ and $y = 3 + 2\sin x$; each intersection is the root $\alpha$.]

The solutions are
E1: $\alpha = 1.49870113351785$
E2: $\alpha = 3.09438341304928$
We are going to use a numerical scheme called fixed point iteration. It amounts to making an initial guess $x_0$ and substituting it into the right side of the equation. The resulting value is denoted by $x_1$; then the process is repeated, this time substituting $x_1$ into the right side. This is repeated until convergence occurs or until the iteration is terminated. For $n = 0, 1, 2, \dots$:
E1: $x_{n+1} = 1 + 0.5\sin x_n$
E2: $x_{n+1} = 3 + 2\sin x_n$

We show the results of the first 10 iterations in the table below. Clearly convergence is occurring with E1, but not with E2. Why?

 n        E1: x_n             E2: x_n
 0   0.00000000000000   3.00000000000000
 1   1.00000000000000   3.28224001611973
 2   1.42073549240395   2.71963177181556
 3   1.49438099256432   3.81910025488514
 4   1.49854088439917   1.74629389651652
 5   1.49869535552190   4.96927957214762
 6   1.49870092540704   1.06563065299216
 7   1.49870112602244   4.75018861639465
 8   1.49870113324789   1.00142864236516
 9   1.49870113350813   4.68448404916097
10   1.49870113351750   1.00077863465869
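The table is easy to reproduce with a few lines of Python (a sketch of our own; the print formatting is an arbitrary choice):

import math

x1, x2 = 0.0, 3.0                      # initial guesses for E1 and E2
for n in range(11):
    print(f"{n:2d}  {x1:.14f}  {x2:.14f}")
    x1 = 1 + 0.5 * math.sin(x1)        # E1 iteration
    x2 = 3 + 2 * math.sin(x2)          # E2 iteration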

Fixed point iteration methods

In general, we are interested in solving the equation $x = g(x)$ by means of the fixed point iteration
$$x_{n+1} = g(x_n), \qquad n = 0, 1, 2, \dots$$
It is called fixed point iteration because the root $\alpha$ of the equation $x - g(x) = 0$ is a fixed point of the function $g(x)$, meaning that $\alpha$ is a number for which $g(\alpha) = \alpha$. Newton's method
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
is also an example of fixed point iteration, for the equation
$$x = x - \frac{f(x)}{f'(x)}$$

EXISTENCE THEOREM

We begin by asking whether the equation $x = g(x)$ has a solution. For this to occur, the graphs of $y = x$ and $y = g(x)$ must intersect, as seen in the earlier graphs.
[Figure: the same two plots as before, with $y = x$ crossing $y = 1 + 0.5\sin x$ and $y = 3 + 2\sin x$ at the root $\alpha$.]

Solution Existence Lemma: Let $g(x)$ be a continuous function on the interval $[a, b]$, and suppose it satisfies the property
$$a \le x \le b \quad\Longrightarrow\quad a \le g(x) \le b \qquad (\#)$$
Then the equation $x = g(x)$ has at least one solution $\alpha$ in the interval $[a, b]$.

The proof of this is fairly intuitive. Look at the function
$$f(x) = x - g(x), \qquad a \le x \le b$$
Evaluating at the endpoints and using (#),
$$f(a) = a - g(a) \le 0, \qquad f(b) = b - g(b) \ge 0$$
The function $f(x)$ is continuous on $[a, b]$, and therefore, by the intermediate value theorem, it has a zero in the interval.

Examples

Example 1. Consider the equation $x = 1 + 0.5\sin x$. Here
$$g(x) = 1 + 0.5\sin x$$
Note that $0.5 \le g(x) \le 1.5$ for any $x \in \mathbb{R}$. Also, $g(x)$ is a continuous function. Applying the existence lemma, we conclude that the equation $x = 1 + 0.5\sin x$ has a solution in any interval $[a, b]$ with $a \le 0.5$ and $b \ge 1.5$.

Example 2. Similarly, the equation $x = 3 + 2\sin x$ has a solution in any interval $[a, b]$ with $a \le 1$ and $b \ge 5$.
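As a quick sanity check (our own, not from the notes), one can verify numerically that $g(x) = 1 + 0.5\sin x$ maps $[0.5, 1.5]$ into itself:

import math

g = lambda x: 1 + 0.5 * math.sin(x)
xs = [0.5 + i / 1000 for i in range(1001)]   # sample points in [0.5, 1.5]
values = [g(x) for x in xs]
print(min(values), max(values))              # both lie inside [0.5, 1.5]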

Theorem. Assume $g(x)$ and $g'(x)$ exist and are continuous on the interval $[a, b]$; and further, assume
$$a \le x \le b \quad\Longrightarrow\quad a \le g(x) \le b$$
$$\lambda \equiv \max_{a \le x \le b} |g'(x)| < 1$$
Then:

S1. The equation $x = g(x)$ has a unique solution $\alpha$ in $[a, b]$.

S2. For any initial guess $x_0$ in $[a, b]$, the iteration
$$x_{n+1} = g(x_n), \qquad n = 0, 1, 2, \dots$$
will converge to $\alpha$.

S3. $$|\alpha - x_n| \le \frac{\lambda^n}{1 - \lambda}\,|x_1 - x_0|, \qquad n \ge 0$$

S4. $$\lim_{n \to \infty} \frac{\alpha - x_{n+1}}{\alpha - x_n} = g'(\alpha)$$
Thus for $x_n$ close to $\alpha$,
$$\alpha - x_{n+1} \approx g'(\alpha)(\alpha - x_n)$$
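To make S3 and S4 concrete, here is a sketch of our own that checks both for E1, using the value of $\alpha$ from the earlier table; for E1 we may take $\lambda = 0.5$, since $|g'(x)| = |0.5\cos x| \le 0.5$:

import math

g = lambda x: 1 + 0.5 * math.sin(x)
alpha = 1.49870113351785      # solution of E1, from the table above
lam = 0.5                     # λ = max |g'(x)| on the interval

x0 = 0.0
x, prev_err = x0, abs(alpha - x0)
for n in range(1, 11):
    x = g(x)
    err = abs(alpha - x)
    bound = lam**n / (1 - lam) * abs(g(x0) - x0)   # S3 bound: λ^n/(1-λ)|x_1 - x_0|
    print(f"{n:2d}  err={err:.2e}  bound={bound:.2e}  ratio={err/prev_err:.4f}")
    prev_err = err

The printed ratio settles near $g'(\alpha) = 0.5\cos\alpha \approx 0.036$, as S4 predicts, and every error sits below the S3 bound.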

The following general result is useful in the proof. By the mean value theorem, for any two points $w$ and $z$ in $[a, b]$,
$$g(w) - g(z) = g'(c)(w - z)$$
for some unknown point $c$ between $w$ and $z$. Therefore,
$$|g(w) - g(z)| \le \lambda |w - z| \qquad (@)$$
for any $a \le w, z \le b$.

For S1, suppose there are two solutions $\alpha$ and $\beta$: $\alpha = g(\alpha)$, $\beta = g(\beta)$. By (@),
$$|\alpha - \beta| = |g(\alpha) - g(\beta)| \le \lambda |\alpha - \beta|$$
implying $|\alpha - \beta| = 0$ since $\lambda < 1$.

For S2, note that from (#), if $x_0$ is in $[a, b]$, then $x_1 = g(x_0)$ is also in $[a, b]$. Repeat the argument to show that every $x_n$ belongs to $[a, b]$.

Subtract $x_{n+1} = g(x_n)$ from $\alpha = g(\alpha)$ to get
$$\alpha - x_{n+1} = g(\alpha) - g(x_n) = g'(c_n)(\alpha - x_n) \qquad (\$)$$
$$|\alpha - x_{n+1}| \le \lambda |\alpha - x_n| \qquad (*)$$
with $c_n$ between $\alpha$ and $x_n$. From (*), the error is guaranteed to decrease by at least a factor of $\lambda$ with each iteration. This leads to
$$|\alpha - x_n| \le \lambda^n |\alpha - x_0|, \qquad n \ge 0 \qquad (\%)$$
Convergence follows from the condition $\lambda < 1$.

For S3, use (*) for $n = 0$:
$$|\alpha - x_0| \le |\alpha - x_1| + |x_1 - x_0| \le \lambda|\alpha - x_0| + |x_1 - x_0|$$
$$|\alpha - x_0| \le \frac{1}{1 - \lambda}\,|x_1 - x_0|$$
Combine this with (%) to get the error bound.

For S4, use ($) to write
$$\frac{\alpha - x_{n+1}}{\alpha - x_n} = g'(c_n)$$
Since $x_n \to \alpha$ and $c_n$ is between $\alpha$ and $x_n$, we have $g'(c_n) \to g'(\alpha)$.

The statement
$$\alpha - x_{n+1} \approx g'(\alpha)(\alpha - x_n)$$
tells us that when near to the root $\alpha$, the errors will decrease by a constant factor of $g'(\alpha)$. If $g'(\alpha)$ is negative, then the errors will oscillate between positive and negative, and the iterates will approach $\alpha$ from both sides. When $g'(\alpha)$ is positive, the iterates will approach $\alpha$ from only one side.

The statements
$$\alpha - x_{n+1} = g'(c_n)(\alpha - x_n)$$
$$\alpha - x_{n+1} \approx g'(\alpha)(\alpha - x_n)$$
also tell us a bit more about what happens when
$$|g'(\alpha)| > 1$$
Then the errors will increase in size rather than decrease as the iteration proceeds near the root.
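A standard illustration of the oscillating case (our own example, not from the notes) is $g(x) = \cos x$, whose fixed point $\alpha \approx 0.739$ has $g'(\alpha) = -\sin\alpha \approx -0.674 < 0$, so the errors alternate in sign:

import math

alpha = 0.7390851332151607    # fixed point of cos(x)
x = 0.5
for n in range(8):
    x = math.cos(x)
    print(f"{n:2d}  x={x:.10f}  error={alpha - x:+.2e}")   # sign alternates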

Application of the theorem

Look at the earlier example. First consider
E1: $x = 1 + 0.5\sin x$
Here
$$g(x) = 1 + 0.5\sin x$$
We can take $[a, b]$ with any $a \le 0.5$ and $b \ge 1.5$. Note that
$$g'(x) = 0.5\cos x, \qquad |g'(x)| \le \frac{1}{2}$$
Therefore, we can apply the theorem and conclude that the fixed point iteration
$$x_{n+1} = 1 + 0.5\sin x_n$$
will converge for E1.

Application of the theorem (cont.)

Next consider the second equation
E2: $x = 3 + 2\sin x$
Here
$$g(x) = 3 + 2\sin x, \qquad g'(x) = 2\cos x$$
$$g'(\alpha) = 2\cos(3.09438341304928) \approx -1.998$$
Since $|g'(\alpha)| > 1$, the fixed point iteration
$$x_{n+1} = 3 + 2\sin x_n$$
will diverge for E2.
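These derivative values are easy to confirm numerically (a quick check of our own):

import math

alpha1, alpha2 = 1.49870113351785, 3.09438341304928
print(0.5 * math.cos(alpha1))   # ≈  0.036, |g'| < 1: E1 converges
print(2.0 * math.cos(alpha2))   # ≈ -1.998, |g'| > 1: E2 diverges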

Localized version of the theorem

Often, the theorem is not easy to apply directly, due to the need to identify an interval $[a, b]$ on which the conditions on $g$ and $g'$ are valid. So we turn to a localized version of the theorem.

Assume $x = g(x)$ has a solution $\alpha$, that both $g(x)$ and $g'(x)$ are continuous for all $x$ in some interval about $\alpha$, and that
$$|g'(\alpha)| < 1 \qquad (**)$$
Then for any sufficiently small number $\varepsilon > 0$, the interval $[a, b] = [\alpha - \varepsilon, \alpha + \varepsilon]$ will satisfy the hypotheses of the theorem.

This means that if (**) is true, and if we choose $x_0$ sufficiently close to $\alpha$, then the fixed point iteration $x_{n+1} = g(x_n)$ will converge, and the earlier results S1–S4 will all hold. The result does not tell us how close we need to be to $\alpha$ in order to have convergence.

NEWTON'S METHOD

Newton's method
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
is a fixed point iteration with
$$g(x) = x - \frac{f(x)}{f'(x)}$$
Check its convergence by checking the condition (**):
$$g'(x) = 1 - \frac{f'(x)}{f'(x)} + \frac{f(x)f''(x)}{[f'(x)]^2} = \frac{f(x)f''(x)}{[f'(x)]^2}$$
$$g'(\alpha) = 0$$
(assuming $f'(\alpha) \ne 0$, i.e., $\alpha$ is a simple root). Therefore Newton's method will converge if $x_0$ is chosen sufficiently close to $\alpha$.
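A minimal Python sketch of Newton's method (our own; the test problem $f(x) = x^2 - 2$, tolerance, and iteration cap are illustrative choices):

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:        # last correction is negligible
            return x
    return x                       # may not have converged

root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(root)   # ≈ 1.4142135623730951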

HIGHER ORDER METHODS

What happens when $g'(\alpha) = 0$? We use Taylor's theorem to answer this question. Begin by writing
$$g(x) = g(\alpha) + g'(\alpha)(x - \alpha) + \frac{1}{2} g''(c)(x - \alpha)^2$$
with $c$ between $x$ and $\alpha$. Substitute $x = x_n$, and recall that $g(x_n) = x_{n+1}$ and $g(\alpha) = \alpha$. Also assume $g'(\alpha) = 0$. Then
$$x_{n+1} = \alpha + \frac{1}{2} g''(c_n)(x_n - \alpha)^2$$
$$\alpha - x_{n+1} = -\frac{1}{2} g''(c_n)(\alpha - x_n)^2$$
with $c_n$ between $\alpha$ and $x_n$. Thus if $g'(\alpha) = 0$, the fixed point iteration is quadratically convergent or better. In fact, if $g''(\alpha) \ne 0$, then the iteration is exactly quadratically convergent.
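We can watch this quadratic error behavior numerically. Here is a sketch of our own using Newton's method (where $g'(\alpha) = 0$) on the illustrative problem $f(x) = x^2 - 2$; the number of correct digits roughly doubles at each step:

import math

alpha = math.sqrt(2)
x = 1.0
for n in range(6):
    x = x - (x * x - 2) / (2 * x)        # one Newton step
    print(f"{n}  error = {abs(alpha - x):.3e}")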

ANOTHER RAPID ITERATION

Newton's method is rapid, but requires use of the derivative $f'(x)$. Can we get by without this? The answer is yes! Consider the method
$$D_n = \frac{f(x_n + f(x_n)) - f(x_n)}{f(x_n)}$$
$$x_{n+1} = x_n - \frac{f(x_n)}{D_n}$$
This is an approximation to Newton's method, with $f'(x_n) \approx D_n$. To analyze its convergence, regard it as a fixed point iteration with
$$D(x) = \frac{f(x + f(x)) - f(x)}{f(x)}$$
$$g(x) = x - \frac{f(x)}{D(x)}$$
Then we can show that $g'(\alpha) = 0$ and $g''(\alpha) \ne 0$. So this new iteration is quadratically convergent.
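This derivative-free iteration is known as Steffensen's method. A short runnable sketch (our own; the same illustrative test problem as before):

def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Derivative-free quadratic iteration: D_n approximates f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:
            return x
        D = (f(x + fx) - fx) / fx      # divided-difference estimate of f'(x)
        step = fx / D
        x -= step
        if abs(step) < tol:
            return x
    return x

print(steffensen(lambda x: x * x - 2, 1.0))   # ≈ 1.4142135623730951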

FIXED POINT ITERATION: ERROR

Recall the result
$$\lim_{n \to \infty} \frac{\alpha - x_n}{\alpha - x_{n-1}} = g'(\alpha)$$
for the iteration
$$x_n = g(x_{n-1}), \qquad n = 1, 2, \dots$$
Thus
$$\alpha - x_n \approx \lambda(\alpha - x_{n-1}) \qquad (***)$$
with $\lambda = g'(\alpha)$ and $|\lambda| < 1$. If we were to know $\lambda$, then we could solve (***) for $\alpha$:
$$\alpha \approx \frac{x_n - \lambda x_{n-1}}{1 - \lambda}$$

Usually, we write this as a modification of the currently computed iterate $x_n$:
$$\alpha \approx \frac{x_n - \lambda x_{n-1}}{1 - \lambda} = \frac{x_n - \lambda x_n + \lambda x_n - \lambda x_{n-1}}{1 - \lambda} = x_n + \frac{\lambda}{1 - \lambda}(x_n - x_{n-1})$$
The formula
$$x_n + \frac{\lambda}{1 - \lambda}(x_n - x_{n-1})$$
is said to be an extrapolation of the numbers $x_{n-1}$ and $x_n$. But what is $\lambda$? From
$$\lim_{n \to \infty} \frac{\alpha - x_n}{\alpha - x_{n-1}} = g'(\alpha)$$
we have
$$\lambda \approx \frac{\alpha - x_n}{\alpha - x_{n-1}}$$

Unfortunately this also involves the unknown root $\alpha$ which we seek, so we must find some other way of estimating $\lambda$. To calculate $\lambda$, consider the ratio
$$\lambda_n = \frac{x_n - x_{n-1}}{x_{n-1} - x_{n-2}}$$
To see that this is approximately $\lambda$ as $x_n$ approaches $\alpha$, write
$$\frac{x_n - x_{n-1}}{x_{n-1} - x_{n-2}} = \frac{g(x_{n-1}) - g(x_{n-2})}{x_{n-1} - x_{n-2}} = g'(c_n)$$
with $c_n$ between $x_{n-1}$ and $x_{n-2}$. As the iterates approach $\alpha$, the number $c_n$ must also approach $\alpha$. Thus $\lambda_n \to \lambda$ as $x_n \to \alpha$.

Combining these results, we obtain the estimate
$$\widehat{x}_n = x_n + \frac{\lambda_n}{1 - \lambda_n}(x_n - x_{n-1}), \qquad \lambda_n = \frac{x_n - x_{n-1}}{x_{n-1} - x_{n-2}}$$
We call $\widehat{x}_n$ the Aitken extrapolate of $\{x_{n-2}, x_{n-1}, x_n\}$, and $\alpha \approx \widehat{x}_n$.

We can also rewrite this as
$$\alpha - x_n \approx \widehat{x}_n - x_n = \frac{\lambda_n}{1 - \lambda_n}(x_n - x_{n-1})$$
This is called Aitken's error estimation formula.

The accuracy of these procedures is tied directly to the accuracy of the formulas
$$\alpha - x_n \approx \lambda(\alpha - x_{n-1}), \qquad \alpha - x_{n-1} \approx \lambda(\alpha - x_{n-2})$$
If these are accurate, then so are the above extrapolation and error estimation formulas.

EXAMPLE

Consider the iteration
$$x_{n+1} = 6.28 + \sin(x_n), \qquad n = 0, 1, 2, \dots$$
for solving
$$x = 6.28 + \sin x$$
Iterates are shown in the accompanying table, including calculations of $\lambda_n$ and the error estimate
$$\alpha - x_n \approx \widehat{x}_n - x_n = \frac{\lambda_n}{1 - \lambda_n}(x_n - x_{n-1}) \qquad \text{(Estimate)}$$
The latter is called Estimate in the table. In this instance,
$$g'(\alpha) \approx 0.9644$$
and therefore the convergence is very slow. This is apparent in the table.

 n    x_n         x_n - x_{n-1}   λ_n      α - x_n    Estimate
 0   6.0000000                             1.55E-2
 1   6.0005845    5.845E-4                 1.49E-2
 2   6.0011458    5.613E-4        .9603    1.44E-2    1.36E-2
 3   6.0016848    5.390E-4        .9604    1.38E-2    1.31E-2
 4   6.0022026    5.178E-4        .9606    1.33E-2    1.26E-2
 5   6.0027001    4.974E-4        .9607    1.28E-2    1.22E-2
 6   6.0031780    4.780E-4        .9609    1.23E-2    1.17E-2
 7   6.0036374    4.593E-4        .9610    1.18E-2    1.13E-2
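A sketch (our own) of how the λ_n and Estimate columns are computed alongside the iteration:

import math

g = lambda x: 6.28 + math.sin(x)
xs = [6.0]                                               # x_0
for n in range(1, 8):
    xs.append(g(xs[-1]))
    if n >= 2:
        lam = (xs[n] - xs[n-1]) / (xs[n-1] - xs[n-2])    # λ_n
        estimate = lam / (1 - lam) * (xs[n] - xs[n-1])   # Aitken error estimate
        print(f"{n}  λ_n={lam:.4f}  Estimate={estimate:.3e}")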

AITKEN'S ALGORITHM

Step 1: Select $x_0$.
Step 2: Calculate
$$x_1 = g(x_0), \qquad x_2 = g(x_1)$$
Step 3: Calculate
$$x_3 = x_2 + \frac{\lambda_2}{1 - \lambda_2}(x_2 - x_1), \qquad \lambda_2 = \frac{x_2 - x_1}{x_1 - x_0}$$
Step 4: Calculate
$$x_4 = g(x_3), \qquad x_5 = g(x_4)$$
and calculate $x_6$ as the extrapolate of $\{x_3, x_4, x_5\}$. Continue this procedure, ad infinitum.

Of course, in practice we will have some kind of error test to stop this procedure when we believe we have sufficient accuracy.
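A runnable sketch of this algorithm (our own; the stopping tolerance and cycle cap are one reasonable choice of error test):

import math

def aitken(g, x0, tol=1e-10, max_cycles=50):
    """Aitken acceleration: two g-steps, then one extrapolation, repeated."""
    x = x0
    for _ in range(max_cycles):
        x1 = g(x)
        x2 = g(x1)
        if x1 == x or x2 == x1:             # already at a fixed point
            return x2
        lam = (x2 - x1) / (x1 - x)          # λ estimated from three iterates
        x_new = x2 + lam / (1 - lam) * (x2 - x1)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(aitken(lambda x: 6.28 + math.sin(x), 6.0))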

EXAMPLE

Consider again the iteration
$$x_{n+1} = 6.28 + \sin(x_n), \qquad n = 0, 1, 2, \dots$$
for solving
$$x = 6.28 + \sin x$$
Now we use the Aitken method, and the results are shown in the accompanying table. With this we have
$$\alpha - x_3 \approx 7.98 \times 10^{-4}, \qquad \alpha - x_6 \approx 2.27 \times 10^{-6}$$
In comparison, the original iteration had
$$\alpha - x_6 \approx 1.23 \times 10^{-2}$$

GENERAL COMMENTS

Aitken extrapolation can greatly accelerate the convergence of a linearly convergent iteration
$$x_{n+1} = g(x_n)$$
This shows the power of understanding the behaviour of the error in a numerical process. From that understanding, we can often improve the accuracy, through extrapolation or some other procedure. This is a justification for using mathematical analysis to understand numerical methods. We will see this repeated at later points in the course, and it holds with many different types of problems and numerical methods for their solution.