Accelerating Convergence


Accelerating Convergence
MATH 375 Numerical Analysis
J. Robert Buchanan
Department of Mathematics
Fall 2013

Motivation
We have seen that most fixed-point methods for root finding converge only linearly to a solution. Today we describe a technique which can be used to accelerate the convergence of any linearly convergent sequence.

Linear Convergence
Suppose the sequence $\{p_n\}_{n=0}^{\infty}$ converges linearly to $p$, i.e.,
$$0 < \lim_{n \to \infty} \frac{p_{n+1} - p}{p_n - p} = \lambda < 1.$$
Assume the signs of $p_{n+2} - p$, $p_{n+1} - p$, and $p_n - p$ are all the same. For large $n$ then
$$\frac{p_{n+2} - p}{p_{n+1} - p} \approx \frac{p_{n+1} - p}{p_n - p}.$$
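To make the linear-convergence hypothesis concrete, the following sketch (Python, not part of the original slides) estimates the asymptotic error constant for the fixed-point iteration $p_{n+1} = g(p_n)$ with $g(x) = 3^{-x}$, the same iteration used in the Steffensen example later in these slides; the absolute error ratios settle near $\lambda = |g'(p)| \approx 0.60$.

```python
# Fixed-point iteration p_{n+1} = g(p_n) with g(x) = 3**(-x); it converges
# linearly to the fixed point p of g (see the Steffensen example below).
g = lambda x: 3.0 ** (-x)

p = [0.1]
for _ in range(60):
    p.append(g(p[-1]))
limit = p[-1]  # after 60 iterations this is an accurate stand-in for p

# The absolute error ratios |p_{n+1} - p| / |p_n - p| approach lambda < 1.
for n in range(5, 10):
    print(n, abs(p[n + 1] - limit) / abs(p[n] - limit))  # ratios near 0.60
```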

Re-writing the Sequence (1 of 2)
Solve for $p$.
$$\frac{p_{n+2} - p}{p_{n+1} - p} \approx \frac{p_{n+1} - p}{p_n - p}$$
$$(p_{n+1} - p)^2 \approx (p_{n+2} - p)(p_n - p)$$
$$p \approx \frac{p_{n+2}\,p_n - p_{n+1}^2}{p_{n+2} - 2p_{n+1} + p_n}$$

Re-writing the Sequence (2 of 2)
$$p \approx \frac{p_{n+2}\,p_n - p_{n+1}^2}{p_{n+2} - 2p_{n+1} + p_n}$$
Add and subtract the terms $p_n^2$ and $2p_n p_{n+1}$ in the numerator.
$$p \approx \frac{-p_{n+1}^2 + 2p_n p_{n+1} - p_n^2 + p_{n+2}\,p_n + p_n^2 - 2p_n p_{n+1}}{p_{n+2} - 2p_{n+1} + p_n}$$
Now factor the numerator.
$$p \approx \frac{-(p_{n+1} - p_n)^2 + p_n(p_{n+2} - 2p_{n+1} + p_n)}{p_{n+2} - 2p_{n+1} + p_n} = p_n - \frac{(p_{n+1} - p_n)^2}{p_{n+2} - 2p_{n+1} + p_n}$$
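The rearrangement above is easy to check symbolically. Here is a minimal sketch (using sympy, an assumption not in the slides) confirming that the factored form agrees with the expression obtained by solving for $p$.

```python
import sympy as sp

pn, pn1, pn2 = sp.symbols('p_n p_np1 p_np2')

# Expression obtained by solving (p_{n+1} - p)^2 = (p_{n+2} - p)(p_n - p) for p.
solved = (pn2 * pn - pn1**2) / (pn2 - 2 * pn1 + pn)

# Form reached after adding/subtracting p_n^2 and 2*p_n*p_{n+1} and factoring.
factored = pn - (pn1 - pn)**2 / (pn2 - 2 * pn1 + pn)

print(sp.simplify(solved - factored))  # prints 0: the two forms are identical
```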

Aitken's Δ² Method
Define a new sequence $\{\hat{p}_n\}_{n=0}^{\infty}$ as
$$\hat{p}_n = p_n - \frac{(p_{n+1} - p_n)^2}{p_{n+2} - 2p_{n+1} + p_n}.$$
We would like to show that
$$\lim_{n \to \infty} \hat{p}_n = p = \lim_{n \to \infty} p_n,$$
and that the sequence $\{\hat{p}_n\}_{n=0}^{\infty}$ converges to $p$ faster than the sequence $\{p_n\}_{n=0}^{\infty}$. This iterative method is known as Aitken's Δ² Method.

Example
Define $p_n = \sin\left(\frac{1}{n}\right)$ for $n \geq 1$. Find the limit of this sequence and compare its convergence to that of the sequence $\hat{p}_n$.

n    p_n         p̂_n
1    0.841471    0.216744
2    0.479426    0.159517
3    0.327195    0.122193
4    0.247404    0.098604
5    0.198669    0.082537
6    0.165896    0.070932
7    0.142372    0.062169
8    0.124675    0.055324
9    0.110883
10   0.099833
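Both sequences converge to 0, and the accelerated sequence reaches small values much sooner. A minimal Python sketch (not from the slides) reproduces the table by applying Aitken's Δ² formula to $p_n = \sin(1/n)$:

```python
import math

# Original sequence p_n = sin(1/n) for n = 1, ..., 10; it converges slowly to 0.
p = [math.sin(1.0 / n) for n in range(1, 11)]

# Aitken's Delta^2 sequence; each term needs three consecutive values of p.
p_hat = [p[i] - (p[i + 1] - p[i]) ** 2 / (p[i + 2] - 2 * p[i + 1] + p[i])
         for i in range(len(p) - 2)]

for n, (a, b) in enumerate(zip(p, p_hat), start=1):
    print(f"{n:2d}  {a:.6f}  {b:.6f}")  # matches the table above
```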

Forward Difference
Definition
Given the sequence $\{p_n\}_{n=0}^{\infty}$, the forward difference, denoted $\Delta p_n$, is defined as
$$\Delta p_n = p_{n+1} - p_n, \quad \text{for } n \geq 0.$$
Higher powers of the operator $\Delta$ are defined recursively as
$$\Delta^k p_n = \Delta\left(\Delta^{k-1} p_n\right), \quad \text{for } k \geq 2.$$

Forward Difference (2 of 2)
$$\begin{aligned}
\Delta p_n &= p_{n+1} - p_n \\
\Delta^2 p_n &= \Delta(p_{n+1} - p_n) = (p_{n+2} - p_{n+1}) - (p_{n+1} - p_n) = p_{n+2} - 2p_{n+1} + p_n \\
\Delta^3 p_n &= \Delta(p_{n+2} - 2p_{n+1} + p_n) = p_{n+3} - 3p_{n+2} + 3p_{n+1} - p_n \\
&\;\;\vdots
\end{aligned}$$
We can write Aitken's Δ² Method as
$$\hat{p}_n = p_n - \frac{(\Delta p_n)^2}{\Delta^2 p_n}.$$

Convergence of Aitken's Δ² Method
Theorem
Let $\{p_n\}_{n=0}^{\infty}$ be a sequence converging linearly to $p$ with asymptotic error constant $\lambda < 1$. The sequence $\{\hat{p}_n\}_{n=0}^{\infty}$ converges to $p$ faster than $\{p_n\}_{n=0}^{\infty}$ in the sense that
$$\lim_{n \to \infty} \frac{\hat{p}_n - p}{p_n - p} = 0.$$

Proof (1 of 5)
By assumption $\{p_n\}_{n=0}^{\infty}$ converges linearly to $p$, i.e.,
$$0 < \lim_{n \to \infty} \frac{p_{n+1} - p}{p_n - p} = \lambda < 1.$$
Assume the signs of $p_k - p$ are all the same for sufficiently large $k$. Define a new sequence
$$\delta_n = \frac{p_{n+1} - p}{p_n - p} - \lambda \quad \text{for } n \geq 0.$$
Note that $\lim_{n \to \infty} \delta_n = 0$.

Proof (2 of 5)
Consider
$$\begin{aligned}
\frac{\hat{p}_n - p}{p_n - p}
&= \frac{1}{p_n - p}\left(p_n - \frac{(\Delta p_n)^2}{\Delta^2 p_n} - p\right) \\
&= \frac{1}{p_n - p}\left((p_n - p) - \frac{(\Delta p_n)^2}{\Delta^2 p_n}\right) \\
&= 1 - \frac{(\Delta p_n)^2}{(p_n - p)\,\Delta^2 p_n} \\
&= 1 - \frac{(p_{n+1} - p_n)^2}{(p_n - p)(p_{n+2} - 2p_{n+1} + p_n)} \\
&= 1 - \frac{\bigl((p_{n+1} - p) - (p_n - p)\bigr)^2}{(p_n - p)\bigl((p_{n+2} - p) - 2(p_{n+1} - p) + (p_n - p)\bigr)}
\end{aligned}$$

Proof (3 of 5)
Recall
$$\frac{p_{n+1} - p}{p_n - p} = \delta_n + \lambda \quad \Longrightarrow \quad p_{n+1} - p = (\delta_n + \lambda)(p_n - p).$$
This implies
$$\begin{aligned}
p_2 - p &= (\delta_1 + \lambda)(p_1 - p) \\
p_3 - p &= (\delta_2 + \lambda)(p_2 - p) = (\delta_1 + \lambda)(\delta_2 + \lambda)(p_1 - p) \\
&\;\;\vdots \\
p_{n+1} - p &= (p_1 - p)\prod_{i=1}^{n}(\delta_i + \lambda).
\end{aligned}$$

Proof (4 of 5)
Consequently
$$\begin{aligned}
\frac{\hat{p}_n - p}{p_n - p}
&= 1 - \frac{\bigl((p_{n+1} - p) - (p_n - p)\bigr)^2}{(p_n - p)\bigl((p_{n+2} - p) - 2(p_{n+1} - p) + (p_n - p)\bigr)} \\
&= 1 - \frac{\Bigl((p_1 - p)\prod_{i=1}^{n}(\delta_i + \lambda) - (p_1 - p)\prod_{i=1}^{n-1}(\delta_i + \lambda)\Bigr)^2}{(p_1 - p)\prod_{i=1}^{n-1}(\delta_i + \lambda)\Bigl((p_1 - p)\prod_{i=1}^{n+1}(\delta_i + \lambda) - 2(p_1 - p)\prod_{i=1}^{n}(\delta_i + \lambda) + (p_1 - p)\prod_{i=1}^{n-1}(\delta_i + \lambda)\Bigr)} \\
&= 1 - \frac{\Bigl(\prod_{i=1}^{n}(\delta_i + \lambda) - \prod_{i=1}^{n-1}(\delta_i + \lambda)\Bigr)^2}{\prod_{i=1}^{n-1}(\delta_i + \lambda)\Bigl(\prod_{i=1}^{n+1}(\delta_i + \lambda) - 2\prod_{i=1}^{n}(\delta_i + \lambda) + \prod_{i=1}^{n-1}(\delta_i + \lambda)\Bigr)} \\
&= 1 - \frac{\Bigl(\prod_{i=1}^{n-1}(\delta_i + \lambda)\Bigr)^2 (\delta_n + \lambda - 1)^2}{\Bigl(\prod_{i=1}^{n-1}(\delta_i + \lambda)\Bigr)^2 \bigl((\delta_{n+1} + \lambda)(\delta_n + \lambda) - 2(\delta_n + \lambda) + 1\bigr)}
\end{aligned}$$

Proof (5 of 5)
$$\frac{\hat{p}_n - p}{p_n - p}
= 1 - \frac{\Bigl(\prod_{i=1}^{n-1}(\delta_i + \lambda)\Bigr)^2 (\delta_n + \lambda - 1)^2}{\Bigl(\prod_{i=1}^{n-1}(\delta_i + \lambda)\Bigr)^2 \bigl((\delta_{n+1} + \lambda)(\delta_n + \lambda) - 2(\delta_n + \lambda) + 1\bigr)}
= 1 - \frac{(\delta_n + \lambda - 1)^2}{(\delta_{n+1} + \lambda)(\delta_n + \lambda) - 2(\delta_n + \lambda) + 1}$$
Taking the limit as $n \to \infty$:
$$\lim_{n \to \infty} \frac{\hat{p}_n - p}{p_n - p}
= \lim_{n \to \infty}\left[1 - \frac{(\delta_n + \lambda - 1)^2}{(\delta_{n+1} + \lambda)(\delta_n + \lambda) - 2(\delta_n + \lambda) + 1}\right]
= 1 - \frac{(\lambda - 1)^2}{\lambda^2 - 2\lambda + 1} = 0.$$

Steffensen's Method
We can use Aitken's Δ² Method to accelerate the convergence of any linearly convergent sequence generated by fixed-point iteration. Consider the fixed-point problem $g(x) = x$ and an initial approximation $p_0$. Calculate
$$\begin{aligned}
p_1 &= g(p_0) & p_2 &= g(p_1) & \hat{p}_0 &= \{\Delta^2\}(p_0) \\
p_4 &= g(\hat{p}_0) & p_5 &= g(p_4) & \hat{p}_1 &= \{\Delta^2\}(\hat{p}_0) \\
p_7 &= g(\hat{p}_1) & p_8 &= g(p_7) & \hat{p}_2 &= \{\Delta^2\}(\hat{p}_1)
\end{aligned}$$
Here $\{\Delta^2\}(q)$ denotes Aitken's Δ² formula applied to $q$ and the two fixed-point iterates computed from it. Every third term is calculated using the Δ² method; all other terms use fixed-point iteration.

Algorithm
INPUT initial approximation $p_0$, tolerance $\epsilon$, maximum iterations $N$.
STEP 1 Set $i = 1$.
STEP 2 While $i \leq N$ do STEPS 3-6.
STEP 3 Set $p_1 = g(p_0)$; $p_2 = g(p_1)$; $p = \{\Delta^2\}(p_0)$.
STEP 4 If $|p - p_0| < \epsilon$ then OUTPUT $p$; STOP.
STEP 5 Set $i = i + 1$.
STEP 6 Set $p_0 = p$.
STEP 7 OUTPUT "Method failed after $N$ iterations."; STOP.
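A direct Python translation of the algorithm above (a minimal sketch; the function name and default tolerance are my own choices, and no safeguard is included for a zero denominator in the Δ² formula):

```python
def steffensen(g, p0, eps=1e-8, N=100):
    # Fixed-point iteration accelerated with Aitken's Delta^2 (Steffensen's Method).
    for _ in range(N):                                    # STEP 2
        p1 = g(p0)                                        # STEP 3
        p2 = g(p1)
        p = p0 - (p1 - p0) ** 2 / (p2 - 2 * p1 + p0)      # p = {Delta^2}(p0)
        if abs(p - p0) < eps:                             # STEP 4
            return p
        p0 = p                                            # STEPS 5-6
    raise RuntimeError(f"Method failed after {N} iterations")  # STEP 7
```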

Example
Use Steffensen's Method to approximate the solution of $3^{-x} - x = 0$ (i.e., the fixed point of $g(x) = 3^{-x}$) for $x \in [0, 1]$.

n    Fixed-point    Steffensen's
0    0.100000       0.100000
1    0.895958       0.580610
2    0.373697       0.547940
3    0.663287       0.547809
4    0.482538       0.547809
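Applying the steffensen sketch given after the algorithm to this problem reproduces the right-hand column of the table (a usage example, not part of the slides):

```python
root = steffensen(lambda x: 3.0 ** (-x), 0.1)
print(root)  # about 0.547809, matching the Steffensen's column of the table
```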

Final Result
Remark: Steffensen's Method appears to generate a quadratically convergent sequence for root finding without requiring computation of a derivative.

Theorem
Suppose $g(x) = x$ has a solution $p$ with $g'(p) \neq 1$. If there exists $\delta > 0$ such that $g \in C^3[p - \delta, p + \delta]$, then Steffensen's Method gives quadratic convergence for any $p_0 \in [p - \delta, p + \delta]$.
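One informal way to see the quadratic convergence numerically (a hedged sketch, not a substitute for the theorem) is to track the Steffensen iterates for $g(x) = 3^{-x}$ from the example and observe that the error is roughly squared at each step:

```python
g = lambda x: 3.0 ** (-x)

# High-accuracy reference for the fixed point, obtained by iterating g many times.
p_ref = 0.5
for _ in range(200):
    p_ref = g(p_ref)

# Steffensen iterates starting from p0 = 0.1, as in the example above.
p = 0.1
for k in range(4):
    p1, p2 = g(p), g(g(p))
    p = p - (p1 - p) ** 2 / (p2 - 2 * p1 + p)
    print(k, abs(p - p_ref))  # errors shrink roughly quadratically until roundoff
```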

Homework Read Section 2.5. Exercises: 1, 2, 7, 9, 11