Maria Cameron

1. Fixed point methods for solving nonlinear equations

We address the problem of solving an equation of the form

(1) $r(x) = 0$,

where $r : \mathbb{R}^n \to \mathbb{R}^n$ is a vector-function. Eq. (1) can be written componentwise as
$$
\begin{aligned}
r_1(x_1, x_2, \ldots, x_n) &= 0,\\
r_2(x_1, x_2, \ldots, x_n) &= 0,\\
&\;\;\vdots\\
r_n(x_1, x_2, \ldots, x_n) &= 0.
\end{aligned}
$$
If the components $r_j$ of $r$ are differentiable, we define the Jacobian matrix

(2) $$
J(x) = \begin{pmatrix}
\dfrac{\partial r_1}{\partial x_1} & \dfrac{\partial r_1}{\partial x_2} & \cdots & \dfrac{\partial r_1}{\partial x_n}\\
\dfrac{\partial r_2}{\partial x_1} & \dfrac{\partial r_2}{\partial x_2} & \cdots & \dfrac{\partial r_2}{\partial x_n}\\
\vdots & \vdots & \ddots & \vdots\\
\dfrac{\partial r_n}{\partial x_1} & \dfrac{\partial r_n}{\partial x_2} & \cdots & \dfrac{\partial r_n}{\partial x_n}
\end{pmatrix}.
$$

Throughout the rest of the semester we will use the following theorem of calculus as a working tool for proving convergence theorems.

Theorem 1. Let $r : \mathbb{R}^n \to \mathbb{R}^m$ be differentiable in an open domain $\Omega \subset \mathbb{R}^n$ and let $x^* \in \Omega$. Then for all $x \in \Omega$ sufficiently close to $x^*$,

(3) $$
r(x) - r(x^*) = \int_0^1 J(x^* + t(x - x^*))\,(x - x^*)\,dt.
$$

As a warm-up, we start with methods for solving a single equation of one variable.

1.1. Some classic methods.

1.1.1. Bisection method. Suppose $f(x)$ is continuous on $[a, b]$ and $f(a)f(b) < 0$. Then there exists at least one zero of $f$ in $(a, b)$. The bisection method finds a root of $f(x) = 0$ as follows.

Input: $a$, $b$, tolerance $\epsilon$
Output: $x$, a root of $f(x)$ in $(a, b)$
Do
  Set $c = (a + b)/2$
  If $f(c) = 0$, return $c$
  Else if $f(a)f(c) < 0$, set $b = c$
  Else set $a = c$
While $b - a \ge \epsilon$
Set $x = c$
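The pseudocode above translates directly into a short program. Here is a minimal Python sketch; the function name bisect and the sample equation $x - \cos x = 0$ are illustrative choices, not part of the notes.

import math

def bisect(f, a, b, eps=1e-10):
    """Find a root of f in (a, b), assuming f is continuous and f(a)*f(b) < 0."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a >= eps:
        c = 0.5 * (a + b)
        if f(c) == 0.0:
            return c                  # landed exactly on a root
        if f(a) * f(c) < 0:
            b = c                     # a zero lies in (a, c)
        else:
            a = c                     # a zero lies in (c, b)
    return 0.5 * (a + b)

# Example: solve x - cos(x) = 0 on [0, 1].
print(bisect(lambda x: x - math.cos(x), 0.0, 1.0))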

This algorithm is robust and globally convergent, and it does not require more smoothness from $f$ than continuity. Its shortcoming is that it is slow: the error bound $|x^* - c|$ is reduced only by a factor of 2 with each iteration.

1.1.2. Fixed point methods for a single equation. The basic idea of fixed point methods consists in finding an iteration function $T(x)$ such that (i) the zero $x^*$ of $f(x)$ satisfies $T(x^*) = x^*$, and (ii) $T(x)$ generates successive approximations to the solution, $x_{n+1} = T(x_n)$, starting from a provided initial approximation $x_0$.

Theorem 2. Let $T(x)$ be a continuous and differentiable function on $[a, b]$ such that $T([a, b]) \subset [a, b]$ and $|T'(x)| \le M < 1$ for all $x \in [a, b]$. Then there exists a unique $x^* \in [a, b]$ such that $T(x^*) = x^*$, and for any $x_0 \in [a, b]$ the sequence $x_{n+1} = T(x_n)$, $n = 0, 1, 2, \ldots$, converges to $x^*$. The error after the $n$-th iteration is bounded by
$$
|x_n - x^*| \le \frac{M^n}{1 - M}\,|x_1 - x_0|.
$$

Prior to proving this theorem, let us see what sequences $x_{n+1} = T(x_n)$ are generated in the case of a linear map $T$. Four different cases are illustrated in Fig. 1. We see that if $|T'| < 1$ the sequence converges to the fixed point, while if $|T'| > 1$ it diverges. If $T' > 0$ the sequence is monotone, while if $T' < 0$ the sequence oscillates.

[Figure 1. Examples of sequences $x_{n+1} = T(x_n)$ for a linear map $T$: panels (a)-(d) show the four cases.]

Proof. (1) Uniqueness. Suppose there are two fixed points: $y = T(y)$ and $z = T(z)$. Then applying the mean value theorem we obtain
$$
|y - z| = |T(y) - T(z)| = |T'(\zeta)|\,|y - z| \le M|y - z| < |y - z|,
$$
a contradiction. Hence there is at most one fixed point in $[a, b]$.

(2) Existence. Let us show that the sequence $x_{n+1} = T(x_n)$ is Cauchy. We have
$$
|x_{n+k} - x_n| = |T(x_{n+k-1}) - T(x_{n-1})| \le M|x_{n+k-1} - x_{n-1}| \le \ldots \le M^n|x_k - x_0| \le M^n(b - a).
$$
This difference can be made arbitrarily small as soon as $n$ is large enough. Hence the sequence is Cauchy, and therefore it converges.

(3) Error bound. Let $x^*$ be the fixed point. Then
$$
|x_n - x^*| = |T(x_{n-1}) - T(x^*)| \le M|x_{n-1} - x^*| \le \ldots \le M^n|x_0 - x^*|.
$$
On the other hand,
$$
|x_0 - x^*| \le |x_0 - x_1| + |x_1 - x^*| \le |x_0 - x_1| + M|x_0 - x^*|.
$$
Hence,
$$
(1 - M)|x_0 - x^*| \le |x_0 - x_1|.
$$
Therefore we get the following error bound:
$$
|x_n - x^*| \le M^n|x_0 - x^*| \le \frac{M^n}{1 - M}\,|x_0 - x_1|. \qquad \square
$$
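Here is a minimal Python sketch of the scalar fixed point iteration of Theorem 2; the function name fixed_point and the sample map $T(x) = \cos x$ are illustrative assumptions, not from the notes.

import math

def fixed_point(T, x0, tol=1e-12, max_iter=100):
    """Iterate x_{n+1} = T(x_n) until successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed point iteration did not converge")

# Example: T(x) = cos(x) on [0, 1].
print(fixed_point(math.cos, 1.0))

The example satisfies the hypotheses of Theorem 2: on $[0, 1]$ we have $T([0, 1]) \subset [\cos 1, 1] \subset [0, 1]$ and $|T'(x)| = |\sin x| \le \sin 1 < 1$.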

1.1.3. Newton-Raphson method. The idea of the Newton-Raphson method for solving $f(x) = 0$ is to draw a tangent line to the graph of $y = f(x)$ through the point $(x_n, f(x_n))$, where $x_n$ is the current approximation, and to set $x_{n+1}$ to its intersection with the $x$ axis. The equation of the tangent line is
$$
y = f(x_n) + f'(x_n)(x - x_n).
$$
Hence
$$
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.
$$
Therefore the iteration function is
$$
T(x) := x - \frac{f(x)}{f'(x)}.
$$
As is well known, this method is not globally convergent, and its behavior can be hard to predict if the initial guess is not close enough to the solution. However, it converges rapidly for a close enough initial approximation. To quantify this, we consider
$$
T'(x) = 1 - \frac{f'(x)f'(x) - f(x)f''(x)}{(f'(x))^2} = \frac{f(x)f''(x)}{(f'(x))^2}.
$$
Therefore, if $f'(x^*) \ne 0$, then $T'(x) \to 0$ as $x \to x^*$. Hence $|T'(x)| < 1$ in some neighborhood of $x^*$, which guarantees that the Newton-Raphson method converges there.

1.2. Rates of convergence. The performance of any zero-finding or optimization algorithm is characterized by (i) its ability to find a solution (global or local convergence) and (ii) its rate of convergence. The conceptually simplest definition of the rate of convergence is the following.

Definition 1. Let $\{x_n\}_{n=0}^{\infty}$ be a sequence which converges to $x^*$ and such that $x_n \ne x^*$ for $n \ge N$. We say that the sequence converges with Q-order $p \ge 1$ if there exists a constant $C > 0$ such that

(4) $$
\lim_{n \to \infty} \frac{|e_{n+1}|}{|e_n|^p} = C, \quad \text{where } e_n = x_n - x^*, \quad n \ge N.
$$

$C$ is called the asymptotic error constant. Q stands for quotient.

Now we determine the order of convergence for a fixed point method. We define $e_n := x_n - x^*$ and consider the Taylor expansion
$$
x_{n+1} = T(x_n) = T(x^* + e_n) = T(x^*) + T'(x^*)e_n + \tfrac{1}{2}T''(x^*)e_n^2 + \ldots.
$$
If $T^{(p)}(x^*)$ is the first non-vanishing derivative of $T$ at $x^*$, we have
$$
e_{n+1} = \frac{T^{(p)}(x^*)}{p!}\,e_n^p + \ldots.
$$
Hence
$$
\lim_{n \to \infty} \frac{|e_{n+1}|}{|e_n|^p} = \frac{|T^{(p)}(x^*)|}{p!}.
$$
In this case we say that the method has Q-order of convergence $p$.
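A short Python sketch illustrating the discussion above: for a simple root, where $f'(x^*) \ne 0$, the Newton-Raphson error is roughly squared at each step. The test function is an illustrative assumption.

import math

# Example: f(x) = x^2 - 2 with simple root x* = sqrt(2).
f      = lambda x: x * x - 2.0
fprime = lambda x: 2.0 * x
x_star = math.sqrt(2.0)

x = 1.5                                # initial approximation
e_prev = abs(x - x_star)
for n in range(4):                     # further steps hit machine precision
    x = x - f(x) / fprime(x)           # Newton-Raphson step
    e = abs(x - x_star)
    print(n + 1, e, e / e_prev**2)     # ratio tends to 1/(2*sqrt(2)), about 0.354
    e_prev = e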

For the Newton-Raphson method, $T'(x^*) = 0$ and
$$
T''(x^*) = \frac{f''(x^*)f'(x^*)}{(f'(x^*))^2} = \frac{f''(x^*)}{f'(x^*)}.
$$
Hence, if $f'(x^*) \ne 0$, the Newton-Raphson method converges Q-quadratically, and the asymptotic error constant is
$$
C = \frac{|f''(x^*)|}{2|f'(x^*)|}.
$$

Exercise. Show that if $f(x^*) = 0$ and $f'(x^*) = 0$, then the Newton-Raphson method converges Q-linearly and the asymptotic error constant is $\tfrac{1}{2}$.

Definition 1 is somewhat restrictive. What if our method makes more significant progress every second step, or each time we do some kind of reinitialization? To characterize these cases, Definition 1 has been generalized to:

Definition 2. Let $\{x_n\}_{n=0}^{\infty}$ be a sequence which converges to $x^*$ and such that $x_n \ne x^*$ for $n \ge N$. We say that the sequence converges with Q-order $p \ge 1$ if there exists a constant $C > 0$ such that

(5) $$
|x_{n+1} - x^*| \le C|x_n - x^*|^p \quad \text{for sufficiently large } n.
$$

In particular, $x_n \to x^*$ Q-linearly if there exists a constant $C \in (0, 1)$ such that

(6) $$
|x_{n+1} - x^*| \le C|x_n - x^*| \quad \text{for sufficiently large } n.
$$

We say that $x_n \to x^*$ Q-superlinearly if

(7) $$
\lim_{n \to \infty} \frac{|x_{n+1} - x^*|}{|x_n - x^*|} = 0.
$$

We say that $x_n \to x^*$ Q-quadratically if there exists a constant $C > 0$ such that

(8) $$
|x_{n+1} - x^*| \le C|x_n - x^*|^2 \quad \text{for sufficiently large } n.
$$

The prefix Q is often omitted. It was introduced to distinguish the convergence described in Definition 2 from a weaker form of convergence called R-convergence. R stands for root. This kind of rate is concerned with the overall rate of decrease of the error rather than the decrease over a single step of the algorithm.

Definition 3. We say that a sequence $\{x_n\}$ converges R-linearly to $x^*$ if there exists a sequence of nonnegative numbers $\{\nu_n\}_{n=0}^{\infty}$ such that
$$
|x_n - x^*| \le \nu_n, \quad n \ge N,
$$
and $\nu_n \to 0$ Q-linearly.

Similarly, we can define R-superlinear convergence and R-convergence of order $p$. This definition is useful when the convergence is non-monotone, or when from time to time the method fails to make progress over a step.
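A brief numerical check of the exercise above, in Python: at a double root, where $f'(x^*) = 0$, the Newton-Raphson method converges Q-linearly in the sense of (6) with constant $\tfrac{1}{2}$. The test function is an illustrative assumption.

# Example: f(x) = (x - 1)^2 has a double root at x* = 1, so f'(x*) = 0.
f      = lambda x: (x - 1.0) ** 2
fprime = lambda x: 2.0 * (x - 1.0)

x = 2.0                                # initial approximation
e_prev = abs(x - 1.0)
for n in range(8):
    x = x - f(x) / fprime(x)           # Newton-Raphson step
    e = abs(x - 1.0)
    print(n + 1, e, e / e_prev)        # ratio e_{n+1}/e_n equals 1/2 here
    e_prev = e

Indeed, for $f(x) = (x - 1)^2$ the iteration reduces to $x_{n+1} = (x_n + 1)/2$, so the error is halved exactly at each step.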

Example. The sequence
$$
x_n = \begin{cases} 2^{-n}, & n \text{ even},\\ 2^{-n-1}, & n \text{ odd}, \end{cases}
$$
converges R-linearly to zero. This sequence is
$$
1, \; \tfrac{1}{4}, \; \tfrac{1}{4}, \; \tfrac{1}{16}, \; \tfrac{1}{16}, \; \tfrac{1}{64}, \; \tfrac{1}{64}, \; \ldots.
$$
As you see, every second step we fail to make progress toward zero; nevertheless, $|x_n| \le \nu_n := 2^{-n}$ and $\nu_n \to 0$ Q-linearly.

1.3. Fixed point iteration in $\mathbb{R}^n$.

Definition 4. Let $\Omega \subset \mathbb{R}^n$ and let $T : \Omega \to \mathbb{R}^n$. $T$ is Lipschitz-continuous on $\Omega$ with constant $M$ if
$$
\|T(x) - T(y)\| \le M\|x - y\| \quad \text{for all } x, y \in \Omega.
$$
$T$ is a contraction if $M < 1$.

The standard result for the fixed point iteration is the Contraction Mapping Theorem.

Theorem 3. Let $\Omega$ be a closed subset of $\mathbb{R}^n$ and let $T$ be a contraction with constant $M < 1$ such that $T(\Omega) \subset \Omega$. Then there exists a unique fixed point $x^* \in \Omega$, and the sequence of iterates $x_{n+1} = T(x_n)$ converges Q-linearly, with asymptotic error constant $M$, to $x^*$ for all initial iterates $x_0 \in \Omega$.

The proof of this theorem repeats the proof of Theorem 2.
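To close, here is a minimal Python sketch of fixed point iteration in $\mathbb{R}^n$ under the hypotheses of Theorem 3; the function name fixed_point_nd and the affine map below are illustrative assumptions, not from the notes.

import numpy as np

def fixed_point_nd(T, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = T(x_n) in R^n until the step norm is below tol."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = T(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed point iteration did not converge")

# Example: T(x) = A x + b is a contraction on all of R^2 since ||A||_2 = 0.5 < 1;
# its unique fixed point solves (I - A) x = b.
A = np.array([[0.5, 0.0], [0.0, 0.25]])
b = np.array([1.0, 1.0])
print(fixed_point_nd(lambda x: A @ x + b, np.zeros(2)))   # approx. [2.0, 1.3333]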