Root Finding (and Optimisation)


Root Finding (and Optimisation)
M.Sc. in Mathematical Modelling & Scientific Computing
Practical Numerical Analysis, Michaelmas Term 2018, Lecture 4

Root Finding

The idea of root finding is simple: we want to find x such that f(x) = 0. Actually finding such roots is not so simple. For example, if f(x) is a cubic polynomial

f(x) = ax^3 + bx^2 + cx + d,

then

x = \sqrt[3]{\left(-\frac{b^3}{27a^3} + \frac{bc}{6a^2} - \frac{d}{2a}\right) + \sqrt{\left(-\frac{b^3}{27a^3} + \frac{bc}{6a^2} - \frac{d}{2a}\right)^2 + \left(\frac{c}{3a} - \frac{b^2}{9a^2}\right)^3}}
  + \sqrt[3]{\left(-\frac{b^3}{27a^3} + \frac{bc}{6a^2} - \frac{d}{2a}\right) - \sqrt{\left(-\frac{b^3}{27a^3} + \frac{bc}{6a^2} - \frac{d}{2a}\right)^2 + \left(\frac{c}{3a} - \frac{b^2}{9a^2}\right)^3}}
  - \frac{b}{3a}.

Root Finding

If f(x) is a quintic polynomial

f(x) = ax^5 + bx^4 + cx^3 + dx^2 + ex + f,

then there is a closed-form solution for the roots iff its Galois group is solvable (Galois, 1846). Closed-form solutions to the general root-finding problem may: not exist; be unstable; require evaluation of special functions. In such cases numerical methods help!

Relation to Optimisation

We know that if we want to maximise or minimise a function g(x) then the maximum or minimum occurs at a turning point of g (or on the boundary of the domain), so that

f_i := \frac{\partial g}{\partial x_i} = 0.

In other words, we have a root-finding problem, f(x) = 0.

Univariate Root Finding

Bisection Algorithm

In 1D the problem is: given f : [a, b] → R, find c ∈ [a, b] such that f(c) = 0. The bisection algorithm relies on the intermediate value theorem, which states: if f : [a, b] → R is continuous and f(a) < u < f(b) or f(b) < u < f(a), then there exists c ∈ (a, b) such that f(c) = u. Thus, to find a root of f(x) we find a and b such that f(a) and f(b) have opposite sign. We set c = (a + b)/2, compute f(c) and then repeat on the half of the interval where the sign changes.

Bisection Algorithm

[Figure: one bisection step on [0, 1], showing f(a) < 0, f(c) > 0 and f(b) > 0.]

Bisection Algorithm

We need to decide when to terminate the algorithm; the usual choices are one of: b - a < tol (the solution is only changing by a very small amount); |f(c)| < tol (the residual is very small). For convergence, since c ∈ [a, b] the maximum error in c is b - a (i.e. the width of the interval). Since each step halves the length of the interval, the error at the nth iteration satisfies

|c_n - c| ≤ (b - a)/2^n.

Thus to achieve an accuracy of tol requires

n = (log(b - a) - log(tol)) / log 2

steps of the bisection algorithm.
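The bisection step and the interval-width stopping criterion above can be sketched as follows (a minimal illustration; the function name `bisect` and its signature are our own choices, not from the lecture):

```python
def bisect(f, a, b, tol=1e-10):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    # Terminate on the b - a < tol criterion from the slide above.
    while b - a > tol:
        c = 0.5 * (a + b)
        fc = f(c)
        if fa * fc <= 0:      # sign change in [a, c]
            b, fb = c, fc
        else:                 # sign change in [c, b]
            a, fa = c, fc
    return 0.5 * (a + b)
```

For example, `bisect(lambda x: x**2 - 2, 0.0, 2.0)` brackets the root sqrt(2), halving the interval roughly 35 times to reach the default tolerance, consistent with the n = (log(b - a) - log(tol)) / log 2 count above.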

Regula Falsi

The regula falsi algorithm is similar to the bisection algorithm except that it uses a potentially more intelligent guess for c. Suppose we approximate the function f on [a, b] by its linear interpolant

g(x) = \frac{x - a}{b - a} f(b) + \frac{b - x}{b - a} f(a).

Then g(x) has a root at

c = \frac{a f(b) - b f(a)}{f(b) - f(a)}.

The regula falsi algorithm uses this as c in the bisection algorithm.
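Plain regula falsi is then bisection with the interpolant's root as c (a minimal sketch; the name `regula_falsi`, the residual stopping rule and the iteration cap are our own choices):

```python
def regula_falsi(f, a, b, tol=1e-10, max_iter=200):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # root of the linear interpolant
        fc = f(c)
        if abs(fc) < tol:                   # terminate on a small residual
            return c
        if fa * fc < 0:     # sign change in [a, c]
            b, fb = c, fc
        else:               # sign change in [c, b]
            a, fa = c, fc
    return c
```

Unlike bisection, the interval width need not shrink to zero (one endpoint can stay fixed), which is why a residual-based stopping criterion is used here.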

Regula Falsi Example: When it Works Well

[Figure: f(x) = (x - 0.25)(4 - x) on [0, 1]. Blue dots correspond to the standard bisection algorithm, red dots to regula falsi.]

Regula Falsi Example: When it Works Badly

Convergence can be slow when the curvature of f is large (when the linear approximation is not good).

[Figure: f(x) = x^10 - 1/10 on [0, 1]. Blue dots correspond to the standard bisection algorithm, red dots to regula falsi.]

Regula Falsi Fix

There is a simple fix. If the algorithm goes right twice in a row then halve f(b), so

c = \frac{a f(b)/2 - b f(a)}{f(b)/2 - f(a)}.

If the algorithm goes left twice in a row then halve f(a). This method is sometimes called the Illinois algorithm.
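Regula falsi with this Illinois modification can be sketched as follows (an illustrative implementation; the name `illinois`, the `side` bookkeeping variable and the stopping rule are our own choices):

```python
def illinois(f, a, b, tol=1e-12, max_iter=100):
    """Regula falsi with the Illinois fix: halve the function value at an
    endpoint that is retained twice in a row."""
    fa, fb = f(a), f(b)
    side = 0  # -1: b was just replaced, +1: a was just replaced
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # root of the secant line
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:          # root in [a, c]: replace b, endpoint a is stale
            b, fb = c, fc
            if side == -1:       # a retained twice in a row -> halve f(a)
                fa *= 0.5
            side = -1
        else:                    # root in [c, b]: replace a, endpoint b is stale
            a, fa = c, fc
            if side == +1:       # b retained twice in a row -> halve f(b)
                fb *= 0.5
            side = +1
    return c
```

On the badly behaved example above, f(x) = x^10 - 1/10 on [0, 1], the halving prevents the left endpoint's large residual from pinning the iterates near one side of the interval.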

Newton-Raphson

Suppose we approximate f(x) by g(x), the truncated Taylor series of f about the point x_k, so

g(x) = f(x_k) + (x - x_k) f'(x_k).

Then g(x) has a root at

c = x_k - \frac{f(x_k)}{f'(x_k)}.

Thus the Newton-Raphson method requires an initial guess x_0 and then iterates

x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}

until convergence is achieved. If the initial guess is sufficiently close to the solution, the algorithm has quadratic convergence.
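The iteration can be sketched in a few lines (a minimal illustration; the names `newton` and `fprime` and the step-size stopping criterion are our own choices):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:   # successive iterates agree to tolerance
            return x
    return x

# Example: sqrt(2) as the positive root of f(x) = x^2 - 2.
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
```

Starting from x_0 = 1, the quadratic convergence mentioned above means the number of correct digits roughly doubles each iteration, so only a handful of steps are needed.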

Newton-Raphson Modifications

The iterate for the secant method is

x_{k+1} = x_k - f(x_k) \frac{x_k - x_{k-1}}{f(x_k) - f(x_{k-1})}.

This avoids the need to compute f'(x_k). For damped Newton we use

x_{k+1} = x_k - \sigma_k \frac{f(x_k)}{f'(x_k)},

where \sigma_k ∈ (0, 1]. A third-order variation is Halley's method:

x_{k+1} = x_k - \frac{2 f(x_k) f'(x_k)}{2 (f'(x_k))^2 - f(x_k) f''(x_k)}.

This works if the roots x^* satisfy f'(x^*) ≠ 0, and is then Newton's method applied to the function f(x)/\sqrt{f'(x)}.
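The secant iteration above can be sketched as follows (our own names and stopping criterion; the guard against a flat secant line is an added safety check, not from the slides):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: Newton with f'(x_k) replaced by a finite difference."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:          # flat secant line; cannot take another step
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1
```

Note that two starting guesses are needed instead of one, and no derivative evaluations are required.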

Multivariate Root Finding

Higher Dimensions

The problem is now: find x such that f(x) = 0, where f : R^n → R^n. Algorithms based on the bisection algorithm are not extendable to higher dimensions. Algorithms based on Newton's method are!

Newton's Method

Again we approximate f(x) by the first terms in its Taylor series:

f(x + \delta x) \approx f(x) + J(x)\,\delta x,

where J(x) is the Jacobian evaluated at the point x,

J = \begin{pmatrix}
\frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\
\frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_n} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \cdots & \frac{\partial f_n}{\partial x_n}
\end{pmatrix}.

In order to get the Newton iterate we solve f(x + \delta x) = 0, i.e.

J(x)\,\delta x = -f(x),

so that

J(x_k)(x_{k+1} - x_k) = -f(x_k).

Newton's Method

We can also write the Newton iterate as

x_{k+1} = x_k - (J(x_k))^{-1} f(x_k).

Thus a damped version is possible as in 1D:

x_{k+1} = x_k - \sigma_k (J(x_k))^{-1} f(x_k).

Newton's Method and Newton Fractals

One of the problems with using Newton's method is choosing an initial guess. Generally, root-finding problems have many solutions (think of polynomials), and different initial guesses lead to different solutions with different numbers of iterations. Newton fractals illustrate this behaviour in the complex plane.

Newton Fractals

For example, suppose we consider the function f(z) = z^3 - 1. This has three roots, at z = 1 and z = (-1 ± √3 i)/2. By writing z = x + iy and equating real and imaginary parts we get the system

x^3 - 3xy^2 - 1 = 0,
3x^2 y - y^3 = 0,

which can be solved using Newton's method. For each point (x, y) ∈ [-2, 2]^2 we compute a solution using Newton's method and record which root it converges to. A Newton fractal is a plot of this information. Shading using the number of iterations for convergence is also possible.
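The Newton iteration for this 2-by-2 system can be written out directly, with the Jacobian entries computed by hand (the helper name `newton_z3` and the Cramer's-rule solve of J δ = -f are our own choices):

```python
def newton_z3(x, y, tol=1e-12, max_iter=50):
    """Newton's method for x^3 - 3xy^2 - 1 = 0, 3x^2 y - y^3 = 0,
    i.e. the real/imaginary parts of z^3 - 1 = 0."""
    for _ in range(max_iter):
        f1 = x**3 - 3 * x * y**2 - 1
        f2 = 3 * x**2 * y - y**3
        # Jacobian entries of (f1, f2) with respect to (x, y)
        j11, j12 = 3 * x**2 - 3 * y**2, -6 * x * y
        j21, j22 = 6 * x * y, 3 * x**2 - 3 * y**2
        det = j11 * j22 - j12 * j21
        # Solve J (dx, dy)^T = -(f1, f2)^T by Cramer's rule
        dx = (-f1 * j22 + f2 * j12) / det
        dy = (-f2 * j11 + f1 * j21) / det
        x, y = x + dx, y + dy
        if dx * dx + dy * dy < tol**2:
            break
    return x, y
```

To draw a Newton fractal one would run this from each starting point on a grid over [-2, 2]^2 and colour the point by the root reached; for example, starting on the positive real axis the iteration converges to the real root z = 1.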

Newton Fractal Example

[Figure: Newton fractal for f(z) = z^3 - 1.]