Root Finding For NonLinear Equations Bisection Method


Root Finding For NonLinear Equations: Bisection Method

P. Sam Johnson (NITK)
November 17, 2014

Introduction

The growth of a population can be modeled over short periods of time by assuming that the population grows continuously with time at a rate proportional to the number present at that time. If we let N(t) denote the number at time t and λ denote the constant birth rate of the population, then the population satisfies the differential equation

dN(t)/dt = λN(t).

The solution to this equation is N(t) = N_0 e^{λt}, where N_0 denotes the initial population.

This exponential model is valid only when the population is isolated, with no immigration. If immigration is permitted at a constant rate ν, then the differential equation becomes

dN(t)/dt = λN(t) + ν,

whose solution is

N(t) = N_0 e^{λt} + (ν/λ)(e^{λt} − 1).

Suppose a certain population contains 1,000,000 individuals initially, that 435,000 individuals immigrate into the community in the first year, and that 1,564,000 individuals are present at the end of one year. To determine the birth rate of this population, we must solve for λ in the equation

1,564,000 = 1,000,000 e^λ + (435,000/λ)(e^λ − 1).

Numerical methods are used to approximate solutions of equations of this type when the exact solutions cannot be obtained by algebraic methods.
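To make the problem concrete, here is a minimal Python sketch (not part of the original notes; the bracket [0.1, 0.11] is an assumption chosen by inspection) that defines the residual of the population equation and confirms a sign change, so the bisection method developed later in these notes can be applied.

    import math

    def f(lam):
        # Residual of the population equation: zero exactly at the birth rate lambda.
        return 1_000_000 * math.exp(lam) + (435_000 / lam) * (math.exp(lam) - 1) - 1_564_000

    # f changes sign on [0.1, 0.11], so a root lies in that interval;
    # the birth rate turns out to be roughly 0.101 per year.
    print(f(0.10) < 0 < f(0.11))   # True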

Babylon and the Square Root of 2

We consider first one of the most basic problems of numerical approximation, the root-finding problem. This process involves finding a root, or solution, of an equation of the form f(x) = 0, for a given function f. A root of this equation is also called a zero of the function f.

The problem of finding an approximation to the root of an equation can be traced back at least as far as 1700 BC. A cuneiform tablet in the Yale Babylonian Collection dating from that period gives a sexagesimal (base-60) number equivalent to 1.414222 as an approximation to √2, a result that is accurate to within 10^{−5}.

An Impressively Good Approximation to the Square Root of 2

This is a Babylonian clay tablet from around 1700 BC. It is known as YBC7289, since it is one of many in the Yale Babylonian Collection. It is a diagram of a square with one side marked as having length 1/2. They took this length, multiplied it by the square root of 2, and got the length of the diagonal. Since the Babylonians used base 60, they thought of 1/2 as 30/60. But since they had not invented anything like a decimal point, they wrote it as 30. More precisely, they wrote the approximation as

1 + 24/60 + 51/60² + 10/60³ ≈ 1.41421297...

This is an impressively good approximation to √2 ≈ 1.41421356...

Algebraic Equations

An algebraic equation is a mathematical statement of equality between algebraic expressions. An expression is algebraic if it involves a finite combination of numbers, variables and algebraic operations (addition, subtraction, multiplication, division, raising to a power). Two important types of such equations are linear equations, of the form y = ax + b, and quadratic equations, of the form y = ax² + bx + c. A solution is a numerical value that makes the equation a true statement when substituted for a variable.

For example, x⁵ − 3x + 1 = 0 is an algebraic equation with integer coefficients, and

y⁴ + xy/2 = x³/3 − xy² + y² − 1/7

is a multivariate polynomial equation over the rationals.

Polynomial and Transcendental Equations

A polynomial is an expression consisting of variables (or indeterminates) and coefficients that involves only the operations of addition, subtraction, multiplication, and non-negative integer exponents. An example of a polynomial in a single indeterminate (or variable) x is x² − 4x + 7, which is a quadratic polynomial. Polynomials appear in a wide variety of areas of mathematics and science.

A transcendental equation is an equation containing transcendental functions (for example, exponential, logarithmic, trigonometric, or inverse trigonometric functions) of the unknowns. Examples of transcendental equations are sin x + log x = x and 2x log x = arccos x.

Approximations in Numerical Analysis

Many algorithms in numerical analysis are iterative methods that produce a sequence {x_n} of approximate solutions which, ideally, converges to a limit that is the exact solution as n approaches infinity. Because we can only perform a finite number of iterations, we cannot obtain the exact solution, and we have introduced computational error. If our iterative method is properly designed, then this computational error will approach zero as n approaches infinity. However, it is important that we obtain a sufficiently accurate approximate solution using as few computations as possible. Therefore, it is not practical to simply perform enough iterations so that the computational error is determined to be sufficiently small, because it is possible that another method may yield comparable accuracy with less computational effort.

The total computational effort of an iterative method depends on both the effort per iteration and the number of iterations performed. Therefore, in order to determine the amount of computation that is needed to attain a given accuracy, we must be able to measure the error in x_n as a function of n. The more rapidly this function approaches zero as n approaches infinity, the more rapidly the sequence of approximations {x_n} converges to the exact solution, and as a result, fewer iterations are needed to achieve a desired accuracy. We now introduce some terminology that will aid in the discussion of the convergence behavior of iterative methods.

Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation. The letter O is used because the growth rate of a function is also referred to as the order of the function. A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function.

Let f and g be two functions defined on some subset of the real numbers. One writes f(x) = O(g(x)) as x → ∞ if and only if there is a positive constant M such that, for all sufficiently large values of x, the absolute value of f(x) is at most M multiplied by the absolute value of g(x). That is, f(x) = O(g(x)) if and only if there exist a positive real number M and a real number x_0 such that

|f(x)| ≤ M |g(x)| for all x ≥ x_0.

In many contexts, the assumption that we are interested in the growth rate as the variable x goes to infinity is left unstated, and one writes more simply that f(x) = O(g(x)).

The notation can also be used to describe the behavior of f near some real number a (often, a = 0): we say f(x) = O(g(x)) as x → a if and only if there exist positive numbers δ and M such that

|f(x)| ≤ M |g(x)| for |x − a| < δ.

If g(x) is non-zero for values of x sufficiently close to a, both of these definitions can be unified using the limit superior: f(x) = O(g(x)) as x → a if and only if

lim sup_{x → a} |f(x)| / |g(x)| < ∞.

Rate of Convergence

As sequences are functions whose domain is N, the set of natural numbers, we can apply big-O notation to sequences. Therefore, this notation is useful to describe the rate at which a sequence of computations converges to a limit.

Definition. Let {x_n} and {y_n} be sequences that satisfy

lim_{n→∞} x_n = x,  lim_{n→∞} y_n = 0,

where x is a real number. We say that {x_n} converges to x with rate of convergence O(y_n) if there exist a positive real number M and a real number n_0 such that

|x_n − x| ≤ M |y_n| for all n ≥ n_0.

We say that an iterative method converges rapidly, in some sense, if it produces a sequence of approximate solutions whose rate of convergence is O(y_n), where the terms of the sequence y_n approach zero rapidly as n approaches infinity. Intuitively, if two iterative methods for solving the same problem perform a comparable amount of computation during each iteration, but one method exhibits a faster rate of convergence, then that method should be used, because it will require less overall computational effort to obtain an approximate solution that is sufficiently accurate.

Order of Convergence

A sequence of iterates {x_n : n ≥ 0} is said to converge with order p ≥ 1 to a point α if

|α − x_{n+1}| ≤ c |α − x_n|^p,  n ≥ 0,  (1)

for some c > 0, called the asymptotic error constant. If p = 1, the sequence is said to converge linearly to α. In that case, we require c < 1; the constant c is called the rate of linear convergence of x_n to α.

In general, a sequence with a high order of convergence converges more rapidly than a sequence with a lower order. The asymptotic constant affects the speed of convergence but is not as important as the order.

Taylor's Theorem

Example. The sequences {(n + 1)/(n + 2)} and {(2n² + 4n)/(n² + 2n + 1)} converge to 1 and 2 with rates of convergence O(1/n) and O(1/n²), respectively.

Theorem. Let f(x) have n + 1 continuous derivatives on [a, b], and let x_0, x = x_0 + h ∈ [a, b]. Then

f(x_0 + h) = Σ_{k=0}^{n} [f^{(k)}(x_0)/k!] h^k + O(h^{n+1}).

Bisection Method

The bisection method is based on the Intermediate Value Theorem. Suppose f is a continuous function defined on [a, b], with f(a) and f(b) of opposite sign. By the Intermediate Value Theorem, there exists a number α in (a, b) with f(α) = 0. Although the procedure will work when there is more than one root in the interval (a, b), we assume for simplicity that the root in this interval is unique. The method calls for a repeated halving of subintervals of [a, b] and, at each step, locating the half containing α.

Algorithm

To begin, set a_1 = a and b_1 = b, and let p_1 be the midpoint of [a, b]; that is,

p_1 = a_1 + (b_1 − a_1)/2 = (a_1 + b_1)/2  (first approximation).

If f(p_1) = 0, then α = p_1, and we are done. If f(p_1) ≠ 0, then f(p_1) has the same sign as either f(a_1) or f(b_1).

1. When f(a_1) and f(p_1) have the same sign, α ∈ (p_1, b_1), and we set a_2 = p_1 and b_2 = b_1.
2. When f(a_1) and f(p_1) have opposite signs, α ∈ (a_1, p_1), and we set a_2 = a_1 and b_2 = p_1.

We then reapply the process to the interval [a_2, b_2] to get the second approximation p_2.
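The procedure above translates directly into a short program. The following Python sketch is not the notes' own code; the names bisect, tol and max_iter are illustrative. It halves the bracketing interval until its half-length falls below a tolerance, mirroring the error bound proved later.

    def bisect(f, a, b, tol=1e-8, max_iter=100):
        """Approximate a zero of f in [a, b]; f(a) and f(b) must have opposite signs."""
        if f(a) * f(b) > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        for _ in range(max_iter):
            p = a + (b - a) / 2              # midpoint (numerically safer than (a + b) / 2)
            fp = f(p)
            if fp == 0 or (b - a) / 2 < tol:
                return p                     # exact zero hit, or interval small enough
            if f(a) * fp > 0:
                a = p                        # sign change lies in (p, b)
            else:
                b = p                        # sign change lies in (a, p)
        return a + (b - a) / 2               # midpoint after max_iter halvings

    # The equation f(x) = x^3 + 4x^2 - 10 = 0 treated later in these notes:
    print(bisect(lambda x: x**3 + 4 * x**2 - 10, 1, 2))   # approximately 1.36523

Writing the midpoint as a + (b − a)/2 rather than (a + b)/2 is a common safeguard against rounding error; the notes give both forms as mathematically equal.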

Stopping Procedures

We can select a tolerance ε > 0 and generate p_1, p_2, ..., p_N until one of the following conditions is met:

|p_N − p_{N−1}| < ε,  (2)
|f(p_N)| < ε,  (3)
|p_N − p_{N−1}| / |p_N| < ε,  p_N ≠ 0.  (4)

Difficulties can arise when the first and second stopping criteria are used. For example, there are sequences (p_n)_{n=1}^∞ with the property that the differences p_n − p_{n−1} converge to zero while the sequence itself diverges. It is also possible for f(p_n) to be close to zero while p_n differs significantly from α.
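As a minimal illustration (added here, not from the original slide; the tolerance value and function names are assumptions), the three tests can be written as small predicates, with p the current and p_prev the previous approximation:

    eps = 1e-6                          # chosen tolerance (illustrative value)

    def stop_abs(p, p_prev):            # criterion (2): absolute difference
        return abs(p - p_prev) < eps

    def stop_residual(f, p):            # criterion (3): small residual
        return abs(f(p)) < eps

    def stop_rel(p, p_prev):            # criterion (4): relative difference, p != 0
        return p != 0 and abs(p - p_prev) / abs(p) < eps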

How to apply the bisection algorithm?

An interval [a, b] must be found with f(a) · f(b) < 0. As at each step the length of the interval known to contain a zero of f is reduced by a factor of 2, it is advantageous to choose the initial interval [a, b] as small as possible. For example, if f(x) = 2x³ − x² + x − 1, we have both f(−4) · f(4) < 0 and f(0) · f(1) < 0, so the bisection algorithm could be used on either of the intervals [−4, 4] or [0, 1]. However, starting the bisection algorithm on [0, 1] instead of [−4, 4] reduces by 3 the number of iterations required to achieve a specified accuracy.

Best Stopping Criterion

Without additional knowledge about f and α, the inequality

|p_N − p_{N−1}| / |p_N| < ε,  p_N ≠ 0,

is the best stopping criterion to apply, because it comes closest to testing the relative error. For example, the iteration process is terminated when the relative error is less than 0.0001; that is, when

|α − p_n| / |α| < 10^{−4}.

How to find the relative error bound?

How do we bound the relative error |α − p_n| / |α|? The following theorem answers the question.

Theorem. Suppose that f ∈ C[a, b] and f(a) · f(b) < 0. The bisection method generates a sequence (p_n)_{n=1}^∞ approximating a zero α of f with

|p_n − α| ≤ (b − a) / 2^n,  for n ≥ 1.

The method has the important property that it always converges to a solution, and for that reason it is often used as a starter for more efficient methods. But it is slow to converge. It is important to realize that the above theorem gives only a bound for the approximation error and that this bound may be quite conservative.

Number of Iterations Needed?

To determine the number of iterations necessary to solve f(x) = x³ + 4x² − 10 = 0 with accuracy 10^{−3} using a_1 = 1 and b_1 = 2, we must find an integer N that satisfies

|p_N − α| ≤ 2^{−N}(b − a) = 2^{−N} < 10^{−3}.

A simple calculation shows that ten iterations will ensure an approximation accurate to within 10^{−3}. Again, it is important to keep in mind that the error analysis gives only a bound for the number of iterations, and in many cases this bound is much larger than the actual number required.
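Spelling out the "simple calculation" (a worked step added here for completeness, not on the original slide): since b − a = 1,

2^{−N} < 10^{−3}  ⟺  N log_{10} 2 > 3  ⟺  N > 3 / log_{10} 2 ≈ 9.97,

so the smallest integer that satisfies the bound is N = 10.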

How to find an approximation which is correct to at least s significant digits?

Find the iteration number n_0 such that

(b_{n_0} − a_{n_0}) / |a_{n_0}| ≤ (1/2) × 10^{−s+1}.

Since

|α − p_{n_0 − 1}| / |α| ≤ (b_{n_0} − a_{n_0}) / |a_{n_0}| ≤ (1/2) × 10^{−s+1},

p_{n_0 − 1} is an approximation which is correct to at least s significant digits. As you have also calculated p_{n_0}, you can instead take p_{n_0}, which is a (good) approximation correct to at least s significant digits.

Order of Convergence

Let p_n denote the nth approximation of α in the algorithm. Then it is easy to see that

α = lim_{n→∞} p_n  and  |α − p_n| ≤ (1/2)^n (b − a),  (5)

where b − a denotes the length of the original interval. From the inequality

|α − p_n| ≤ (1/2) |α − p_{n−1}|,  n ≥ 1,

we say that the bisection method converges linearly with a rate of 1/2. The actual error may not decrease by a factor of 1/2 at each step, but the average rate of decrease is 1/2.

References

Richard L. Burden and J. Douglas Faires, Numerical Analysis: Theory and Applications, Cengage Learning, New Delhi, 2005.

Kendall E. Atkinson, An Introduction to Numerical Analysis, John Wiley & Sons, Delhi, 1989.