Intro to Scientific Computing: How long does it take to find a needle in a haystack?

Dr. David M. Goulet

Intro

Binary Sorting

Suppose that you have a detector that can tell you if a needle is in a haystack, but not where in the haystack it is. You could divide the stack into two smaller stacks and use your detector to figure out which of them the needle is in. Being clever, you decide to repeat this process again and again until the stacks are small enough for you to see the needle yourself. How many times will you need to apply your stack-halving algorithm before you find the needle?

Root Finding

Finding roots of polynomials is one of the fundamental problems in applied mathematics. You've probably seen the quadratic formula,

    x = (-b ± √(b^2 - 4ac)) / (2a),

which gives the roots of the quadratic polynomial p_2 = ax^2 + bx + c. What you may never have seen are the cubic and quartic formulas, which give the roots of p_3 = ax^3 + bx^2 + cx + d and p_4 = ax^4 + bx^3 + cx^2 + dx + e. It is an interesting fact, one which required hundreds of years and the genius of Évariste Galois to prove, that there is no quintic formula. In fact, Galois proved that there is no general formula for finding the roots of p_k when k ≥ 5. Because of this, mathematicians have developed many clever ways to estimate the roots of polynomials.

The Connection

It may seem odd, but a certain class of methods for finding roots of polynomials is directly related to the needle problem. Each of these problems can be approached using what are called iterative numerical methods. Below we'll develop some of these methods, show how they can be used to solve important problems, and show how they can be efficiently implemented on a computer using Matlab.
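As a quick Matlab illustration of the quadratic formula (the polynomial below is an arbitrary example, not one from the text), the two branches of the formula can be checked against Matlab's built-in roots function:

    a = 1; b = -3; c = 2;                    % arbitrary example: p(x) = x^2 - 3x + 2
    r1 = (-b + sqrt(b^2 - 4*a*c))/(2*a);     % "+" branch of the quadratic formula
    r2 = (-b - sqrt(b^2 - 4*a*c))/(2*a);     % "-" branch
    [r1, r2]                                 % displays 2 and 1
    roots([a b c])'                          % Matlab's built-in root finder agrees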

Sorting

The Bisection Method

Suppose you have 16 objects and you want to separate them. One way to do this is to split them into piles of 8, then split each of these into piles of 4, each of those into piles of 2, etc. In this way, you can separate your 16 objects in 4 steps.

    (16) → (8, 8) → (4, 4, 4, 4) → (2, 2, 2, 2, 2, 2, 2, 2) → (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1)

In a similar way, you could separate a group of 2^n objects in n steps. If you start with a group of k objects, even if k is not a power of 2, you can apply a slight variation of this algorithm to separate your objects. If the number of objects in a group is even, then you can just split the group in half. If the number is odd, then you can put the extra one in either pile. For example,

    (13) → (6, 7) → (3, 3, 3, 4) → (1, 2, 1, 2, 1, 2, 2, 2) → (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1)

In this way, a set of k objects can be separated in 1 + [log_2 k] steps. Here [x] is the largest integer ≤ x.

Problems

1. How many steps does it take to separate 32 objects into groups of 4?

2. How many steps does it take to separate 23 objects into groups of less than 3?

3. If your haystack is originally a 10 meter cube, and you can't see the needle until the stack is a 1 centimeter cube, how many steps will the bisection method require?

4. If you have a set of spheres, cubes, and pyramids, each in three different colors, how many steps of the algorithm will guarantee that they are sorted into groups of the same geometry and color?

5. A king is placed somewhere on a chess board, but you are not allowed to look at the board. You are allowed to pick a vertex, and this vertex is defined as the origin of a coordinate system. You are then told which quadrant the king lies in. How many iterations of this quad-section method will guarantee that you find the king?

Finding Roots

The Intermediate Value Theorem says that if you have a continuous function f(x) on the interval x ∈ [a, b] with f(a) < n and f(b) > n, then there is some number c in the interval (a, b) such that f(c) = n. In other words, if a continuous function takes on values above and below some number n, then it also takes on the value n.

For example, f(x) = 2x - 6 has the properties that f(-1) = -8 and f(10) = 14, so there is some number c in the interval [-1, 10] so that f(c) = 2. In this case it is easy to show that f(4) = 2·4 - 6 = 2.

Problems

1. Use the theorem to show that f(x) = x^2 - 1 has a root somewhere in the interval [0, 3].

2. Use the theorem to show that f(x) = e^x - (x + 2)cos(x) has a root somewhere in the interval [0, π/2]. (A quick numerical check of these sign changes appears right after this problem list.)

3. If a runner finishes a 10 mile race in 60 minutes, was there a 1 mile section that the runner ran in exactly 6 minutes?
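The sign changes in problems 1 and 2 can be checked numerically in Matlab before doing any root finding. The handle names f1 and f2 are just illustrative, and problem 2 is read here as f(x) = e^x - (x + 2)cos(x):

    f1 = @(x) x.^2 - 1;                   % problem 1
    f2 = @(x) exp(x) - (x + 2).*cos(x);   % problem 2, assumed reading
    % A negative product means opposite signs at the endpoints, so the
    % Intermediate Value Theorem guarantees a root strictly in between.
    f1(0)*f1(3)        % negative: a root lies in (0, 3)
    f2(0)*f2(pi/2)     % negative: a root lies in (0, pi/2)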

The Bisection Method

Consider the polynomial p(x) = x^2 - 1. Notice that p(0) = -1 and that p(3) = 8. By the intermediate value theorem, there is some c ∈ [0, 3] so that p(c) = 0. We can use the bisection method to approximate this root as accurately as we want. The idea is to cut the interval in half and apply the intermediate value theorem to each half, then repeat.

So, we split the interval into [0, 3/2] and [3/2, 3]. Notice that p(3/2) = 5/4 > 0, so there is a root on [0, 3/2] but not necessarily on [3/2, 3]. So we focus on [0, 3/2] and split it again, into [0, 3/4] and [3/4, 3/2]. Because p(3/4) = -7/16 < 0, we know that there is a root on [3/4, 3/2].

Notice that at each step the interval of interest is only half the size of the previous interval. So we are slowly closing in on the answer. If the original interval has length L, then after n steps the interval has length 2^(-n) L. We may never arrive at the exact solution this way, but we can get closer and closer.

In the example above, the interval is [3/4, 3/2]. The exact solution then must be close to the middle of this interval, 9/8. How close? The error can be no more than half the length of the interval. So our estimate is 9/8 ± 3/8. If we continue this process for 2 more steps, we get the estimate 33/32 ± 3/32. The error is now less than 1/10 and our estimate is 1.03125. This is close to the exact answer of 1.

In general, if we start with an interval of length L, then after n steps the interval will have length 2^(-n) L, and our estimate will be the midpoint of this interval with an error no greater than 2^(-(n+1)) L.

Example

Notice that f(x) = x - sin^2(x) has f(-π/2) = -π/2 - 1 < 0 and f(π) = π > 0. So there is a root somewhere in (-π/2, π). We can use the bisection method to estimate this root accurate to 2 decimal places. The original interval has length 3π/2, so we need to choose n, the number of steps, so that 2^(-n) (3π/2) < 0.01. We find n = 9. Below is the sequence of midpoints of the intervals.

    0.7854...  -0.3927...  0.1963...  -0.0982...  0.0491...  -0.0245...  0.0123...  -0.0061...  0.0031...

So we know that 0.0031 is an approximation of the solution accurate to within 0.01. This agrees with the exact answer, which we know is 0.

Problems

1. If f(0) = -1 and f(8) = 1, how many steps of the bisection method will be required to find an approximation to the root of f(x) accurate to 0.25?

2. You've built a machine that shoots darts at a target. By aiming it at an angle θ = π/3, your first dart lands 10 cm above the center of the bull's eye. You adjust the angle to θ = π/4, and your second dart lands 10 cm below the center of the bull's eye. Can the bisection method be used to find the correct angle? Can we say with certainty how many steps will be required?¹

¹ This relates to solving a certain type of differential equation problem. In that context it is called, not surprisingly, a shooting method.

Numerical Methods

The bisection method can be used to find roots of functions by following an algorithm. Consider the function f(x).

1. Find two values, a and b with a < b, so that f(a) and f(b) have opposite signs.

2. Define the midpoint of the interval, m = (a + b)/2.

3. If f(m) = 0, then stop because you've found a root. If f(m) has the same sign as f(a), then there is a root on (m, b). If it has the same sign as f(b), then there is a root on (a, m).

4. Redefine a and b to be the endpoints of the new interval of interest.

Repeat the above steps until the interval is as small as you desire.

We can write this as a computer algorithm in Matlab. For example, if f(x) = 2x and a = -2 and b = 1, then we use the following code to find an approximation of the root accurate to 0.1.

    a=-2; b=1; error=0.1;
    fa=2*a;                      % f(a), tracks the sign at the left endpoint
    while(b-a>2*error)
      m=(a+b)/2;                 % midpoint of the current interval
      f=2*m;                     % f(m)
      if f==0
        a=m; b=m;                % landed exactly on a root, so stop
      elseif fa*f<0
        b=m;                     % sign change on (a,m): keep the left half
      else
        a=m; fa=f;               % sign change on (m,b): keep the right half
      end
    end

Or, for the example above with f(x) = x - sin^2(x) on [-π/2, π], if we want an approximation accurate to 0.01, we use the following code.

    a=-pi/2; b=pi; error=0.01;
    fa=a-sin(a)^2;               % f(a)
    while(b-a>2*error)
      m=(a+b)/2;
      f=m-sin(m)^2;              % f(m)
      if f==0
        a=m; b=m;
      elseif fa*f<0
        b=m;
      else
        a=m; fa=f;
      end
    end
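The same loop can also be packaged as a reusable function that takes a function handle. This is only a possible refactoring; the name bisect and its argument list are choices made here, not something from the handout.

    function r = bisect(f, a, b, tol)
    % BISECT  Approximate a root of f on [a,b] by repeated halving.
    % Assumes f(a) and f(b) have opposite signs.
    fa = f(a);
    while (b - a > 2*tol)
      m = (a + b)/2;
      fm = f(m);
      if fm == 0
        a = m; b = m;            % landed exactly on a root
      elseif fa*fm < 0
        b = m;                   % sign change on (a,m): root lies in (a,m)
      else
        a = m; fa = fm;          % sign change on (m,b): root lies in (m,b)
      end
    end
    r = (a + b)/2;               % midpoint estimate, error no greater than tol

With this saved as bisect.m, the second example above becomes bisect(@(x) x - sin(x)^2, -pi/2, pi, 0.01).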

Problems

1. Approximate a root of the polynomial x^5 - 3x^4 - 6x^3 + 18x^2 + 8x - 24 with an accuracy of at least 0.0001. From this, guess what the exact value is and then use this information to factor the polynomial. Then apply the algorithm again to find another root. Can you find all five roots this way?

2. Obviously f(x) = x^4 has a root at x = 0. Can the bisection method approximate this root?

3. The bisection method does not help when searching for the roots of f(x) = x^2 + 1. Why? Can you think of a way to modify the bisection method to find roots of f(x)?

1 Newton's Method

Figure 1: The geometry of Newton's method.

Suppose that you make two initial guesses for the location of the root of f(x) = 0. Call these guesses x_0 and x_1. The figure suggests that if you draw a straight line through the points (x_0, f(x_0)) and (x_1, f(x_1)), then the point x_2 where this line crosses the x-axis will be closer to the root. The equation of the line through the first two guesses is

    y = [f(x_1) - f(x_0)]/(x_1 - x_0) · (x - x_1) + f(x_1).

Setting y = 0 gives

    0 = [f(x_1) - f(x_0)]/(x_1 - x_0) · (x_2 - x_1) + f(x_1),

which we can solve for x_2:

    x_2 = x_1 - f(x_1)(x_1 - x_0)/[f(x_1) - f(x_0)].

This suggests an iterative scheme for finding roots:

    x_{n+1} = x_n - f(x_n)(x_n - x_{n-1})/[f(x_n) - f(x_{n-1})].

Here is an example of using this algorithm to find the root of f(x) = x^3 - 1 with initial guesses x(1) = 2 and x(2) = 0.

    x(1)=2; x(2)=0;              % the two initial guesses
    error=1; n=2;
    while (error>0.01) && (n<100)
      % two-point update for f(x) = x^3 - 1
      x(n+1)=x(n)-(x(n)^3-1)*(x(n)-x(n-1))/(x(n)^3-x(n-1)^3);
      n=n+1;
      error=abs(x(n)^3-1)        % no semicolon, so the error is printed each pass
    end

This algorithm proceeds until |f(x)| ≤ 0.01 or until it has made 100 guesses, whichever comes first. The reason for putting a limit on the number of guesses is that the algorithm may not converge and so may not terminate.

Note: If you've learned Calculus, then you'll understand why

    x_{n+1} = x_n - f(x_n)/f'(x_n)

is another useful iterative scheme. In this form, it is known as Newton's Method.
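For comparison with the two-point code above, a minimal sketch of this derivative-based update for the same function f(x) = x^3 - 1, whose derivative is 3x^2, might look like the following; the starting guess and stopping rule are arbitrary choices, not from the handout.

    x = 2; err = 1; n = 0;            % arbitrary starting guess
    while (err > 0.01) && (n < 100)
      x = x - (x^3 - 1)/(3*x^2);      % Newton update: x - f(x)/f'(x)
      err = abs(x^3 - 1);
      n = n + 1;
    end
    x                                 % displays the estimate of the root at x = 1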

Figure 2: The convergence of Newton's method for the root of x^3 - 1 = 0. (The plot's final estimate is labeled x = 1.0005 ± 0.0016362.)

1. Does this method appear to converge for f(x) = x^2 - 1, x_0 = 3, and x_1 = 2?

2. Can you think of a way to apply these ideas to higher dimensional problems? Apply your idea to finding the root of f(x, y) = x^2 + y^2.

1.1 Convergence

What are some reasons that this scheme may or may not converge?

Converge

Firstly, we can write

    |x_{n+1} - x_n| = |x_n - x_{n-1}| · |f(x_n)| / |f(x_n) - f(x_{n-1})|.

If we could somehow be sure that the second term is always < 1, then we have a contraction mapping. A decreasing sequence, bounded below, converges. So lim_{n→∞} |x_{n+1} - x_n| exists. Moreover,

    |x_{n+1} - x_n| ≤ r |x_n - x_{n-1}| ≤ r^2 |x_{n-1} - x_{n-2}| ≤ ... ≤ r^n |x_1 - x_0|

with r < 1. Therefore the series Σ_{i=0}^{∞} (x_{i+1} - x_i) converges absolutely, and so converges. Since x_n = x_0 + Σ_{i=0}^{n-1} (x_{i+1} - x_i), it follows that lim_{n→∞} x_n exists.

Diverge

Alternatively, suppose that this quantity is always > 1. Then our mapping is an expansion, and so is guaranteed not to converge.

Suppose the algorithm gives guesses so that f(x_n) = f(x_{n-1}). Division by zero is not defined, so the algorithm fails.

Figure 3: The failure of Newton's method.

1. Does this method appear to converge for f(x) = x^2 - sin^2(x), x_0 = 1, and x_1 = 2?

2. Does this method appear to converge for f(x) = e^x - 2, x_0 = 1, and x_1 = 2? What about x_0 = -1 and x_1 = -2? (A small numerical experiment for questions like these is sketched just below.)
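One way to experiment with these questions is to wrap the two-point update in a short script. The function handle below is problem 2 read as f(x) = e^x - 2, and the tolerance and iteration cap are arbitrary safeguards:

    f = @(x) exp(x) - 2;          % problem 2, assumed reading
    x = [1, 2];                   % initial guesses; try x = [-1, -2] to compare
    n = 2;
    while (abs(f(x(n))) > 1e-8) && (n < 50)
      % two-point update from the notes
      x(n+1) = x(n) - f(x(n))*(x(n) - x(n-1))/(f(x(n)) - f(x(n-1)));
      n = n + 1;
    end
    disp(x')                      % do the guesses settle near log(2) = 0.6931...?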

1.2 If you know some Calculus and physics...

A pendulum problem

The equation of motion of a pendulum is

    d^2θ/dt^2 + (g/l) sin θ = 0,

where g is the acceleration of gravity, l is the distance of the center of mass from the pivot point, and θ is the angular deviation from the downward rest position. We have neglected all types of friction and other dissipative forces.

Suppose that you are asked to give the pendulum a push from its rest position so that it reaches its highest point in exactly τ seconds. In this case you are solving

    θ'' + (g/l) sin θ = 0,   θ(0) = 0,   θ'(0) = ω,

and then trying to figure out what the correct value of ω should be so that θ'(τ; ω) = 0.

This again can be solved by Newton's method. Suppose that we have the solution to the pendulum equation, θ(t; ω). Then we want to know for what value of ω we have θ'(τ; ω) = 0. Calling the left hand side f(ω), we can apply Newton's method. The problem is that we can't solve for θ explicitly, so this proposal isn't going to work. What can we do?

You'll have to take the following on faith, but the best we can do is say that when the pendulum reaches its highest point we have

    ∫_0^θ ds / √(ω^2 + (2g/l)(cos(s) - 1)) = τ

and

    ω^2 + (2g/l)(cos(θ) - 1) = 0.

1. How can you combine these two ideas together with Newton's method to find the correct value of ω? (One possible way of combining them is sketched after the fintegral code below.)

2. Given the plot of f(x) = ∫_0^x ds / √(cos(s) - cos(x)), do you think Newton's method will converge? It may help to know that f(0+) = 149/(60√2) ≈ 1.756.

Figure 4: The function f(x) = ∫_0^x ds / √(cos(s) - cos(x)), plotted for x between -π and π.

If you like, you can apply Newton's method to this problem using an integrator that I wrote. After creating a new m-file with the following function in it, you can use fintegral(x, 10000) to get 10 digits of accuracy for the function f(x). Incorporate this into your Newton's method routine. (If you understand how to use these strange computers, you can download the m-file instead of retyping it.²) Be warned that the way this is coded, you can't choose values for x outside of (0, π). Do you think that this will matter when using Newton's method?

    function y=fintegral(x,n)
    % Approximates f(x) = integral from 0 to x of ds/sqrt(cos(s)-cos(x))
    % by subtracting off the integrable singularity at s = x and adding
    % its contribution back in closed form.
    dt=x/n;
    t=0:dt:x;
    sum=0;
    for i=1:length(t)-2
      sum=sum+.5*dt*( (cos(t(i))-cos(x))^(-.5) -((x-t(i))*sin(x))^(-.5) - ...
        .25*cot(x)*((x-t(i))/sin(x))^.5 + (cos(t(i+1))-cos(x))^(-.5) ...
        -((x-t(i+1))*sin(x))^(-.5) -.25*cot(x)*((x-t(i+1))/sin(x))^.5 );
    end
    y=sum+2*sqrt(x/sin(x))+(1/6)*cos(x)*(x/sin(x))^1.5 + ...
      .5*dt*( (cos(t(end-1))-cos(x))^(-.5) - ((x-t(end-1))*sin(x))^(-.5) - ...
      .25*cot(x)*((x-t(end-1))/sin(x))^.5);

² http://www.math.utah.edu/~goulet/classes/mathcircle/fintegral.m
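One possible way to read problem 1, sketched below with made-up values of g, l, and τ (none of these numbers are from the handout), is to substitute the second relation into the first. This gives τ = √(l/(2g)) · f(θ_max), where θ_max is the highest angle reached and f is the integral that fintegral approximates. The two-point scheme from the notes can then solve F(θ_max) = 0 without any derivative, and ω is recovered from ω^2 = (2g/l)(1 - cos θ_max).

    g = 9.81; l = 1.0; tau = 0.6;                        % made-up physical data
    F = @(th) sqrt(l/(2*g))*fintegral(th, 10000) - tau;  % want F(theta_max) = 0

    th = [1.0, 2.0];                                     % two guesses inside (0, pi)
    n = 2;
    while (abs(F(th(n))) > 1e-6) && (n < 50)
      % two-point update from the notes (no derivative of F needed)
      th(n+1) = th(n) - F(th(n))*(th(n) - th(n-1))/(F(th(n)) - F(th(n-1)));
      n = n + 1;
    end
    theta_max = th(n);
    omega = sqrt((2*g/l)*(1 - cos(theta_max)))           % the push that reaches the top in tau seconds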