Example - Newton-Raphson Method
- Lawrence Cox
- 5 years ago
We now consider the following example:

    minimize f(x) = (1/4)x^4 - (1/2)x^2

Since f'(x) = x^3 - x and f''(x) = 3x^2 - 1, we form the following iteration:

    x_{n+1} = x_n - f'(x_n)/f''(x_n)
            = x_n - (x_n^3 - x_n)/(3x_n^2 - 1)
            = 2x_n^3/(3x_n^2 - 1)

We can now try the iteration for different initial guesses. An initial guess such as x_0 = 2 gives the iterates 1.4545, 1.1511, 1.0255, 1.0009, ..., converging to the stationary point x = 1, while an initial guess close to the origin, such as x_0 = 0.01, converges to the stationary point x = 0.

We can now evaluate f(x) at the two stationary points we've found:

    f(0) = 0,   f(1) = -1/4

and since f''(1) = 2 > 0, x* = 1 is a local minimum. In fact you can convince yourself that it is the global minimum. (Note that the global maximum is at x = ±∞.)
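A minimal Python sketch of this iteration (the function name `newton_min` is mine; it assumes the objective f(x) = x^4/4 - x^2/2, so the Newton update simplifies to x ← 2x^3/(3x^2 - 1)):

```python
def newton_min(x0, tol=1e-12, max_iter=50):
    """Newton-Raphson on f'(x) = x^3 - x, i.e. minimizing f(x) = x^4/4 - x^2/2."""
    x = x0
    for _ in range(max_iter):
        x_new = 2 * x**3 / (3 * x**2 - 1)  # x - f'(x)/f''(x), simplified
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

x_min = newton_min(2.0)    # converges to the stationary point x = 1
x_sad = newton_min(0.01)   # converges to the stationary point x = 0
```

Note that the update is undefined at x = ±1/sqrt(3), where f''(x) = 0; the guesses above avoid those points.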
Interval Reduction Using Function Samples Only

Start from an initial bracketed interval I_0 = [a, b] with two interior sample points c < d:

    consider f_1: f_1(d) < f_1(c)  =>  I_1 = [c, b]
    consider f_2: f_2(d) > f_2(c)  =>  I_1 = [a, d]

For the interval-halving search the samples x_L, x_m, x_u are placed at the quarter points of [a, b].

Algorithm: Interval Halving Search
1.  input a, b, eps_min                 // input the initial interval
2.  set: delta = b - a, x_m = a + delta/2, f_m = f(x_m)
3.  while delta > eps_min repeat
4.      set: x_L = a + delta/4, f_L = f(x_L)
5.      x_u = b - delta/4, f_u = f(x_u)
6.      if f_L < f_m then
7.          set b = x_m, x_m = x_L, f_m = f_L, delta = b - a
8.      else if f_u < f_m
9.          set a = x_m, x_m = x_u, f_m = f_u, delta = b - a
10.     else
11.         set a = x_L, b = x_u, delta = b - a
12.     end if
13.     end if
14. end while
15. output [a, b]                       // final interval size is less than eps_min

After 2n + 1 function evaluations the interval is reduced by

    I_{2n+1}/I_0 = (1/2)^n,   n = 1, 2, 3, ...

where I_0 is the size of the initial interval.
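The steps above can be sketched in Python (`interval_halving` and its argument names are mine):

```python
def interval_halving(f, a, b, eps_min):
    """Shrink a bracket [a, b] of a unimodal f using only function samples.

    Two new evaluations per pass (at the quarter points); the bracket is
    halved each pass, so 2n + 1 evaluations give a reduction of (1/2)^n.
    """
    delta = b - a
    x_m = a + delta / 2
    f_m = f(x_m)
    while delta > eps_min:
        x_l = a + delta / 4
        f_l = f(x_l)
        x_u = b - delta / 4
        f_u = f(x_u)
        if f_l < f_m:
            b, x_m, f_m = x_m, x_l, f_l   # minimum lies in the left half
        elif f_u < f_m:
            a, x_m, f_m = x_m, x_u, f_u   # minimum lies in the right half
        else:
            a, b = x_l, x_u               # minimum lies in the middle half
        delta = b - a
    return a, b
```

For example, `interval_halving(lambda x: (x - 0.3)**2, 0.0, 1.0, 1e-4)` returns a bracket of width at most 1e-4 around the minimizer 0.3.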
Fibonacci Search Technique

Question: How should we pick N successive points in an interval at which to perform function evaluations, such that the final reduced interval is the smallest possible irrespective of the properties of f(x)?

    N = 2:  initial interval [0, 2], sample points at 1 and 1 + delta, new interval [0, 1 + delta]
            (the best location for two function evaluations)
    N = 3:  initial interval [0, 3], final interval [1, 2]
    N = 4:  initial interval [0, 5], final interval [4, 5]
    N = 5:  initial interval [0, 8], final interval [4, 5]

These are examples of optimal sampling locations. The interval lengths 2, 3, 5, 8, ... are the Fibonacci sequence of numbers:

    F_0 = F_1 = 1,   F_i = F_{i-1} + F_{i-2},   i >= 2.

Given an interval of proportion F_i, the two sample points are placed at F_{i-2} and F_{i-1}.
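A small sketch of the sequence and the resulting reductions (the helper name `fib` is mine): after N evaluations the interval shrinks by 1/F_N, and the per-step ratio F_i/F_{i+1} approaches the golden section (sqrt(5) - 1)/2 ≈ 0.618.

```python
def fib(i):
    """Fibonacci numbers with the convention F_0 = F_1 = 1."""
    a, b = 1, 1
    for _ in range(i):
        a, b = b, a + b
    return a

reduction = [1 / fib(n) for n in range(2, 6)]        # I_N / I_0 for N = 2..5
ratios = [fib(i) / fib(i + 1) for i in range(2, 8)]  # per-step reductions
```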
Fibonacci-type sampling: for an initial interval [0, F_i] with sample points at F_{i-2} and F_{i-1}, the new interval will be either

    [0, F_{i-1}]   or   [F_{i-2}, F_i].

The first has size F_{i-1}; the second has size F_i - F_{i-2} = F_{i-1}, and therefore they are the same size. The next point is chosen as either F_{i-3} or F_i - F_{i-3}. The interval reduction after N function evaluations is equal to

    I_N/I_0 = 1/F_N.

It has been shown that this is the best possible reduction for a given number of function evaluations. At each step the reduction is equal to F_i/F_{i+1}. The drawback of this procedure is that you must start with F_N and work backwards down the sequence.

Golden Section Search

Given a normalized interval [0, 1], where should two points x_1 and x_2 (with x_2 < x_1) be chosen such that:

1.  the size of the reduced interval is independent of the function; and
2.  only one function evaluation is needed per interval reduction?

    if f(x_2) < f(x_1) then the new interval is [0, x_1]
    if f(x_1) < f(x_2) then the new interval is [x_2, 1]

This is the golden-section interval reduction; the initial interval is [0, 1].
Therefore by condition 1 above x_2 = 1 - x_1, and by condition 2 the surviving point must divide the reduced interval in the same proportion, so that with tau = x_1,

    tau^2 = 1 - tau   =>   tau = (sqrt(5) - 1)/2 ≈ 0.618,

where we call tau the golden section and theta = 1 - tau = tau^2 ≈ 0.382 the golden mean. After each function evaluation and comparison, theta of the interval is eliminated, or tau of the original interval remains. Thus after N function evaluations an interval of original size L will be reduced to a size L·tau^N.

Algorithm: Golden Section Search
1.  input a_1, b_1, tol
2.  set c_1 = a_1 + (1 - tau)(b_1 - a_1), F_c = F(c_1)
3.  d_1 = b_1 - (1 - tau)(b_1 - a_1), F_d = F(d_1)
4.  for k = 1, 2, ... repeat
5.      if F_c < F_d then
6.          set a_{k+1} = a_k, b_{k+1} = d_k, d_{k+1} = c_k
7.          c_{k+1} = a_{k+1} + (1 - tau)(b_{k+1} - a_{k+1})
8.          F_d = F_c, F_c = F(c_{k+1})
9.      else
10.         set a_{k+1} = c_k, b_{k+1} = b_k, c_{k+1} = d_k
11.         d_{k+1} = b_{k+1} - (1 - tau)(b_{k+1} - a_{k+1})
12.         F_c = F_d, F_d = F(d_{k+1})
13.     end
14. end until b_{k+1} - a_{k+1} < tol

Notes: iterate until the interval is smaller than tol.
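A sketch of the golden-section algorithm in Python (`golden_section` is my name for it):

```python
TAU = (5 ** 0.5 - 1) / 2           # golden section: tau^2 = 1 - tau

def golden_section(f, a, b, tol):
    """Reduce a bracket [a, b] of a unimodal f by the factor tau per evaluation."""
    c = a + (1 - TAU) * (b - a)    # interior points, c < d
    d = b - (1 - TAU) * (b - a)
    f_c, f_d = f(c), f(d)
    while b - a > tol:
        if f_c < f_d:
            b, d, f_d = d, c, f_c          # keep [a, d]; old c becomes new d
            c = a + (1 - TAU) * (b - a)
            f_c = f(c)                     # the single new evaluation
        else:
            a, c, f_c = c, d, f_d          # keep [c, b]; old d becomes new c
            d = b - (1 - TAU) * (b - a)
            f_d = f(d)                     # the single new evaluation
    return a, b
```

The point that survives each reduction already sits in the golden-section position of the new interval, which is exactly why only one new evaluation is needed.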
Polynomial Interpolation Methods

In an interval of uncertainty the function is approximated by a polynomial, and the minimum of the polynomial is used to predict the minimum of the function.

Quadratic Interpolation

If F(x) = p·x^2 + q·x + r is the interpolating quadratic, we need to find the coefficients p, q, and r given the end points of the interval [a, b] and a point c such that a < c < b, with

    F_a = F(a),  F_b = F(b),  F_c = F(c).

The coefficients satisfy the matrix equation

    [ a^2  a  1 ] [ p ]   [ F_a ]
    [ b^2  b  1 ] [ q ] = [ F_b ]
    [ c^2  c  1 ] [ r ]   [ F_c ]

which can be easily solved analytically. Once we have the coefficients we need to find the minimum of F(x), and thus

    dF/dx (x*) = 2p·x* + q = 0,

and therefore we only need the q and p coefficients. These can be found using Cramer's rule; up to the common Vandermonde determinant, which cancels in the ratio,

    p  ∝  F_a(b - c) + F_b(c - a) + F_c(a - b)
    q  ∝  -[F_a(b^2 - c^2) + F_b(c^2 - a^2) + F_c(a^2 - b^2)]

and therefore

    x* = -q/(2p) = (1/2) [F_a(b^2 - c^2) + F_b(c^2 - a^2) + F_c(a^2 - b^2)]
                       / [F_a(b - c) + F_b(c - a) + F_c(a - b)].
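The closed-form minimizer can be checked numerically (a sketch; `quad_min` is my name for it). For a function that is exactly quadratic the formula recovers the true minimizer in one step:

```python
def quad_min(a, b, c, F_a, F_b, F_c):
    """Minimizer x* = -q/(2p) of the quadratic through (a,F_a), (b,F_b), (c,F_c)."""
    num = F_a * (b**2 - c**2) + F_b * (c**2 - a**2) + F_c * (a**2 - b**2)
    den = F_a * (b - c) + F_b * (c - a) + F_c * (a - b)
    return 0.5 * num / den

f = lambda x: (x - 1.5)**2 + 4.0
x_star = quad_min(0.0, 3.0, 1.0, f(0.0), f(3.0), f(1.0))  # recovers 1.5
```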
Notice that x* must lie in [a, b] if F_c < F_a and F_c < F_b (a bracketing interval). Also note that F''(x*) = 2p, and therefore if p > 0 the quadratic has a local minimum at x*, whereas if p < 0 it has a local maximum. We now evaluate f(x*) and compare it to f(c) in order to reduce the interval of uncertainty:

    if f(x*) > f(c_k): the interval on the far side of x* from c_k is discarded,
                       and the new centre point is c_{k+1} = c_k;
    if f(x*) < f(c_k): the interval on the far side of c_k from x* is discarded,
                       and the new centre point is c_{k+1} = x*.

In either case the new interval [a_{k+1}, b_{k+1}] still brackets the minimum.
Algorithm: Quadratic Interpolation Search
1.  input a_1, b_1, c_1, tol, ftol
2.  set F_a = F(a_1), F_b = F(b_1), F_c = F(c_1)
3.  for k = 1, 2, ... repeat
4.      set x* = (1/2) [F_a(b_k^2 - c_k^2) + F_b(c_k^2 - a_k^2) + F_c(a_k^2 - b_k^2)]
                     / [F_a(b_k - c_k) + F_b(c_k - a_k) + F_c(a_k - b_k)]
5.      F = F(x*)
6.      if x* > c_k and F < F_c then
7.          set a_{k+1} = c_k, b_{k+1} = b_k, c_{k+1} = x*
8.          F_a = F_c, F_c = F
9.      else if x* > c_k and F > F_c then
10.         set a_{k+1} = a_k, b_{k+1} = x*, c_{k+1} = c_k
11.         F_b = F
12.     else if x* < c_k and F > F_c then
13.         set a_{k+1} = x*, b_{k+1} = b_k, c_{k+1} = c_k
14.         F_a = F
15.     else
16.         set a_{k+1} = a_k, b_{k+1} = c_k, c_{k+1} = x*
17.         F_b = F_c, F_c = F
18.     end
19. end until (b_{k+1} - a_{k+1}) < tol or |F(c_k) - F(c_{k+1})| / |F(c_k)| < ftol

Notes: stop when the interval is reduced to tol, or when the relative change in the function value is smaller than ftol.
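The search loop above, sketched in Python (names mine; the guards against a degenerate fit are my additions, which the pseudocode leaves implicit):

```python
def quad_search(f, a, b, c, tol=1e-8, ftol=1e-10, max_iter=100):
    """Quadratic-interpolation search on a bracket a < c < b with f(c) < f(a), f(b)."""
    F_a, F_b, F_c = f(a), f(b), f(c)
    for _ in range(max_iter):
        den = F_a * (b - c) + F_b * (c - a) + F_c * (a - b)
        if den == 0 or b - a < tol:
            break                                   # degenerate fit or converged
        num = F_a * (b**2 - c**2) + F_b * (c**2 - a**2) + F_c * (a**2 - b**2)
        x = 0.5 * num / den                         # minimizer of the quadratic
        if x == c:
            break                                   # interpolant cannot improve c
        F_x, F_c_prev = f(x), F_c
        if x > c and F_x < F_c:
            a, c, F_a, F_c = c, x, F_c, F_x         # new bracket [c_k, b_k]
        elif x > c:
            b, F_b = x, F_x                         # new bracket [a_k, x*]
        elif F_x > F_c:
            a, F_a = x, F_x                         # new bracket [x*, b_k]
        else:
            b, c, F_b, F_c = c, x, F_c, F_x         # new bracket [a_k, c_k]
        if abs(F_c_prev - F_c) < ftol:
            break                                   # function value has stalled
    return c
```

On the Newton-Raphson example f(x) = x^4/4 - x^2/2 with the bracket (0.5, 1.5) and c = 1.2, this homes in on the minimizer x = 1.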
Bounding Phase

Question: how do we determine the initial interval? In general we perform a coarse search to bound, or bracket, the minimum x*. This is also called interval location. We will discuss two types of interval location:

1.  function comparison (Swann's Bracketing Method); and
2.  polynomial extrapolation (Powell's Method).

Both of these methods are heuristic in nature, but they are about the best we can provide in terms of an algorithm. We start with a description of Swann's bracketing method.

Function Comparison: Swann's Bracketing Method

Given an initial step length delta and a starting point a, we check the point a distance delta away on the right side of a. If f(a + delta) < f(a) then we will move to the right; if f(a + delta) > f(a) then we will move to the left (let's say we move to the right). We now magnify the step size by a magnification factor, say 2, and evaluate the function at the point 2·delta away from the latest point, and so on with successively magnified steps. As soon as any new function evaluation shows an increase, the last two intervals provide a bound, or interval of uncertainty, for our minimum.

[Figure: move to the right until a function evaluation increases; the last two intervals, with end points a_1, b_1, a_2, b_2, a_3, b_3, a_4, b_4, form the interval of uncertainty. Example of Swann's bracketing method with magnification factor 2.0.]
Algorithm: Swann's Bracketing Method (based on a heuristic expanding pattern)
1.  input: x_0, delta           // initial point and step size
2.  set: x_l = x_0 - delta      // lower test point
3.  x_u = x_0 + delta           // upper test point
4.  f_l = f(x_l)                // lower function value
5.  f_u = f(x_u)                // upper function value
6.  f_0 = f(x_0)                // central function value
7.  i = 1                       // expanding exponent
8.  if f_l >= f_0 >= f_u        // case 1: move to the right
9.      loop-up: i = i + 1
10.     set: x_l = x_0, x_0 = x_u, f_l = f_0, f_0 = f_u
11.     x_u = x_u + 2^i·delta   // shift up in x by a magnified amount
12.     f_u = f(x_u)
13.     if f_u < f_0 go to loop-up
14.     else output (x_l, x_u)
15. if f_l <= f_0 <= f_u        // case 2: move to the left
16.     loop-down: i = i + 1
17.     set: x_u = x_0, x_0 = x_l, f_u = f_0, f_0 = f_l
18.     x_l = x_l - 2^i·delta   // shift down in x by a magnified amount
19.     f_l = f(x_l)
20.     if f_l < f_0 go to loop-down
21.     else output (x_l, x_u)
22. if f_l >= f_0 <= f_u        // case 3: initial interval is a bracket
23.     output (x_l, x_u)
24. if f_l <= f_0 >= f_u        // case 4: error: non-unimodal function
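Swann's expanding pattern, sketched in Python (`swann` is my name; the inequalities in the four cases follow the listing above):

```python
def swann(f, x0, delta):
    """Bracket a minimum by doubling an expanding step until f increases."""
    x_l, x_u = x0 - delta, x0 + delta
    f_l, f_0, f_u = f(x_l), f(x0), f(x_u)
    i = 1
    if f_l >= f_0 >= f_u:              # case 1: move to the right
        while True:
            i += 1
            x_l, x0, f_l, f_0 = x0, x_u, f_0, f_u
            x_u = x_u + 2**i * delta   # magnified step
            f_u = f(x_u)
            if f_u >= f_0:
                return x_l, x_u
    if f_l <= f_0 <= f_u:              # case 2: move to the left
        while True:
            i += 1
            x_u, x0, f_u, f_0 = x0, x_l, f_0, f_l
            x_l = x_l - 2**i * delta   # magnified step
            f_l = f(x_l)
            if f_l >= f_0:
                return x_l, x_u
    if f_l >= f_0 <= f_u:              # case 3: initial points already bracket
        return x_l, x_u
    raise ValueError("non-unimodal function")   # case 4
```

For example, `swann(lambda x: (x - 5.0)**2, 0.0, 0.5)` expands to the right and returns a bracket containing the minimizer 5.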
Polynomial Extrapolation: Powell's Method

Here we use polynomial extrapolation from the initial starting point. Given a starting point a and two more points delta apart, we fit a polynomial through them (we will use a quadratic function). In this case the fitted quadratic may have either a maximum or a minimum, which may be determined by evaluating the second derivative. As before, if the polynomial is represented as

    F(x) = p·x^2 + q·x + r

then the second derivative is F''(x) = 2p, and in terms of the three function values F_a, F_b, F_c evaluated at a, b, and c we can evaluate p as

    p = [(c - b)F_a + (a - c)F_b + (b - a)F_c] / [(b - c)(c - a)(a - b)].

Now how do we use this to find an interval? Starting from three points a, c, and b, each delta apart, we fit a quadratic F(x) to these points. If we find that this quadratic has a maximum, that is p < 0, then the next point we take is delta_max away from the smallest function value. (In the figure below, for the sake of an example, we assume that we move to the right.)

Case 1: F(x) has a maximum, i.e. p < 0. The extrapolated polynomial has a maximum, therefore we use the step delta_max: we discard the point with the smallest function value (here b) and take the new point delta_max away from b. Discarding the lowest function value gives us a better chance of fitting a polynomial with a minimum in the next iteration of the algorithm.
If the polynomial does have a minimum, then we either use the minimum of the polynomial, as in case 3, or we again use delta_max if the minimum is farther than delta_max from b, as in case 2.

Case 2: F(x) has a minimum, i.e. p > 0, but x* - b > delta_max. The extrapolated polynomial has a minimum, but it is too far away, so we use the step delta_max.

Case 3: F(x) has a minimum, i.e. p > 0, and x* - b < delta_max. The extrapolated polynomial has a minimum and it is closer than delta_max from b, so we use x*.

Algorithm: Interval Location by Powell's Method
1.  input a_1, delta, delta_max
2.  set c_1 = a_1 + delta, F_a = F(a_1), F_c = F(c_1)
3.  if F_a > F_c then
4.      set b_1 = a_1 + 2·delta, F_b = F(b_1)
5.      forward = true
6.  else
7.      set b_1 = c_1, c_1 = a_1, a_1 = a_1 - delta
8.      F_b = F_c, F_c = F_a, F_a = F(a_1)
9.      forward = false
10. end
11. for k = 1, 2, ... repeat
12.     set p = [(c_k - b_k)F_a + (a_k - c_k)F_b + (b_k - a_k)F_c]
              / [(b_k - c_k)(c_k - a_k)(a_k - b_k)]
13.     if p > 0 then
14.         set x* = (1/2) [(b_k^2 - c_k^2)F_a + (c_k^2 - a_k^2)F_b + (a_k^2 - b_k^2)F_c]
                         / [(b_k - c_k)F_a + (c_k - a_k)F_b + (a_k - b_k)F_c]
15.     end if
16.     if forward then                      // moving forward
17.         if p <= 0 then                   // quadratic has a maximum
18.             set a_{k+1} = a_k, b_{k+1} = b_k + delta_max
19.             c_{k+1} = c_k, F_b = F(b_{k+1})
20.         else
21.             if x* - b_k > delta_max then // quadratic minimum is too far
22.                 set b_{k+1} = b_k + delta_max
23.             else                         // quadratic minimum is O.K.
24.                 set b_{k+1} = x*
25.             end
26.             set a_{k+1} = c_k, c_{k+1} = b_k
27.             F_a = F_c, F_c = F_b, F_b = F(b_{k+1})
28.         end
29.     else                                 // moving backward (forward = false)
30.         if p <= 0 then                   // quadratic has a maximum
31.             set a_{k+1} = a_k - delta_max, b_{k+1} = b_k
32.             c_{k+1} = c_k, F_a = F(a_{k+1})
33.         else
34.             if a_k - x* > delta_max then // quadratic minimum is too far
35.                 set a_{k+1} = a_k - delta_max
36.             else                         // quadratic minimum is O.K.
37.                 set a_{k+1} = x*
38.             end
39.             set b_{k+1} = c_k, c_{k+1} = a_k
40.             F_b = F_c, F_c = F_a, F_a = F(a_{k+1})
41.         end if
42.     end
43. end until F_c < F_a and F_c < F_b
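A condensed Python sketch of Powell's interval location (names mine; the iteration cap and the fallback return of the current end points are my safeguards):

```python
def powell_bracket(f, a, delta, dmax, max_iter=50):
    """Quadratic-extrapolation bracketing with the step capped at dmax."""
    c = a + delta
    F_a, F_c = f(a), f(c)
    if F_a > F_c:                       # function decreasing: move forward
        b = a + 2 * delta
        F_b = f(b)
        forward = True
    else:                               # otherwise move backward
        b, c, a = c, a, a - delta
        F_b, F_c = F_c, F_a
        F_a = f(a)
        forward = False
    for _ in range(max_iter):
        p = (((c - b) * F_a + (a - c) * F_b + (b - a) * F_c)
             / ((b - c) * (c - a) * (a - b)))
        if p > 0:                       # fitted quadratic has a minimum at x
            x = 0.5 * (((b**2 - c**2) * F_a + (c**2 - a**2) * F_b + (a**2 - b**2) * F_c)
                       / ((b - c) * F_a + (c - a) * F_b + (a - b) * F_c))
        if forward:
            if p <= 0:                  # maximum: discard b, step dmax beyond it
                b += dmax
            else:
                b_next = b + dmax if x - b > dmax else x
                a, c = c, b             # shift the three points forward
                F_a, F_c = F_c, F_b
                b = b_next
            F_b = f(b)
        else:
            if p <= 0:                  # maximum: step dmax to the left
                a -= dmax
            else:
                a_next = a - dmax if a - x > dmax else x
                b, c = c, a             # shift the three points backward
                F_b, F_c = F_c, F_a
                a = a_next
            F_a = f(a)
        if F_c < F_a and F_c < F_b:     # middle point is lowest: bracket found
            return a, b
    return a, b
```

For a function that flattens out away from the minimum, such as f(x) = sqrt(1 + (x - 6)^2), the extrapolated minimum overshoots, the dmax cap kicks in, and the loop steps forward until the middle point is the lowest of the three.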
6. Trigonometric Equations In this section, we discuss conditional trigonometric equations, that is, equations involving trigonometric functions that are satisfied only by some values of the variable (or
More informationUNIT-II INTERPOLATION & APPROXIMATION
UNIT-II INTERPOLATION & APPROXIMATION LAGRANGE POLYNAMIAL 1. Find the polynomial by using Lagrange s formula and hence find for : 0 1 2 5 : 2 3 12 147 : 0 1 2 5 : 0 3 12 147 Lagrange s interpolation formula,
More information32. Method of Lagrange Multipliers
32. Method of Lagrange Multipliers The Method of Lagrange Multipliers is a generalized approach to solving constrained optimization problems. Assume that we are seeking to optimize a function z = f(, y)
More informationNo EFFICIENT LINE SEARCHING FOR CONVEX FUNCTIONS. By E. den Boef, D. den Hertog. May 2004 ISSN
No. 4 5 EFFICIENT LINE SEARCHING FOR CONVEX FUNCTIONS y E. den oef, D. den Hertog May 4 ISSN 94-785 Efficient Line Searching for Convex Functions Edgar den oef Dick den Hertog 3 Philips Research Laboratories,
More informationJustify all your answers and write down all important steps. Unsupported answers will be disregarded.
Numerical Analysis FMN011 10058 The exam lasts 4 hours and has 13 questions. A minimum of 35 points out of the total 70 are required to get a passing grade. These points will be added to those you obtained
More informationInformed Search. Chap. 4. Breadth First. O(Min(N,B L )) BFS. Search. have same cost BIBFS. Bi- Direction. O(Min(N,2B L/2 )) BFS. have same cost UCS
Informed Search Chap. 4 Material in part from http://www.cs.cmu.edu/~awm/tutorials Uninformed Search Complexity N = Total number of states B = Average number of successors (branching factor) L = Length
More informationExample 1: What do you know about the graph of the function
Section 1.5 Analyzing of Functions In this section, we ll look briefly at four types of functions: polynomial functions, rational functions, eponential functions and logarithmic functions. Eample 1: What
More informationCE 601: Numerical Methods Lecture 7. Course Coordinator: Dr. Suresh A. Kartha, Associate Professor, Department of Civil Engineering, IIT Guwahati.
CE 60: Numerical Methods Lecture 7 Course Coordinator: Dr. Suresh A. Kartha, Associate Professor, Department of Civil Engineering, IIT Guwahati. Drawback in Elimination Methods There are various drawbacks
More informationH-A2T THE INTEGERS UNIT 1 POLYNOMIALS AND THE NUMBER LINE (DAY 1)
H-AT THE INTEGERS UNIT POLYNOMIALS AND THE NUMBER LINE (DAY ) Warm-Up: Find the solution set of the following inequality: 0 5 < 40 INEQUALITY SOLUTION SETS Solve the inequality equation Graph the solution
More informationMathematical Methods for Numerical Analysis and Optimization
Biyani's Think Tank Concept based notes Mathematical Methods for Numerical Analysis and Optimization (MCA) Varsha Gupta Poonam Fatehpuria M.Sc. (Maths) Lecturer Deptt. of Information Technology Biyani
More informationComputational Methods. Solving Equations
Computational Methods Solving Equations Manfred Huber 2010 1 Solving Equations Solving scalar equations is an elemental task that arises in a wide range of applications Corresponds to finding parameters
More informationNumerical Solution of f(x) = 0
Numerical Solution of f(x) = 0 Gerald W. Recktenwald Department of Mechanical Engineering Portland State University gerry@pdx.edu ME 350: Finding roots of f(x) = 0 Overview Topics covered in these slides
More information9 - Markov processes and Burt & Allison 1963 AGEC
This document was generated at 8:37 PM on Saturday, March 17, 2018 Copyright 2018 Richard T. Woodward 9 - Markov processes and Burt & Allison 1963 AGEC 642-2018 I. What is a Markov Chain? A Markov chain
More informationMTH603 FAQ + Short Questions Answers.
Absolute Error : Accuracy : The absolute error is used to denote the actual value of a quantity less it s rounded value if x and x* are respectively the rounded and actual values of a quantity, then absolute
More informationTo find the absolute extrema on a continuous function f defined over a closed interval,
Question 4: How do you find the absolute etrema of a function? The absolute etrema of a function is the highest or lowest point over which a function is defined. In general, a function may or may not have
More information8.4 Integration of Rational Functions by Partial Fractions Lets use the following example as motivation: Ex: Consider I = x+5
Math 2-08 Rahman Week6 8.4 Integration of Rational Functions by Partial Fractions Lets use the following eample as motivation: E: Consider I = +5 2 + 2 d. Solution: Notice we can easily factor the denominator
More informationBisection and False Position Dr. Marco A. Arocha Aug, 2014
Bisection and False Position Dr. Marco A. Arocha Aug, 2014 1 Given function f, we seek x values for which f(x)=0 Solution x is the root of the equation or zero of the function f Problem is known as root
More informationPART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435
PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435 Professor Biswa Nath Datta Department of Mathematical Sciences Northern Illinois University DeKalb, IL. 60115 USA E mail: dattab@math.niu.edu
More informationCS325: Analysis of Algorithms, Fall Final Exam
CS: Analysis of Algorithms, Fall 0 Final Exam I don t know policy: you may write I don t know and nothing else to answer a question and receive percent of the total points for that problem whereas a completely
More informationBasic methods to solve equations
Roberto s Notes on Prerequisites for Calculus Chapter 1: Algebra Section 1 Basic methods to solve equations What you need to know already: How to factor an algebraic epression. What you can learn here:
More informationSIMPLEX LIKE (aka REDUCED GRADIENT) METHODS. REDUCED GRADIENT METHOD (Wolfe)
19 SIMPLEX LIKE (aka REDUCED GRADIENT) METHODS The REDUCED GRADIENT algorithm and its variants such as the CONVEX SIMPLEX METHOD (CSM) and the GENERALIZED REDUCED GRADIENT (GRG) algorithm are approximation
More information