Chapter 09.02 Newton's Method


After reading this chapter, you should be able to:
1. understand how Newton's method is different from the Golden Section Search method,
2. understand how Newton's method works, and
3. solve one-dimensional optimization problems using Newton's method.

How is Newton's method different from the Golden Section Search method?

The Golden Section Search method requires explicitly indicating lower and upper boundaries for the search region in which the optimal solution lies. Such methods, where the boundaries need to be specified, are known as bracketing approaches, in the sense that the optimal solution is bracketed by these boundaries. Newton's method is an open (instead of bracketing) approach, where the optimum of the one-dimensional function f(x) is found using an initial guess of the optimal value, without the need to specify lower and upper boundary values for the search region. Unlike the bracketing approaches, open approaches are not guaranteed to converge. However, if they do converge, they do so much faster than bracketing approaches. Therefore, open approaches are more useful when there is reasonable evidence that the initial guess is close to the optimal value. Otherwise, if there is doubt about the quality of the initial guess, it is advisable to use a bracketing approach to bring the guess closer to the optimal value and then switch to an open approach, benefiting from the advantages presented by both techniques.

What is Newton's method and how does it work?

Newton's method is an open approach to finding the minimum or the maximum of a function f(x). It is very similar to the Newton-Raphson method (http://numericalmethods.eng.usf.edu/topics/newton_raphson.html) for finding the roots of a function, that is, a value of x such that f(x) = 0. Since the derivative of the function satisfies f'(x) = 0 at the function's maxima and minima, the minima and the maxima can be found by applying the Newton-Raphson method to the derivative f'(x), essentially obtaining

x_{i+1} = x_i - f'(x_i) / f''(x_i)     (1)

We caution that before using Newton's method to determine the minimum or the maximum of a function, one should have a reasonably good estimate of the solution to ensure convergence, and the function should be easily twice differentiable.
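Eqn. (1), together with the stopping test described in the step-by-step procedure later in this chapter, translates directly into code. The following Python sketch is only an illustration of the idea, not part of the original notes: the function name newton_opt, its arguments, and the percent tolerance default are our own choices, and the first and second derivatives are assumed to be supplied by the user.

def newton_opt(fp, fpp, x0, tol_pct=1.0e-6, max_iter=50):
    # Locate a minimum or maximum of f by driving f'(x) to zero.
    # fp and fpp are callables returning f'(x) and f''(x).
    x = x0
    for _ in range(max_iter):
        x_new = x - fp(x) / fpp(x)  # Eqn. (1)
        if x_new != 0.0:
            # absolute relative approximate error, in percent
            ea = abs((x_new - x) / x_new) * 100.0
            if ea < tol_pct:
                return x_new
        x = x_new
    return x

Whether the point returned is a maximum or a minimum can be checked from the sign of f''(x) there: negative indicates a maximum and positive a minimum, as the observations in Example 1 illustrate.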

Derivation of the Newton-Raphson Equation

(Figure 1: the slope of F(x) at point x_i; the tangent at x_i intersects the x-axis at x_{i+1}.)

We wish that in the next iteration x_{i+1} will be the root, or F(x_{i+1}) = 0. Thus the slope C at point x_i satisfies

C = F'(x_i) = (F(x_i) - 0) / (x_i - x_{i+1})

Hence:

x_{i+1} = x_i - F(x_i) / F'(x_i)

Remarks:
1. If F(x) = f'(x), then x_{i+1} = x_i - f'(x_i) / f''(x_i), which is Eqn. (1).
2. For the multi-variable case, the Newton-Raphson method becomes x_{i+1} = x_i - [∇²f(x_i)]⁻¹ ∇f(x_i), where ∇f is the gradient and ∇²f is the Hessian matrix.
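To make the multi-variable remark concrete, here is a minimal NumPy sketch of a single Newton step. The names and the decision to solve a linear system rather than invert the Hessian are our own, and the gradient and Hessian callables are assumed to be supplied by the user.

import numpy as np

def newton_step(grad, hess, x):
    # One multi-variable Newton step:
    # x_{i+1} = x_i - [Hessian(x_i)]^(-1) * gradient(x_i),
    # computed by solving hess(x) dx = grad(x) rather than
    # forming the matrix inverse explicitly.
    dx = np.linalg.solve(hess(x), grad(x))
    return x - dx

Solving the linear system is numerically preferable to forming the inverse, although the two are mathematically equivalent.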

Step-by-step use of Newton's method

The following algorithm implements Newton's method to determine the maximum or minimum of a function f(x).

Initialization
Determine a reasonably good estimate x_0 for the maximum or the minimum of the function f(x).

Step 1
Determine f'(x) and f''(x).

Step 2
Substitute x_i (the initial estimate x_0 for the first iteration), f'(x_i), and f''(x_i) into Eqn. (1) to determine x_{i+1} and the function value in iteration i.

Step 3
If the value of the first derivative of the function is zero, then you have reached the optimum (maximum or minimum). Otherwise, repeat Step 2 with the new value of x_i until the absolute relative approximate error is less than the pre-specified tolerance.

Example 1
Consider Figure 2 below. The cross-sectional area A of a gutter with equal base and edge length of 2 is given by

A = 4 sin θ (1 + cos θ)

Find the angle θ which maximizes the cross-sectional area of the gutter.

Figure 2: Cross section of the gutter

Solution
The function to be maximized is

f(θ) = 4 sin θ (1 + cos θ)

The first and second derivatives of the function are shown below.

f'(θ) = 4(cos θ + cos²θ - sin²θ)
f''(θ) = -4 sin θ (1 + 4 cos θ)

Let us use θ_0 = π/4 as the initial estimate of θ. Using Eqn. (1), we can calculate the first iteration as follows:

θ_1 = θ_0 - f'(θ_0) / f''(θ_0)
    = π/4 - [4(cos(π/4) + cos²(π/4) - sin²(π/4))] / [-4 sin(π/4)(1 + 4 cos(π/4))]
    = 1.0466

The function is evaluated at the first estimate as f(1.0466) = 5.1962.
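The hand calculation above can be reproduced with a short script built from the derivative expressions just derived (a sketch; the variable names are ours, and the quoted output assumes standard double-precision arithmetic).

from math import sin, cos, pi

f   = lambda t: 4.0 * sin(t) * (1.0 + cos(t))           # area A(theta)
fp  = lambda t: 4.0 * (cos(t) + cos(t)**2 - sin(t)**2)  # f'(theta)
fpp = lambda t: -4.0 * sin(t) * (1.0 + 4.0 * cos(t))    # f''(theta)

theta = pi / 4.0                             # initial estimate theta_0
theta -= fp(theta) / fpp(theta)              # Eqn. (1), first iteration
print(round(theta, 4), round(f(theta), 4))   # prints: 1.0466 5.1962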

The next iteration uses θ_1 = 1.0466 as the best estimate of θ. Using Eqn. (1) again, the second iteration is calculated as follows:

θ_2 = θ_1 - f'(θ_1) / f''(θ_1)
    = 1.0466 - [4(cos 1.0466 + cos²1.0466 - sin²1.0466)] / [-4 sin 1.0466 (1 + 4 cos 1.0466)]
    = 1.0472

The iterations continue until the solution converges to a single optimal solution. Summary results of all the iterations are shown in Table 1.

Several important observations can be made from the five iterations. At each iteration, the magnitude of the first derivative gets smaller and approaches zero. A value of zero for the first derivative tells us that we have reached the optimum, and we can stop. Also note that the sign of the second derivative is negative, which tells us that we are at a maximum. This value would have been positive if we had reached a minimum. The solution tells us that the optimal angle is 1.0472. Remember that the actual solution to the problem is at 60 degrees, or 1.0472 radians. See Example 1 in the Golden Section Search method (http://numericalmethods.eng.usf.edu/topics/opt_goldensearch.html).

Table 1. Summary of iterations for Example 1

Iteration   θ_i       f'(θ_i)       f''(θ_i)   θ_{i+1}   f(θ_{i+1})
1           0.7854    2.8284        -10.828    1.0466    5.1962
2           1.0466    0.0061898     -10.396    1.0472    5.1962
3           1.0472    1.0632E-06    -10.392    1.0472    5.1962
4           1.0472    3.0641E-14    -10.392    1.0472    5.1962
5           1.0472    1.3331E-15    -10.392    1.0472    5.1962

OPTIMIZATION
Topic: Newton's Method
Summary: Textbook notes for Newton's method
Major: All engineering majors
Authors: Ali Yalcin
Date: August 7
Web Site: http://numericalmethods.eng.usf.edu