Applied Computational Economics Workshop. Part 3: Nonlinear Equations



Overview

- Introduction
- Function iteration
- Newton's method
- Quasi-Newton methods
- Practical example
- Practical issues

Introduction

Nonlinear equations take one of two forms.

Root-finding problem: given a function f: R^n → R^n, compute an n-vector x*, called a root of f, such that f(x*) = 0.

Fixed-point problem: given a function g: R^n → R^n, compute an n-vector x*, called a fixed-point of g, such that g(x*) = x*.

Introduction

The two forms of nonlinear equations are equivalent: a root of f is a fixed-point of g(x) = x − f(x), and a fixed-point of g is a root of f(x) = x − g(x).
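The equivalence is easy to verify numerically; a minimal Python sketch (the example function f here is ours, not from the slides):

```python
import math

# Root-finding form: f(x) = 0, with root x* = sqrt(2)
f = lambda x: x**2 - 2

# Equivalent fixed-point form: g(x) = x - f(x), so g(x*) = x*
g = lambda x: x - f(x)

x_star = math.sqrt(2)
print(abs(f(x_star)))           # ~0: x* is a root of f
print(abs(g(x_star) - x_star))  # ~0: the same x* is a fixed-point of g
```

Note that the equivalence says nothing about convergence of an algorithm: here |g'(x*)| = |1 − 2·sqrt(2)| > 1, so function iteration on this particular g would actually diverge from nearby starting values.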

Introduction

Nonlinear equations arise naturally in economics:
- multi-commodity market equilibrium models
- multi-person static game models
- unconstrained optimization models

Nonlinear equations also arise indirectly when numerically solving economic models involving functional equations:
- dynamic optimization models
- rational expectations models
- arbitrage pricing models

Function Iteration

Function iteration is an algorithm for computing a fixed-point of a function g. Guess an initial value x^(0) and successively form the iterates x^(k+1) = g(x^(k)) until the iterates converge.

[Figure: Computing a fixed-point of g using function iteration]

Function Iteration: Example

To compute the fixed-point of g(x) = sqrt(x + 0.2) using function iteration, one employs the iteration rule

  x^(k+1) = g(x^(k)) = sqrt(x^(k) + 0.2)

In MATLAB:

x = 0.4;
for it=1:50
    xold = x;
    x = sqrt(x+0.2);
    if abs(x-xold)<1.e-10, break, end
end

After 28 iterations, x converges to 1.1708.

Function Iteration

When is function iteration guaranteed to converge? By the Contraction Mapping Theorem, if ||g(x) − g(y)|| ≤ δ||x − y|| for some δ < 1 and for all x and y, then g possesses a unique fixed-point x*, and function iteration will converge to it from any initial value.
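The scheme, and the contraction condition, can be sketched in Python on the slides' own example (the helper name fixpoint is ours, loosely mirroring CompEcon's fixpoint utility):

```python
import math

def fixpoint(g, x, tol=1e-10, maxit=100):
    """Function iteration: repeat x <- g(x) until successive iterates agree."""
    for it in range(1, maxit + 1):
        xold, x = x, g(x)
        if abs(x - xold) < tol:
            return x, it
    return x, maxit

g = lambda x: math.sqrt(x + 0.2)
x, it = fixpoint(g, 0.4)   # the slides' example, from the same start
# g'(x) = 1/(2*sqrt(x + 0.2)); at the fixed-point this is ~0.43 < 1,
# so g is locally a contraction and the iteration converges
print(x, it)
```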

Newton's Method

Newton's method is an algorithm for computing a root of a function f. Guess an initial value x^(0) and successively form the iterates

  x^(k+1) = x^(k) − f'(x^(k))^(−1) f(x^(k))

until the iterates converge.

Newton's Method: Example

To compute the root of f(x) = x^4 − 2 using Newton's method, one employs the iteration rule

  x^(k+1) = x^(k) − f(x^(k))/f'(x^(k)) = x^(k) − ((x^(k))^4 − 2) / (4 (x^(k))^3)

In MATLAB:

x = 2.3;
for it=1:50
    s = -(x^4-2)/(4*x^3);
    x = x + s;
    if abs(s)<1.e-10, break, end
end

After 8 iterations, x converges to 1.1892.

Newton's Method

Newton's method employs a strategy of successive linearization. The strategy calls for the nonlinear function f to be approximated by a sequence of linear functions whose roots are easily computed and, ideally, converge to the root of f. In particular, the (k+1)th iterate x^(k+1) is the root of the first-order Taylor approximation of f around the preceding iterate x^(k), viz.

  f(x) ≈ f(x^(k)) + f'(x^(k))(x − x^(k))

Setting this approximation to zero yields x^(k+1) = x^(k) − f'(x^(k))^(−1) f(x^(k)).

[Figure: Computing the root of f using Newton's method]

Newton's Method

When is Newton's method guaranteed to converge? In theory, Newton's method will converge if the initial value is close to a root of f at which the Jacobian f' is non-singular. Theory, however, provides no practical definition of "close". Moreover, in practice, Newton's method will fail to converge to x* if the Jacobian f' is ill-conditioned, i.e., nearly singular.
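A univariate Python sketch of the method, applied to the slides' example f(x) = x^4 − 2 (the helper name newton1d is ours):

```python
def newton1d(f, fprime, x, tol=1e-10, maxit=50):
    """Newton's method in one dimension: x <- x - f(x)/f'(x)."""
    for it in range(1, maxit + 1):
        step = -f(x) / fprime(x)
        x += step
        if abs(step) < tol:
            return x, it
    return x, maxit

x, it = newton1d(lambda x: x**4 - 2, lambda x: 4 * x**3, 2.3)
print(x)   # ~1.1892, i.e. 2**0.25, matching the example slide
```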

Quasi-Newton Methods

Quasi-Newton methods replace the Jacobian in Newton's method with an estimate that is easier to compute. Specifically, quasi-Newton methods use the iteration rule

  x^(k+1) = x^(k) − A_k^(−1) f(x^(k))

where A_k is an estimate of the Jacobian f'(x^(k)).

Secant Method

The quasi-Newton method for univariate root-finding problems is called the secant method. The secant method replaces the derivative in Newton's method with the estimate

  f'(x^(k)) ≈ (f(x^(k)) − f(x^(k−1))) / (x^(k) − x^(k−1))

which leads to the iteration rule

  x^(k+1) = x^(k) − (x^(k) − x^(k−1)) / (f(x^(k)) − f(x^(k−1))) · f(x^(k))

The secant method is so called because it approximates the function using secant lines drawn through successive pairs of points on its graph.
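A Python sketch of the secant iteration on the same example f(x) = x^4 − 2 used earlier (two starting points are required; both are our choice):

```python
def secant(f, x0, x1, tol=1e-10, maxit=50):
    """Secant method: Newton's method with the derivative replaced by a
    difference quotient through the two most recent iterates."""
    f0, f1 = f(x0), f(x1)
    for it in range(1, maxit + 1):
        x2 = x1 - (x1 - x0) / (f1 - f0) * f1
        if abs(x2 - x1) < tol:
            return x2, it
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1, maxit

x, it = secant(lambda x: x**4 - 2, 2.3, 2.2)
print(x)   # ~1.1892, the same root Newton's method finds
```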

[Figure: Computing the root of f using the secant method]

Broyden's Method

Broyden's method is the most popular multivariate generalization of the univariate secant method. Broyden's method replaces the Jacobian in Newton's method with an estimate that is updated at each iteration by making the smallest possible change consistent with the secant condition. This yields a somewhat complicated iteration rule that is omitted here, but which may be found in many textbooks, including Miranda and Fackler (2002).
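For reference, the "smallest change" update is the standard rank-one formula; a Python sketch under our own choices (finite-difference initial Jacobian, full unsafeguarded steps, a starting point near the root):

```python
import numpy as np

def fdjac(f, x, h=1e-7):
    """Forward-difference Jacobian estimate, used to initialize A."""
    fx = f(x)
    J = np.empty((len(fx), len(x)))
    for j in range(len(x)):
        xj = x.copy()
        xj[j] += h
        J[:, j] = (f(xj) - fx) / h
    return J

def broyden(f, x, tol=1e-10, maxit=100):
    """Broyden's method: quasi-Newton steps with a secant update of A."""
    x = np.asarray(x, dtype=float)
    A = fdjac(f, x)
    fx = f(x)
    for it in range(1, maxit + 1):
        dx = np.linalg.solve(A, -fx)   # Newton-like step using estimate A
        x = x + dx
        fx_new = f(x)
        if np.linalg.norm(dx) < tol:
            return x, it
        # smallest change to A consistent with the secant condition
        #   A_new @ dx = fx_new - fx
        A = A + np.outer(fx_new - fx - A @ dx, dx) / (dx @ dx)
        fx = fx_new
    return x, maxit

f = lambda x: np.array([x[1] * np.exp(x[0]) - 2 * x[1],
                        x[0] * x[1] - x[1] ** 3])
x, it = broyden(f, [0.6, 0.9])   # start near the root (0.6931, 0.8326)
```

Unlike CompEcon's broyden utility, this sketch has no backstepping, so it is reliable only when started close to the root.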

Quasi-Newton Methods

When are quasi-Newton methods guaranteed to converge? In theory, a quasi-Newton (e.g., Broyden's) method will converge if the initial value is close to a root of f at which the Jacobian is non-singular, and if the initial Jacobian estimate is close to the true Jacobian. Theory, however, provides no practical definition of "close". Moreover, in practice, Broyden's method will fail to converge to x* if the Jacobian estimates become ill-conditioned, i.e., nearly singular.

Numerical Examples

The CompEcon Toolbox provides two utilities for computing the root of a function:
- Utility newton uses Newton's method
- Utility broyden uses Broyden's method

Numerical Examples

The calling protocol for newton is

  [x,fval] = newton(f,x,varargin)

Input:
  f         function of the form [fval,fjac]=f(x,varargin), where fval and fjac are the function value and the Jacobian value at x, respectively
  x         initial guess for a root of f
  varargin  additional arguments for f (optional)

Output:
  x         a root of f
  fval      value of f at x

Numerical Examples

The calling protocol for broyden is

  [x,fval] = broyden(f,x,varargin)

Input:
  f         function of the form fval=f(x,varargin), where fval is the function value at x
  x         initial guess for a root of f
  varargin  additional arguments for f (optional)

Output:
  x         a root of f
  fval      value of f at x

Numerical Examples

Let us compute the root of the function f: R^2 → R^2 given by

  f(x) = [ x2*exp(x1) − 2*x2 ; x1*x2 − x2^3 ]

To use broyden, first compose a MATLAB function

function fval = f(x)
fval = [x(2)*exp(x(1))-2*x(2); x(1)*x(2)-x(2)^3];

and save it as f.m. Then, at the MATLAB command line, execute

x = [1.0;0.5];
optset('broyden','showiters',1);
[x,fval] = broyden(@f,x)

After 11 iterations, this produces x = (0.6931, 0.8326).
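Readers working in Python rather than MATLAB can solve the same system with SciPy's general-purpose root finder (our substitute, not a CompEcon utility):

```python
import numpy as np
from scipy.optimize import root

def f(x):
    return [x[1] * np.exp(x[0]) - 2 * x[1],
            x[0] * x[1] - x[1] ** 3]

sol = root(f, [1.0, 0.5])   # default Powell hybrid method from MINPACK
print(sol.x)
```

One caveat worth noting: both components of f contain the factor x2, so every point with x2 = 0 is also a root of the system. A different algorithm or starting value may therefore terminate at one of these trivial solutions rather than at (0.6931, 0.8326), so always inspect sol.x.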

Numerical Examples

To use newton, edit f.m so that it also computes the analytic Jacobian, viz.

function [fval,fjac] = f(x)
fval = [x(2)*exp(x(1))-2*x(2); x(1)*x(2)-x(2)^3];
fjac = [x(2)*exp(x(1)) exp(x(1))-2; x(2) x(1)-3*x(2)^2];

At the MATLAB command line, execute

x = [1.0;0.5];
checkjac(@f,x)

This checks the internal consistency of your function file by comparing the analytic derivative to a numerical derivative.
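The same consistency check is easy to replicate in Python; our stand-in for checkjac compares the analytic Jacobian above with a forward-difference estimate:

```python
import numpy as np

def f(x):
    fval = np.array([x[1] * np.exp(x[0]) - 2 * x[1],
                     x[0] * x[1] - x[1] ** 3])
    fjac = np.array([[x[1] * np.exp(x[0]), np.exp(x[0]) - 2],
                     [x[1],                x[0] - 3 * x[1] ** 2]])
    return fval, fjac

def checkjac(f, x, h=1e-7):
    """Largest discrepancy between the analytic Jacobian and a
    forward-difference approximation (in the spirit of CompEcon's checkjac)."""
    fval, fjac = f(x)
    fd = np.empty_like(fjac)
    for j in range(len(x)):
        xj = x.copy()
        xj[j] += h
        fd[:, j] = (f(xj)[0] - fval) / h
    return np.max(np.abs(fjac - fd))

err = checkjac(f, np.array([1.0, 0.5]))
print(err)   # small: the analytic Jacobian matches the numerical one
```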

Numerical Examples

Then, at the MATLAB command line, execute

x = [1.0;0.5];
optset('newton','showiters',1);
[x,fval] = newton(@f,x)

After 5 iterations, this produces x = (0.6931, 0.8326).

[Figure: Convergence paths for the Newton and Broyden methods]

Numerical Examples (Cournot Duopoly)

Consider a market with two firms producing the same good. Firm i's total cost of production is a function C_i(q_i) of the quantity q_i it produces. The market clearing price is a function P(q_1 + q_2) of the total quantity produced by both firms.

Numerical Examples (Cournot Duopoly)

Firm i chooses production q_i so as to maximize its profit P(q_1 + q_2)*q_i − C_i(q_i), taking the other firm's output as given. Thus, in equilibrium, each firm's marginal profit must vanish:

  P(q_1 + q_2) + P'(q_1 + q_2)*q_i − C_i'(q_i) = 0, for i = 1, 2

Numerical Examples (Cournot Duopoly)

Suppose P(q_1 + q_2) = (q_1 + q_2)^(−alpha), each firm's marginal cost is C_i'(q_i) = beta_i*q_i, alpha = 0.6, and beta = (0.6, 0.8). To compute the equilibrium using Broyden's method, open the file exampcournot.m and uncomment

alpha = 0.6;
beta = [0.6 0.8];
P    = @(q) (q(1)+q(2))^(-alpha);
Pder = @(q) (-alpha)*(q(1)+q(2))^(-alpha-1);
f = @(q) [P(q)+Pder(q)*q(1)-beta(1)*q(1);
          P(q)+Pder(q)*q(2)-beta(2)*q(2)];

Here, f computes the marginal profits of both firms.

Numerical Examples (Cournot Duopoly)

Then type the following and execute:

q = [0.2;0.2];
optset('broyden','showiters',1);
q = broyden(f,q)

After 10 iterations, q converges to the equilibrium quantities.
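The same equilibrium can be computed in Python, with the marginal-profit conditions coded exactly as in exampcournot.m; SciPy's root finder is our substitute for CompEcon's broyden here, so the iteration count will differ:

```python
import numpy as np
from scipy.optimize import root

alpha, beta = 0.6, np.array([0.6, 0.8])

def marginal_profit(q):
    Q = q[0] + q[1]
    P = Q ** (-alpha)                  # inverse demand
    Pder = -alpha * Q ** (-alpha - 1)
    return [P + Pder * q[0] - beta[0] * q[0],
            P + Pder * q[1] - beta[1] * q[1]]

sol = root(marginal_profit, [0.2, 0.2])
print(sol.x)   # equilibrium quantities; firm 1 (lower cost) produces more
```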

Practical Issues

- Failure to converge
- Execution speed
- Choosing a solution method

Failure to Converge

In practice, nonlinear equation algorithms can fail to converge for various reasons:
- human error
- bad initial value
- ill-conditioning

Failure to Converge: Human Error

- Math errors: the analyst incorrectly derives the function or its Jacobian
- Coding errors: the analyst incorrectly codes the function or its Jacobian

Such errors are less likely with function iteration and Broyden's method because both are derivative-free: there is no Jacobian to derive or code.

Failure to Converge: Bad Initial Value

- Nonlinear equation algorithms require an initial value
- If the initial value is far from the desired root, the algorithm can diverge or converge to the wrong root
- Theory provides no guidance on how to specify the initial value; the analyst must supply a good guess from knowledge of the model
- If the algorithm diverges, try another initial value
- Well-behaved functions are more robust to the initial value; poorly behaved functions are more sensitive to it

Failure to Converge: Ill-Conditioning

Computing the iteration step in Newton's and Broyden's methods requires the solution of a linear equation involving the Jacobian or its estimate. If the Jacobian or its estimate is ill-conditioned near the solution, the iteration step cannot be computed accurately. Very little can be done about this, and it arises more often than we would like.

Execution Speed

Two factors determine the speed with which a properly coded and initiated algorithm will converge to a solution:
- asymptotic rate of convergence
- computational effort per iteration

Execution Speed

The number of iterations required for a properly coded and initiated algorithm to converge is closely tied to its theoretical asymptotic rate of convergence:
- Function iteration converges at a linear rate (relatively slow)
- Broyden's method converges at a superlinear rate (relatively fast)
- Newton's method converges at a quadratic rate (extremely fast)
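The difference in rates is easy to see on the earlier univariate example, which can be posed both as a fixed-point of g(x) = sqrt(x + 0.2) and as a root of f(x) = x^2 − x − 0.2; a Python sketch (helper names and the common starting value 2.0 are our choices):

```python
import math

def fixpoint(g, x, tol=1e-10, maxit=200):
    """Function iteration (linear rate)."""
    for it in range(1, maxit + 1):
        xold, x = x, g(x)
        if abs(x - xold) < tol:
            return x, it
    return x, maxit

def newton1d(f, fprime, x, tol=1e-10, maxit=200):
    """Newton's method (quadratic rate)."""
    for it in range(1, maxit + 1):
        step = -f(x) / fprime(x)
        x += step
        if abs(step) < tol:
            return x, it
    return x, maxit

# Same solution x* ~ 1.1708, posed both ways, from the same starting value
x_fi, it_fi = fixpoint(lambda x: math.sqrt(x + 0.2), 2.0)
x_nw, it_nw = newton1d(lambda x: x * x - x - 0.2, lambda x: 2 * x - 1, 2.0)
print(it_fi, it_nw)   # function iteration needs many more iterations
```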

[Figure: Rates of convergence when computing a fixed-point using various methods]

Execution Speed

However, the algorithms differ in the computations required per iteration:
- Function iteration requires only a function evaluation
- Broyden's method additionally requires a linear solve
- Newton's method additionally requires a Jacobian evaluation

Thus, a faster rate of convergence typically can be achieved only by investing greater computational effort per iteration. The optimal tradeoff between rate of convergence and computational effort per iteration varies across applications.

Choosing a Solution Method

Concerns about execution speed, however, are often exaggerated. The time the analyst must invest to write and debug code typically matters far more. Derivative-free methods such as function iteration and Broyden's method can be implemented faster in real time, and more reliably, than Newton's method. Newton's method should be used only if:
- the dimension is low or the derivatives are simple,
- other methods have failed to converge, or
- general-purpose, re-usable code is needed