Applied Computational Economics Workshop
Part 3: Nonlinear Equations
Overview
- Introduction
- Function iteration
- Newton's method
- Quasi-Newton methods
- Practical example
- Practical issues
Introduction

Nonlinear equations take one of two forms.

Root-finding problem: Given a function $f: \mathbb{R}^n \to \mathbb{R}^n$, compute an $n$-vector $x^*$, called a root of $f$, such that $f(x^*) = 0$.

Fixed-point problem: Given a function $g: \mathbb{R}^n \to \mathbb{R}^n$, compute an $n$-vector $x^*$, called a fixed-point of $g$, such that $g(x^*) = x^*$.
The two forms of nonlinear equations are equivalent:
- A root of $f$ is a fixed-point of $g(x) = x - f(x)$.
- A fixed-point of $g$ is a root of $f(x) = x - g(x)$.
Nonlinear equations arise naturally in economics:
- multi-commodity market equilibrium models
- multi-person static game models
- unconstrained optimization models

Nonlinear equations also arise indirectly when numerically solving economic models involving functional equations:
- dynamic optimization models
- rational expectations models
- arbitrage pricing models
Function Iteration

Function iteration is an algorithm for computing a fixed-point of a function $g$: guess an initial value $x_0$ and successively form the iterates $x_{k+1} = g(x_k)$ until the iterates converge.
[Figure: Computing Fixed-Point of $g$ Using Function Iteration]
Function Iteration: Example

To compute the fixed-point of $g(x) = \sqrt{x + 0.2}$ using function iteration, one employs the iteration rule

$x_{k+1} = g(x_k) = \sqrt{x_k + 0.2}$

In MATLAB:

x = 0.4;
for it=1:50
    xold = x;
    x = sqrt(x+0.2);
    if abs(x-xold)<1.e-10, break, end
end

After 28 iterations, x converges to 1.1708.
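The same iteration can be sketched in Python; the helper name `function_iteration` and its signature are illustrative choices, not part of the CompEcon Toolbox.

```python
import math

def function_iteration(g, x, tol=1e-10, maxit=50):
    """Iterate x <- g(x) until successive iterates change by less than tol."""
    for it in range(1, maxit + 1):
        xold = x
        x = g(x)
        if abs(x - xold) < tol:
            return x, it
    return x, maxit

# Fixed-point of g(x) = sqrt(x + 0.2), starting from x = 0.4
x, its = function_iteration(lambda x: math.sqrt(x + 0.2), 0.4)
print(x)  # approximately 1.1708
```

At convergence $x$ satisfies $x = \sqrt{x + 0.2}$, i.e. $x^2 - x - 0.2 = 0$, whose positive solution is $(1 + \sqrt{1.8})/2 \approx 1.1708$.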
Function Iteration

When is function iteration guaranteed to converge? By the Contraction Mapping Theorem, if $\|g(x) - g(y)\| \le \delta \|x - y\|$ for some $\delta < 1$ and for all $x$ and $y$, then $g$ possesses a unique fixed-point $x^*$, and function iteration will converge to it from any initial value.
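For the earlier example, $g(x) = \sqrt{x + 0.2}$ has derivative $g'(x) = 1/(2\sqrt{x + 0.2})$, which stays below 1 on, say, $[0.4, 2]$, so $g$ is a contraction there. The Python check below is an illustrative addition, not part of the original slides: it samples difference quotients on that interval.

```python
import math

g = lambda x: math.sqrt(x + 0.2)

# Sample difference quotients |g(x)-g(y)| / |x-y| on [0.4, 2];
# a contraction requires them all to be bounded by some delta < 1.
pts = [0.4 + 0.01 * k for k in range(161)]
ratios = [abs(g(x) - g(y)) / abs(x - y)
          for x in pts for y in pts if x != y]
print(max(ratios))  # well below 1, so function iteration converges here
```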
Newton's Method

Newton's method is an algorithm for computing a root of a function $f$: guess an initial value $x_0$ and successively form the iterates $x_{k+1} = x_k - [f'(x_k)]^{-1} f(x_k)$ until the iterates converge.
Newton's Method: Example

To compute the root of $f(x) = x^4 - 2$ using Newton's method, one employs the iteration rule

$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} = x_k - \frac{x_k^4 - 2}{4 x_k^3}$

In MATLAB:

x = 2.3;
for it=1:50
    s = -(x^4-2)/(4*x^3);
    x = x + s;
    if abs(s)<1.e-10, break, end
end

After 8 iterations, x converges to 1.1892.
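A Python sketch of the same computation (the function name `newton1d` is an illustrative choice):

```python
def newton1d(f, fprime, x, tol=1e-10, maxit=50):
    """Univariate Newton iteration: x <- x - f(x)/f'(x)."""
    for it in range(1, maxit + 1):
        s = -f(x) / fprime(x)
        x = x + s
        if abs(s) < tol:
            return x, it
    return x, maxit

# Root of f(x) = x^4 - 2, starting from x = 2.3
x, its = newton1d(lambda x: x**4 - 2, lambda x: 4 * x**3, 2.3)
print(x)  # approximately 1.1892, i.e. 2**(1/4)
```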
Newton's Method

Newton's method employs a strategy of successive linearization. The strategy calls for the nonlinear function $f$ to be approximated by a sequence of linear functions whose roots are easily computed and, ideally, converge to the root of $f$. In particular, the $(k+1)$th iterate

$x_{k+1} = x_k - [f'(x_k)]^{-1} f(x_k)$

is the root of the first-order Taylor approximation of $f$ around the preceding iterate $x_k$, viz.

$f(x) \approx f(x_k) + f'(x_k)(x - x_k)$
[Figure: Computing Root of $f$ Using Newton's Method]
Newton's Method

When is Newton's method guaranteed to converge? In theory, Newton's method will converge if the initial value is close to a root $x^*$ of $f$ at which $f'(x^*)$ is non-singular. Theory, however, provides no practical definition of "close". Moreover, in practice, Newton's method will fail to converge to $x^*$ if $f'$ is ill-conditioned, i.e., nearly singular.
Quasi-Newton Methods

Quasi-Newton methods replace the Jacobian $f'(x_k)$ in Newton's method with an estimate that is easier to compute. Specifically, quasi-Newton methods use an iteration rule

$x_{k+1} = x_k - A_k^{-1} f(x_k)$

where $A_k$ is an estimate of the Jacobian $f'(x_k)$.
Secant Method

The quasi-Newton method for univariate root-finding problems is called the secant method. The secant method replaces the derivative $f'(x_k)$ in Newton's method with the estimate

$f'(x_k) \approx \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}}$

which leads to the iteration rule

$x_{k+1} = x_k - \frac{x_k - x_{k-1}}{f(x_k) - f(x_{k-1})} f(x_k)$

The secant method is so called because it approximates the function $f$ using secant lines drawn through successive pairs of points on its graph.
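The iteration rule above can be sketched in Python; note that, unlike Newton's method, it needs two starting values and no derivative.

```python
def secant(f, x0, x1, tol=1e-10, maxit=50):
    """Secant iteration: replace f'(x_k) with a finite-difference estimate."""
    f0, f1 = f(x0), f(x1)
    for it in range(1, maxit + 1):
        x2 = x1 - (x1 - x0) / (f1 - f0) * f1
        if abs(x2 - x1) < tol:
            return x2, it
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1, maxit

# Root of f(x) = x^4 - 2, using two nearby starting values
x, its = secant(lambda x: x**4 - 2, 2.3, 2.2)
print(x)  # approximately 1.1892
```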
[Figure: Computing Root of $f$ Using Secant Method]
Broyden's Method

Broyden's method is the most popular multivariate generalization of the univariate secant method. Broyden's method replaces the Jacobian $f'(x_k)$ in Newton's method with an estimate $A_k$ that is updated at each iteration by making the smallest possible change that is consistent with the secant condition. This yields a somewhat complicated iteration rule that is omitted here, but which may be found in many textbooks, including Miranda and Fackler (2002).
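A bare-bones Python sketch of one common variant follows: finite-difference initial Jacobian, rank-one secant update, and no line search (the CompEcon implementation adds safeguards such as backstepping). It is hard-coded to two dimensions so the linear solve can use Cramer's rule without any library; all names are illustrative.

```python
def fd_jacobian(f, x, h=1e-6):
    """Finite-difference estimate of the 2x2 Jacobian of f at x."""
    fx = f(x)
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        xh = list(x)
        xh[j] += h
        fxh = f(xh)
        for i in range(2):
            J[i][j] = (fxh[i] - fx[i]) / h
    return J

def broyden2(f, x, tol=1e-10, maxit=100):
    """Broyden's method for a 2-dimensional root-finding problem."""
    A = fd_jacobian(f, x)  # initial Jacobian estimate
    fx = f(x)
    for it in range(1, maxit + 1):
        # Solve A d = -f(x) by Cramer's rule (2x2 only)
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        d = [(-fx[0] * A[1][1] + fx[1] * A[0][1]) / det,
             (-fx[1] * A[0][0] + fx[0] * A[1][0]) / det]
        x = [x[0] + d[0], x[1] + d[1]]
        fnew = f(x)
        if max(abs(v) for v in fnew) < tol:
            return x, it
        # Rank-one secant update: A <- A + ((df - A d) d') / (d'd)
        df = [fnew[0] - fx[0], fnew[1] - fx[1]]
        Ad = [A[0][0] * d[0] + A[0][1] * d[1],
              A[1][0] * d[0] + A[1][1] * d[1]]
        dd = d[0] * d[0] + d[1] * d[1]
        for i in range(2):
            for j in range(2):
                A[i][j] += (df[i] - Ad[i]) * d[j] / dd
        fx = fnew
    return x, maxit

# Example: intersect the unit circle with the 45-degree line
f = lambda x: [x[0]**2 + x[1]**2 - 1, x[0] - x[1]]
x, its = broyden2(f, [1.0, 0.5])
print(x)  # approximately (0.7071, 0.7071)
```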
Quasi-Newton Methods

When are quasi-Newton methods guaranteed to converge? In theory, a quasi-Newton (e.g., Broyden's) method will converge if the initial value is close to a root $x^*$ of $f$ at which the Jacobian $f'(x^*)$ is non-singular, and if the initial Jacobian estimate is close to $f'(x^*)$. Theory, however, provides no practical definition of "close". Moreover, in practice, Broyden's method will fail to converge to $x^*$ if the Jacobian estimates become ill-conditioned, i.e., nearly singular.
Numerical Examples

The CompEcon Toolbox provides two utilities for computing the root of a function:
- Utility newton uses Newton's method
- Utility broyden uses Broyden's method
Numerical Examples

The calling protocol for newton is

[x,fval] = newton(f,x,varargin)

Input:
- f: function of the form [fval,fjac]=f(x,varargin), where fval and fjac are the function value and the Jacobian value at x, respectively
- x: initial guess for a root of f
- varargin: additional arguments for f (optional)

Output:
- x: a root of f
- fval: value of f at x
Numerical Examples

The calling protocol for broyden is

[x,fval] = broyden(f,x,varargin)

Input:
- f: function of the form fval=f(x,varargin), where fval is the function value at x
- x: initial guess for a root of f
- varargin: additional arguments for f (optional)

Output:
- x: a root of f
- fval: value of f at x
Numerical Examples

Let us compute the root of a function $f: \mathbb{R}^2 \to \mathbb{R}^2$ given by

$f(x_1, x_2) = \begin{bmatrix} x_2 e^{x_1} - 2 x_2 \\ x_1 x_2 - x_2^3 \end{bmatrix}$

To use broyden, first compose a MATLAB function

function fval = f(x)
fval = [x(2)*exp(x(1))-2*x(2); x(1)*x(2)-x(2)^3];

and save it as f.m. Then, on the MATLAB command screen, execute

x = [1.0;0.5];
optset('broyden','showiters',1);
[x,fval] = broyden(@f,x)

After 11 iterations, this produces x = (0.6931, 0.8326).
Numerical Examples

To use newton, edit f.m so that it also computes the analytic Jacobian, viz.

function [fval,fjac] = f(x)
fval = [x(2)*exp(x(1))-2*x(2); x(1)*x(2)-x(2)^3];
fjac = [x(2)*exp(x(1)) exp(x(1))-2; x(2) x(1)-3*x(2)^2];

On the MATLAB command screen, execute

x = [1.0;0.5];
checkjac(@f,x)

This checks the internal consistency of your function file by comparing the analytic derivative to a numerical derivative.
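The same consistency check is easy to sketch in Python; the helper `check_jacobian` below is an illustrative stand-in for CompEcon's checkjac, returning the largest entry-wise discrepancy between the analytic and a forward-difference Jacobian.

```python
import math

def f(x):
    """The example system and its analytic Jacobian."""
    x1, x2 = x
    fval = [x2 * math.exp(x1) - 2 * x2,
            x1 * x2 - x2**3]
    fjac = [[x2 * math.exp(x1), math.exp(x1) - 2],
            [x2, x1 - 3 * x2**2]]
    return fval, fjac

def check_jacobian(f, x, h=1e-6):
    """Largest discrepancy between analytic and forward-difference Jacobian."""
    fval, fjac = f(x)
    err = 0.0
    for j in range(len(x)):
        xh = list(x)
        xh[j] += h
        fh, _ = f(xh)
        for i in range(len(fval)):
            err = max(err, abs((fh[i] - fval[i]) / h - fjac[i][j]))
    return err

print(check_jacobian(f, [1.0, 0.5]))  # small: only finite-difference truncation error
```

If a coding error is introduced into either fval or fjac, the reported discrepancy jumps by many orders of magnitude.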
Numerical Examples

Then, on the MATLAB command screen, execute

x = [1.0;0.5];
optset('newton','showiters',1);
[x,fval] = newton(@f,x)

After 5 iterations, this produces x = (0.6931, 0.8326).
[Figure: Convergence Path for Newton and Broyden Methods]
Numerical Examples (Cournot Duopoly)

Consider a market with two firms producing the same good. Firm $i$'s total cost of production $c_i(q_i)$ is a function of the quantity $q_i$ it produces. The market clearing price $P(q_1 + q_2)$ is a function of the total quantity produced by both firms.
Numerical Examples (Cournot Duopoly)

Firm $i$ chooses production $q_i$ so as to maximize its profit

$\pi_i(q_1, q_2) = P(q_1 + q_2)\, q_i - c_i(q_i)$

taking the other firm's output as given. Thus, in equilibrium,

$P(q_1 + q_2) + P'(q_1 + q_2)\, q_i - c_i'(q_i) = 0$

for $i = 1, 2$.
Numerical Examples (Cournot Duopoly)

Suppose $P(q_1 + q_2) = (q_1 + q_2)^{-\alpha}$, $c_i(q_i) = \tfrac{1}{2}\beta_i q_i^2$, $\alpha = 0.6$, and $\beta = (0.6, 0.8)$. To compute the equilibrium using Broyden's method, open the file exampcournot.m and uncomment

alpha = 0.6;
beta = [0.6 0.8];
P = @(q) (q(1)+q(2))^(-alpha);
Pder = @(q) (-alpha)*(q(1)+q(2))^(-alpha-1);
f = @(q) [P(q)+Pder(q)*q(1)-beta(1)*q(1);
          P(q)+Pder(q)*q(2)-beta(2)*q(2)];

Here, f computes the marginal profits of both firms.
Numerical Examples (Cournot Duopoly)

Then type the following and execute

q = [0.2;0.2];
optset('broyden','showiters',1);
q = broyden(f,q)

After 10 iterations, q converges to the equilibrium quantities.
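The same equilibrium conditions can be solved with any generic root-finder. The self-contained Python sketch below (illustrative, not the CompEcon code) uses Newton's method with a forward-difference Jacobian on the two marginal-profit equations.

```python
alpha = 0.6
beta = [0.6, 0.8]

def P(q):       # inverse demand
    return (q[0] + q[1]) ** (-alpha)

def Pder(q):    # derivative of inverse demand
    return -alpha * (q[0] + q[1]) ** (-alpha - 1)

def marginal_profit(q):
    return [P(q) + Pder(q) * q[0] - beta[0] * q[0],
            P(q) + Pder(q) * q[1] - beta[1] * q[1]]

def newton2(f, x, tol=1e-10, maxit=100, h=1e-7):
    """Newton's method in 2 dimensions with a forward-difference Jacobian."""
    for _ in range(maxit):
        fx = f(x)
        if max(abs(v) for v in fx) < tol:
            return x
        J = [[0.0, 0.0], [0.0, 0.0]]
        for j in range(2):
            xh = list(x)
            xh[j] += h
            fh = f(xh)
            for i in range(2):
                J[i][j] = (fh[i] - fx[i]) / h
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        x = [x[0] - ( J[1][1] * fx[0] - J[0][1] * fx[1]) / det,
             x[1] - (-J[1][0] * fx[0] + J[0][0] * fx[1]) / det]
    return x

q = newton2(marginal_profit, [0.2, 0.2])
print(q)  # equilibrium quantities; q1 > q2 since firm 1 has the lower cost parameter
```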
Practical Issues
- Failure to converge
- Execution speed
- Choosing a solution method
Failure to Converge

In practice, nonlinear equation algorithms can fail to converge for various reasons:
- human error
- bad initial value
- ill-conditioning
Failure to Converge: Human Error

- Math errors: analyst incorrectly derives the function or Jacobian
- Coding errors: analyst incorrectly codes the function or Jacobian

Coding errors are less likely with function iteration and Broyden's method because they are derivative-free.
Failure to Converge: Bad Initial Value

- Nonlinear equation algorithms require initial values.
- If the initial value is far from the desired root, the algorithm can diverge or converge to a wrong root.
- Theory provides no guidance on how to specify the initial value; the analyst must supply a good guess from knowledge of the model.
- If the algorithm diverges, try another initial value.
- Well-behaved functions are more robust to the initial value; poorly behaved functions are more sensitive to it.
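The wrong-root failure mode is easy to illustrate with the earlier example (this demonstration is an addition, not from the original slides): Newton's method on $f(x) = x^4 - 2$ finds the positive root $2^{1/4}$ from a positive start but the negative root $-2^{1/4}$ from a negative start.

```python
def newton_x4(x, tol=1e-10, maxit=50):
    """Newton iteration for f(x) = x**4 - 2."""
    for _ in range(maxit):
        s = -(x**4 - 2) / (4 * x**3)
        x += s
        if abs(s) < tol:
            break
    return x

print(newton_x4(2.3))   # approximately +1.1892
print(newton_x4(-2.3))  # approximately -1.1892: equally valid root, perhaps not the one wanted
```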
Failure to Converge: Ill-Conditioning

Computing the iteration step in Newton's and Broyden's methods requires the solution of a linear equation involving the Jacobian or its estimate. If the Jacobian or its estimate is ill-conditioned near the solution, the iteration step cannot be accurately computed. Very little can be done about this, and it arises more often than we would like.
Execution Speed

Two factors determine the speed with which a properly coded and initiated algorithm will converge to a solution:
- asymptotic rate of convergence
- computational effort per iteration
Execution Speed

The number of iterations required for a properly coded and initiated algorithm to converge is closely tied to its theoretical asymptotic rate of convergence:
- Function iteration converges at a linear rate (relatively slow)
- Broyden's method converges at a superlinear rate (relatively fast)
- Newton's method converges at a quadratic rate (extremely fast)
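These rates are easy to see empirically. The illustrative sketch below counts iterations for the earlier fixed-point problem, solved once by function iteration on $g(x) = \sqrt{x + 0.2}$ and once by Newton's method applied to $f(x) = x - \sqrt{x + 0.2}$, from the same starting value and with the same tolerance.

```python
import math

g = lambda x: math.sqrt(x + 0.2)

def count_function_iteration(x, tol=1e-10, maxit=200):
    for it in range(1, maxit + 1):
        xold, x = x, g(x)
        if abs(x - xold) < tol:
            return it
    return maxit

def count_newton(x, tol=1e-10, maxit=200):
    # f(x) = x - g(x),  f'(x) = 1 - 1 / (2 * sqrt(x + 0.2))
    for it in range(1, maxit + 1):
        s = -(x - g(x)) / (1 - 1 / (2 * math.sqrt(x + 0.2)))
        x += s
        if abs(s) < tol:
            return it
    return maxit

print(count_function_iteration(0.4))  # linear rate: roughly 28 iterations
print(count_newton(0.4))              # quadratic rate: only a handful of iterations
```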
[Figure: Rate of Convergence When Computing a Fixed-Point Using Various Methods]
Execution Speed

However, algorithms differ in the computations required per iteration:
- Function iteration requires a function evaluation
- Broyden's method additionally requires a linear solve
- Newton's method additionally requires a Jacobian evaluation

Thus, a faster rate of convergence typically can be achieved only by investing greater computational effort per iteration. The optimal tradeoff between rate of convergence and computational effort per iteration varies across applications.
Choosing a Solution Method

Concerns about execution speed, however, are often exaggerated: the time that must be invested by the analyst to write and debug code typically is far more important. Derivative-free methods such as function iteration and Broyden's method can be implemented faster in real time, and more reliably, than Newton's method. Newton's method should be used only if:
- the dimension is low or the derivatives are simple,
- other methods have failed to converge, or
- general purpose, re-usable code is needed