Numerical Dynamic Programming


1. Numerical Dynamic Programming

Kenneth L. Judd
Hoover Institution

Prepared for ICE05, July 20

2. Discrete-Time Dynamic Programming

Objective:
$$ E\left\{ \sum_{t=1}^{T} \pi(x_t, u_t, t) + W(x_{T+1}) \right\}, \qquad (12.1.1) $$
- $X$: set of states
- $D$: the set of controls
- $\pi(x,u,t)$: payoff in period $t$ if $x \in X$ at the beginning of period $t$ and control $u \in D$ is applied in period $t$
- $D(x,t) \subseteq D$: controls which are feasible in state $x$ at time $t$
- $F(A;\, x,u,t)$: probability that $x_{t+1} \in A \subseteq X$ conditional on the time-$t$ control and state

Value function:
$$ V(x,t) \equiv \sup_{\mathcal{U}(x,t)} E\left\{ \sum_{s=t}^{T} \pi(x_s, u_s, s) + W(x_{T+1}) \,\Big|\, x_t = x \right\}. \qquad (12.1.2) $$

Bellman equation:
$$ V(x,t) = \sup_{u \in D(x,t)} \pi(x,u,t) + E\{ V(x_{t+1}, t+1) \mid x_t = x,\ u_t = u \}. \qquad (12.1.3) $$

Existence: boundedness of $\pi$ is sufficient.

3. Autonomous, Infinite-Horizon Problem

Objective:
$$ \max_{u_t}\ E\left\{ \sum_{t=0}^{\infty} \beta^t \pi(x_t, u_t) \right\} $$
- $X$: set of states
- $D$: the set of controls
- $D(x) \subseteq D$: controls which are feasible in state $x$
- $\pi(x,u)$: payoff in period $t$ if $x \in X$ at the beginning of period $t$ and control $u \in D$ is applied in period $t$
- $F(A;\, x,u)$: probability that $x^+ \in A \subseteq X$ conditional on current control $u$ and current state $x$

Value function definition: if $\mathcal{U}(x)$ is the set of all feasible strategies starting at $x$, then
$$ V(x) \equiv \sup_{\mathcal{U}(x)} E\left\{ \sum_{t=0}^{\infty} \beta^t \pi(x_t, u_t) \,\Big|\, x_0 = x \right\}. \qquad (12.1.8) $$

4. Bellman Equation

Bellman equation for $V(x)$:
$$ V(x) = \sup_{u \in D(x)} \pi(x,u) + \beta E\{ V(x^+) \mid x, u \} \equiv (TV)(x). \qquad (12.1.9) $$

The optimal policy function, $U(x)$, if it exists, is defined by
$$ U(x) \in \arg\max_{u \in D(x)} \pi(x,u) + \beta E\{ V(x^+) \mid x, u \}. $$

Standard existence theorem:

Theorem 1. If $X$ is compact, $\beta < 1$, and $\pi$ is bounded above and below, then the map
$$ (TV)(x) = \sup_{u \in D(x)} \pi(x,u) + \beta E\{ V(x^+) \mid x, u \} $$
is monotone in $V$, is a contraction mapping with modulus $\beta$ in the space of bounded functions, and has a unique fixed point.

5. Deterministic Growth Example

Problem:
$$ V(k_0) = \max_{c_t} \sum_{t=0}^{\infty} \beta^t u(c_t) $$
$$ k_{t+1} = F(k_t) - c_t, \qquad k_0 \text{ given.} $$

Euler equation:
$$ u'(c_t) = \beta u'(c_{t+1}) F'(k_{t+1}) $$

Bellman equation:
$$ V(k) = \max_c\ u(c) + \beta V(F(k) - c). $$

The solution is a policy function $C(k)$ and a value function $V(k)$ satisfying
$$ 0 = u'(C(k))\, F'(k) - V'(k) $$
$$ V(k) = u(C(k)) + \beta V(F(k) - C(k)) $$

The second equation defines the value of an arbitrary policy function $C(k)$, not just the optimal $C(k)$. Together, the pair expresses the value function given a policy, and a first-order condition for optimality.

6. Stochastic Growth

Accumulation problem:
$$ V(k, \theta) = \max_{c_t}\ E\left\{ \sum_{t=0}^{\infty} \beta^t u(c_t) \right\} $$
$$ k_{t+1} = F(k_t, \theta_t) - c_t $$
$$ \theta_{t+1} = g(\theta_t, \varepsilon_t) $$
$$ \varepsilon_t:\ \text{i.i.d. random variable}, \qquad k_0 = k,\ \theta_0 = \theta. $$

State variables:
- $k$: productive capital stock, endogenous
- $\theta$: productivity state, exogenous

The dynamic programming formulation is
$$ V(k, \theta) = \max_c\ u(c) + \beta E\{ V(F(k,\theta) - c,\ \theta^+) \mid \theta \}, \qquad \theta^+ = g(\theta, \varepsilon). $$

The control law $c = C(k,\theta)$ satisfies the first-order condition
$$ 0 = u_c(C(k,\theta)) - \beta E\{ u_c(C(k^+, \theta^+))\, F_k(k^+, \theta^+) \mid \theta \}, $$
where $k^+ \equiv F(k,\theta) - C(k,\theta)$.

7. Objectives of this Lecture

- Formulate dynamic programming problems in computationally useful ways
- Describe key algorithms
  - Value function iteration
  - Policy iteration
  - Gauss-Seidel methods
  - Linear programming approach
- Describe approaches to continuous-state problems
- Point to key numerical steps
  - Approximation of value functions
  - Numerical integration methods
  - Maximization methods
- Describe how to combine alternative numerical techniques and algorithms

8. Discrete State Space Problems

State space $X = \{x_i,\ i = 1, \ldots, n\}$; controls $D = \{u_i \mid i = 1, \ldots, m\}$;
$$ q_{ij}^t(u) = \Pr(x_{t+1} = x_j \mid x_t = x_i,\ u_t = u), $$
and $Q^t(u) = \left( q_{ij}^t(u) \right)_{i,j}$ is the Markov transition matrix at $t$ if $u_t = u$.

Value function iteration:
- Terminal value: $V_i^{T+1} = W(x_i)$, $i = 1, \ldots, n$.
- Bellman equation: the time-$t$ value function is
$$ V_i^t = \max_u \left[ \pi(x_i, u, t) + \beta \sum_{j=1}^{n} q_{ij}^t(u)\, V_j^{t+1} \right], \quad i = 1, \ldots, n, $$
which defines value function iteration. Value function iteration is the only choice for finite-horizon problems.

Infinite-horizon problems:
- The Bellman equation is a set of equations for the $V_i$ values:
$$ V_i = \max_u\ \pi(x_i, u) + \beta \sum_{j=1}^{n} q_{ij}(u)\, V_j, \quad i = 1, \ldots, n. $$
- Value function iteration is now
$$ V_i^{k+1} = \max_u\ \pi(x_i, u) + \beta \sum_{j=1}^{n} q_{ij}(u)\, V_j^k, \quad i = 1, \ldots, n. $$
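
As a concrete illustration, here is a minimal Python sketch of this iteration; the random payoff array `pi[i, u]` and transition array `q[u, i, j]` are hypothetical stand-ins for a real model:

```python
import numpy as np

# Hypothetical finite model: n states, m controls, discount factor beta.
n, m, beta = 5, 3, 0.95
rng = np.random.default_rng(0)
pi = rng.uniform(size=(n, m))            # pi[i, u] = payoff in state i under control u
q = rng.uniform(size=(m, n, n))
q /= q.sum(axis=2, keepdims=True)        # q[u, i, j] = Pr(x' = x_j | x = x_i, u)

V = np.zeros(n)                          # arbitrary initial guess V^0
for k in range(1000):
    # Bellman operator: V^{k+1}_i = max_u [ pi(i,u) + beta * sum_j q_ij(u) V^k_j ]
    V_new = (pi + beta * np.einsum('uij,j->iu', q, V)).max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new
policy = (pi + beta * np.einsum('uij,j->iu', q, V)).argmax(axis=1)
```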

9. Value Function Iteration Algorithm

Can use value function iteration with an arbitrary $V_i^0$. The error at iterate $k$ is given by the contraction mapping property:
$$ \| V^k - V^* \| \le \frac{1}{1 - \beta}\, \| V^{k+1} - V^k \|. $$

Algorithm 12.1: Value Function Iteration Algorithm

Objective: Solve the Bellman equation, (12.3.4).
- Step 0: Make initial guess $V^0$; choose stopping criterion $\varepsilon > 0$.
- Step 1: For $i = 1, \ldots, n$, compute $V_i^{\ell+1} = \max_{u \in D}\ \pi(x_i, u) + \beta \sum_{j=1}^{n} q_{ij}(u)\, V_j^{\ell}$.
- Step 2: If $\| V^{\ell+1} - V^{\ell} \| < \varepsilon$, go to Step 3; else go to Step 1.
- Step 3: Compute the final solution: take $U^*$ to be the optimal policy given $V^{\ell+1}$, set $P^*_i = \pi(x_i, U^*_i)$, $i = 1, \ldots, n$, and $V^* = (I - \beta Q^{U^*})^{-1} P^*$, and STOP.

10. Policy Iteration (a.k.a. Howard Improvement)

Value function iteration is a slow process:
- Linear convergence at rate $\beta$
- Convergence is particularly slow if $\beta$ is close to 1

Policy iteration is faster:
- Current guess: $V_i^k$, $i = 1, \ldots, n$.
- Iteration: compute the optimal policy today if $V^k$ is the value tomorrow:
$$ U_i^{k+1} = \arg\max_u\ \pi(x_i, u) + \beta \sum_{j=1}^{n} q_{ij}(u)\, V_j^k, \quad i = 1, \ldots, n. $$
- Compute the value function if the policy $U^{k+1}$ is used forever, which is the solution to the linear system
$$ V_i^{k+1} = \pi\!\left( x_i, U_i^{k+1} \right) + \beta \sum_{j=1}^{n} q_{ij}(U_i^{k+1})\, V_j^{k+1}, \quad i = 1, \ldots, n. $$

Comments:
- Policy iteration depends only on monotonicity
- Policy iteration is faster than value function iteration
- If the initial guess is above or below the solution, then the policy iterate lies between the truth and the value function iterate
- Works well even for $\beta$ close to 1
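
A minimal sketch of policy iteration on the same hypothetical finite problem as above; each pass does one improvement step and then solves the linear evaluation system exactly:

```python
import numpy as np

n, m, beta = 5, 3, 0.95
rng = np.random.default_rng(0)
pi = rng.uniform(size=(n, m))            # hypothetical payoffs pi[i, u]
q = rng.uniform(size=(m, n, n))
q /= q.sum(axis=2, keepdims=True)        # hypothetical transitions q[u, i, j]

V = np.zeros(n)
for k in range(100):
    # Policy improvement: best control today if V is the value tomorrow.
    U = (pi + beta * np.einsum('uij,j->iu', q, V)).argmax(axis=1)
    # Policy evaluation: value of using U forever solves the linear system
    #   V = pi_U + beta * Q_U V   <=>   (I - beta * Q_U) V = pi_U
    Q_U = q[U, np.arange(n), :]          # transition matrix under policy U
    pi_U = pi[np.arange(n), U]
    V_new = np.linalg.solve(np.eye(n) - beta * Q_U, pi_U)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new
```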

11. Gaussian Acceleration Methods for Infinite-Horizon Models

Key observation: the Bellman equation is a simultaneous set of equations:
$$ V_i = \max_u\ \pi(x_i, u) + \beta \sum_{j=1}^{n} q_{ij}(u)\, V_j, \quad i = 1, \ldots, n. $$

Idea: treat the problem as a large system of nonlinear equations.

Value function iteration is the pre-Gauss-Jacobi iteration
$$ V_i^{k+1} = \max_u\ \pi(x_i, u) + \beta \sum_{j=1}^{n} q_{ij}(u)\, V_j^k, \quad i = 1, \ldots, n. $$

True Gauss-Jacobi is
$$ V_i^{k+1} = \max_u\ \frac{ \pi(x_i, u) + \beta \sum_{j \ne i} q_{ij}(u)\, V_j^k }{ 1 - \beta q_{ii}(u) }, \quad i = 1, \ldots, n. $$

Pre-Gauss-Seidel iteration: Gauss-Seidel alternatives use new information immediately. Suppose we have $V_i^{\ell}$. At each $x_i$, given $V_j^{\ell+1}$ for $j < i$, compute $V_i^{\ell+1}$ in a pre-Gauss-Seidel fashion:
$$ V_i^{\ell+1} = \max_u\ \pi(x_i, u) + \beta \sum_{j < i} q_{ij}(u)\, V_j^{\ell+1} + \beta \sum_{j \ge i} q_{ij}(u)\, V_j^{\ell}. \qquad (12.4.7) $$
Iterate (12.4.7) for $i = 1, \ldots, n$.

12. Gauss-Seidel Iteration

Suppose we have $V_i^{\ell}$. If the optimal control at state $i$ is $u$, then the Gauss-Seidel iterate would be
$$ V_i^{\ell+1} = \frac{ \pi(x_i, u) + \beta \sum_{j < i} q_{ij}(u)\, V_j^{\ell+1} + \beta \sum_{j > i} q_{ij}(u)\, V_j^{\ell} }{ 1 - \beta q_{ii}(u) }. $$

Gauss-Seidel: at each $x_i$, given $V_j^{\ell+1}$ for $j < i$, compute $V_i^{\ell+1}$:
$$ V_i^{\ell+1} = \max_u\ \frac{ \pi(x_i, u) + \beta \sum_{j < i} q_{ij}(u)\, V_j^{\ell+1} + \beta \sum_{j > i} q_{ij}(u)\, V_j^{\ell} }{ 1 - \beta q_{ii}(u) }. $$
Iterate this for $i = 1, \ldots, n$.

Gauss-Seidel iteration, better notation: there is no reason to keep track of $\ell$, the number of iterations. At each $x_i$,
$$ V_i \leftarrow \max_u\ \frac{ \pi(x_i, u) + \beta \sum_{j < i} q_{ij}(u)\, V_j + \beta \sum_{j > i} q_{ij}(u)\, V_j }{ 1 - \beta q_{ii}(u) }. $$
Iterate this for $i = 1, \ldots, n$, then $1, \ldots$, etc.
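
A minimal sketch of this Gauss-Seidel sweep, including the $1/(1 - \beta q_{ii}(u))$ correction; the model arrays are again hypothetical:

```python
import numpy as np

n, m, beta = 5, 3, 0.95
rng = np.random.default_rng(0)
pi = rng.uniform(size=(n, m))            # hypothetical payoffs pi[i, u]
q = rng.uniform(size=(m, n, n))
q /= q.sum(axis=2, keepdims=True)        # hypothetical transitions q[u, i, j]

V = np.zeros(n)
for sweep in range(500):
    diff = 0.0
    for i in range(n):                   # V_j for j < i has already been updated
        vals = np.empty(m)
        for u in range(m):
            off_diag = q[u, i, :] @ V - q[u, i, i] * V[i]
            vals[u] = (pi[i, u] + beta * off_diag) / (1.0 - beta * q[u, i, i])
        v_new = vals.max()
        diff = max(diff, abs(v_new - V[i]))
        V[i] = v_new                     # new information is used immediately
    if diff < 1e-8:
        break
```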

13. Upwind Gauss-Seidel

The Gauss-Seidel methods in (12.4.7) and (12.4.8):
- Sensitive to the ordering of the states
- Need to find good ordering schemes to enhance convergence

Example: two states, $x_1$ and $x_2$, and two controls, $u_1$ and $u_2$; $u_i$ causes the state to move to $x_i$, $i = 1, 2$. Payoffs:
$$ \pi(x_1, u_1) = -1, \quad \pi(x_1, u_2) = 0, \quad \pi(x_2, u_1) = 0, \quad \pi(x_2, u_2) = 1; \qquad \beta = 0.9. \qquad (12.4.9) $$

Solution:
- Optimal policy: always choose $u_2$, moving to $x_2$
- Value function: $V(x_1) = 9$, $V(x_2) = 10$
- $x_2$ is the unique steady state, and is stable

Value iteration with $V^0(x_1) = V^0(x_2) = 0$ converges linearly:
$$ V^1(x_1) = 0, \quad V^1(x_2) = 1, \quad U^1(x_1) = u_2, \quad U^1(x_2) = u_2, $$
$$ V^2(x_1) = 0.9, \quad V^2(x_2) = 1.9, \quad U^2(x_1) = u_2, \quad U^2(x_2) = u_2, $$
$$ V^3(x_1) = 1.71, \quad V^3(x_2) = 2.71, \quad U^3(x_1) = u_2, \quad U^3(x_2) = u_2, \ldots $$

Policy iteration converges after two iterations:
$$ V^1(x_1) = 0, \quad V^1(x_2) = 1, \quad U^1(x_1) = u_2, \quad U^1(x_2) = u_2, $$
$$ V^2(x_1) = 9, \quad V^2(x_2) = 10, \quad U^2(x_1) = u_2, \quad U^2(x_2) = u_2. $$

14. Upwind Gauss-Seidel (continued)

The value function at absorbing states is trivial to compute. Suppose $s$ is an absorbing state with control $u$:
$$ V(s) = \frac{\pi(s, u)}{1 - \beta}. $$
With $V(s)$ in hand we can compute $V(s')$ for any $s'$ that sends the system to $s$:
$$ V(s') = \pi(s', u) + \beta V(s). $$
With $V(s')$, we can compute the values of states $s''$ that send the system to $s'$; etc.

15. Alternating Sweep

It may be difficult to find the proper order. Idea: alternate between two sweeps with different directions:
$$ W = V^k, $$
$$ W_i = \max_u\ \pi(x_i, u) + \beta \sum_{j=1}^{n} q_{ij}(u)\, W_j, \quad i = 1, 2, 3, \ldots, n, $$
$$ W_i = \max_u\ \pi(x_i, u) + \beta \sum_{j=1}^{n} q_{ij}(u)\, W_j, \quad i = n, n-1, \ldots, 1, $$
$$ V^{k+1} = W. $$

This will always work well in one-dimensional problems, since the state moves either right or left, and the alternating sweep will exploit this half of the time. In two dimensions, there may still be a natural ordering to be exploited.

Simulated Upwind Gauss-Seidel

It may be difficult to find the proper order in higher dimensions. Idea: simulate using the latest policy function to find the downwind direction:
- Simulate to get an example path $x^1, x^2, x^3, x^4, \ldots, x^m$
- Execute Gauss-Seidel with the states in the order $x^m, x^{m-1}, x^{m-2}, \ldots, x^1$

16. Linear Programming Approach

If $D$ is finite, we can reformulate dynamic programming as a linear programming problem. (12.3.4) is equivalent to the linear program
$$ \min_{V_i}\ \sum_{i=1}^{n} V_i $$
$$ \text{s.t. } V_i \ge \pi(x_i, u) + \beta \sum_{j=1}^{n} q_{ij}(u)\, V_j, \quad \forall i,\ \forall u \in D. $$

Computational considerations:
- This may be a large problem
- Trick and Zin (1997) pursued an acceleration approach with success
- The OR literature did not favor this approach, but recent work by Daniela Pucci de Farias and Ben Van Roy has revived interest
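
A minimal sketch of this LP using scipy.optimize.linprog; each (state, control) pair contributes one inequality, rewritten in the $A_{ub} V \le b_{ub}$ form that linprog expects. The model arrays are the same hypothetical ones as in the earlier sketches:

```python
import numpy as np
from scipy.optimize import linprog

n, m, beta = 5, 3, 0.95
rng = np.random.default_rng(0)
pi = rng.uniform(size=(n, m))            # hypothetical payoffs pi[i, u]
q = rng.uniform(size=(m, n, n))
q /= q.sum(axis=2, keepdims=True)        # hypothetical transitions q[u, i, j]

# Rewrite V_i >= pi(x_i,u) + beta * sum_j q_ij(u) V_j as
#   beta * q_i.(u) @ V - V_i <= -pi(x_i, u).
A_ub, b_ub = [], []
for i in range(n):
    for u in range(m):
        row = beta * q[u, i, :].copy()
        row[i] -= 1.0
        A_ub.append(row)
        b_ub.append(-pi[i, u])

res = linprog(c=np.ones(n), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n)   # V_i is free, not nonnegative
V = res.x                                  # solves the Bellman equation
```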

17. Continuous States: Discretization

Method:
- Replace the continuous $X$ with a finite $\hat{X} = \{x_i,\ i = 1, \ldots, n\} \subset X$
- Proceed with a finite-state method

Problems:
- Sometimes one must alter the space of controls to assure landing on an $x$ in $\hat{X}$
- A fine discretization is often necessary to get accurate approximations

18. Continuous States: Linear-Quadratic Dynamic Programming

Problem:
$$ \max_{u_t}\ \sum_{t=0}^{T} \beta^t \left( \tfrac{1}{2} x_t^\top Q_t x_t + u_t^\top R_t x_t + \tfrac{1}{2} u_t^\top S_t u_t \right) + \tfrac{1}{2} x_{T+1}^\top W_{T+1} x_{T+1} \qquad (12.6.1) $$
$$ x_{t+1} = A_t x_t + B_t u_t. $$

Bellman equation:
$$ V(x, t) = \max_{u_t}\ \tfrac{1}{2} x^\top Q_t x + u_t^\top R_t x + \tfrac{1}{2} u_t^\top S_t u_t + \beta V(A_t x + B_t u_t,\ t+1). \qquad (12.6.2) $$

Finite horizon. Key fact: we know the solution is quadratic, so we solve for the unknown coefficients. The guess $V(x, t) = \tfrac{1}{2} x^\top W_t x$ implies the first-order condition
$$ 0 = S_t u_t + R_t x + \beta B_t^\top W_{t+1} (A_t x + B_t u_t). $$
The first-order condition implies the time-$t$ control law
$$ u_t = -(S_t + \beta B_t^\top W_{t+1} B_t)^{-1} (R_t + \beta B_t^\top W_{t+1} A_t)\, x \equiv U_t x. \qquad (12.6.3) $$
Substitution into the Bellman equation implies the Riccati equation for $W_t$:
$$ W_t = Q_t + \beta A_t^\top W_{t+1} A_t + (\beta B_t^\top W_{t+1} A_t + R_t)^\top U_t. \qquad (12.6.4) $$
The value function method iterates (12.6.4) beginning with the known matrix of coefficients $W_{T+1}$.

19. Autonomous, Infinite-Horizon Case

Assume $R_t = R$, $Q_t = Q$, $S_t = S$, $A_t = A$, and $B_t = B$. The guess $V(x) \equiv \tfrac{1}{2} x^\top W x$ implies the algebraic Riccati equation
$$ W = Q + \beta A^\top W A - (\beta B^\top W A + R)^\top (S + \beta B^\top W B)^{-1} (\beta B^\top W A + R). \qquad (12.6.5) $$

Two convergent procedures:

Value function iteration: $W^0$ a negative definite initial guess;
$$ W^{k+1} = Q + \beta A^\top W^k A - (\beta B^\top W^k A + R)^\top (S + \beta B^\top W^k B)^{-1} (\beta B^\top W^k A + R). \qquad (12.6.6) $$

Policy function iteration: $W^0$ an initial guess;
$$ U_{i+1} = -(S + \beta B^\top W^i B)^{-1} (R + \beta B^\top W^i A): \ \text{optimal policy for } W^i, $$
and $W^{i+1}$, the value of using $U_{i+1}$ forever, solves the linear (Lyapunov) equation
$$ W^{i+1} = Q + U_{i+1}^\top R + R^\top U_{i+1} + U_{i+1}^\top S\, U_{i+1} + \beta (A + B U_{i+1})^\top W^{i+1} (A + B U_{i+1}). $$
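
A minimal sketch of iterating (12.6.6) on a hypothetical scalar example; the values of $Q$, $S$ are negative (assumed here) so that the maximization is concave:

```python
import numpy as np

# Hypothetical 1x1 LQ problem: maximize sum beta^t (x'Qx/2 + u'Rx + u'Su/2).
beta = 0.95
A = np.array([[1.0]]); B = np.array([[1.0]])
Q = np.array([[-1.0]]); R = np.array([[0.0]]); S = np.array([[-0.5]])

W = -np.eye(1)                           # negative definite initial guess
for k in range(10_000):
    M = beta * B.T @ W @ A + R
    # Riccati update (12.6.6): W <- Q + beta A'WA - M' (S + beta B'WB)^{-1} M
    W_new = Q + beta * A.T @ W @ A - M.T @ np.linalg.solve(S + beta * B.T @ W @ B, M)
    if np.max(np.abs(W_new - W)) < 1e-12:
        W = W_new
        break
    W = W_new

# Stationary control law u = U x from (12.6.3) with constant matrices.
U = -np.linalg.solve(S + beta * B.T @ W @ B, R + beta * B.T @ W @ A)
```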

20. Lessons

- We used a functional form to solve the dynamic programming problem
- We solved for unknown coefficients
- We did not restrict either the state or the control set
- Can we do this in general?

Continuous Methods for Continuous-State Problems

Basic Bellman equation:
$$ V(x) = \max_{u \in D(x)}\ \pi(u, x) + \beta E\{ V(x^+) \mid x, u \} \equiv (TV)(x). \qquad (12.7.1) $$

- Discretization essentially approximates $V$ with a step function
- Approximation theory provides better methods to approximate continuous functions

General task:
- Find a good approximation for $V$
- Identify parameters

21. Parametric Approach: Approximating $V(x)$

Choose a finite-dimensional parameterization and a finite number of states:
$$ V(x) \doteq \hat{V}(x; a), \quad a \in \mathbb{R}^m, \qquad (12.7.2) $$
$$ X = \{x_1, x_2, \ldots, x_n\}. \qquad (12.7.3) $$

Examples:
- polynomials with coefficients $a$ and collocation points $X$
- splines with coefficients $a$ and uniform nodes $X$
- rational functions with parameters $a$ and nodes $X$
- neural networks
- specially designed functional forms

Objective: find coefficients $a \in \mathbb{R}^m$ such that $\hat{V}(x; a)$ approximately satisfies the Bellman equation.

Data for approximating $V(x)$: conventional methods just generate data on the $V(x_j)$:
$$ v_j = \max_{u \in D(x_j)}\ \pi(u, x_j) + \beta \int \hat{V}(x^+; a)\, dF(x^+ \mid x_j, u). \qquad (12.7.5) $$

Envelope theorem: if the solution $u$ is interior,
$$ v_j' = \pi_x(u, x_j) + \beta \int \hat{V}(x^+; a)\, dF_x(x^+ \mid x_j, u); $$
if the solution $u$ is on the boundary,
$$ v_j' = \mu + \pi_x(u, x_j) + \beta \int \hat{V}(x^+; a)\, dF_x(x^+ \mid x_j, u), $$
where $\mu$ is a Kuhn-Tucker multiplier.

Since computing $v_j'$ is cheap, we should include it in the data. We now review approximation methods.

22. Approximation Methods

General objective: given data about a function $f(x)$ (which is difficult to compute), construct a simpler function $g(x)$ that approximates $f(x)$.

Questions:
- What data should be produced and used?
- What family of simpler functions should be used?
- What notion of approximation do we use?
- How good can the approximation be?
- How simple can a good approximation be?

Comparisons with statistical regression:
- Both approximate an unknown function
- Both use a finite amount of data
- Statistical data is noisy; we assume here that data errors are small
- Nature produces data for statistical analysis; we produce the data in function approximation
- Our approximation methods are like experimental design with very small experimental error

23. Types of Approximation Methods

Interpolation approach: find a function from an $n$-dimensional family of functions which exactly fits $n$ data items.

Lagrange polynomial interpolation:
- Data: $(x_i, y_i)$, $i = 1, \ldots, n$.
- Objective: find a polynomial of degree $n-1$, $p_n(x)$, which agrees with the data, i.e., $y_i = p_n(x_i)$, $i = 1, \ldots, n$.
- Result: if the $x_i$ are distinct, there is a unique interpolating polynomial.

Hermite polynomial interpolation:
- Data: $(x_i, y_i, y_i')$, $i = 1, \ldots, n$.
- Objective: find a polynomial of degree $2n-1$, $p(x)$, which agrees with the data, i.e., $y_i = p(x_i)$ and $y_i' = p'(x_i)$, $i = 1, \ldots, n$.
- Result: if the $x_i$ are distinct, there is a unique interpolating polynomial.

Least squares approximation:
- Data: a function, $f(x)$.
- Objective: find a function $g(x)$ from a class $G$ that best approximates $f(x)$, i.e.,
$$ g = \arg\min_{g \in G} \| f - g \|^2. $$

24. Chebyshev Polynomials

An example of orthogonal polynomials:
- $[a, b] = [-1, 1]$
- $w(x) = (1 - x^2)^{-1/2}$
- $T_n(x) = \cos(n \cos^{-1} x)$

Recurrence formula:
$$ T_0(x) = 1, \quad T_1(x) = x, \quad T_{n+1}(x) = 2x\, T_n(x) - T_{n-1}(x). $$

[Figure 1: the first few Chebyshev polynomials.]

25. General Intervals

Few problems have the specific interval used in the definition of Chebyshev polynomials. Map the compact interval $[a, b]$ to $[-1, 1]$ by
$$ y = -1 + 2\,\frac{x - a}{b - a}; $$
then
$$ \varphi_i(x) = T_i\!\left( -1 + 2\,\frac{x - a}{b - a} \right) $$
are the Chebyshev polynomials adapted to $[a, b]$.

Regression:
- Data: $(x_i, y_i)$, $i = 1, \ldots, n$.
- Objective: find $\beta \in \mathbb{R}^m$, $m \le n$, with $y_i \approx f(x_i; \beta)$, $i = 1, \ldots, n$.
- Least squares regression:
$$ \min_{\beta \in \mathbb{R}^m} \sum_i \left( y_i - f(x_i; \beta) \right)^2. $$

Chebyshev regression:
- Data: $(x_i, y_i)$, $i = 1, \ldots, n > m$, where the $x_i$ are the $n$ zeros of $T_n(x)$ adapted to $[a, b]$.

Chebyshev interpolation:
- Data: $(x_i, y_i)$, $i = 1, \ldots, n = m$, where the $x_i$ are the $n$ zeros of $T_n(x)$ adapted to $[a, b]$.
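
A minimal sketch of Chebyshev interpolation on $[a, b]$: the coefficients come from the discrete orthogonality of the $T_k$ at the zeros of $T_n$, and the logarithm is a hypothetical test function:

```python
import numpy as np

a, b, n = 0.5, 2.0, 10
f = np.log                                   # hypothetical function to approximate

j = np.arange(1, n + 1)
z = -np.cos((2 * j - 1) * np.pi / (2 * n))   # zeros of T_n on [-1, 1], ascending
x = a + (z + 1) * (b - a) / 2                # nodes adapted to [a, b]
y = f(x)

# Chebyshev coefficients via discrete orthogonality of T_k at the zeros of T_n.
k = np.arange(n)
T = np.cos(np.outer(np.arccos(z), k))        # T[i, k] = T_k(z_i)
c = 2 * T.T @ y / n
c[0] /= 2                                    # the constant term gets weight 1/n

def cheb_eval(xx):
    zz = 2 * (xx - a) / (b - a) - 1          # map back to [-1, 1]
    return np.cos(np.outer(np.arccos(zz), k)) @ c

xx = np.linspace(a, b, 7)
print(np.max(np.abs(cheb_eval(xx) - f(xx))))  # small interpolation error
```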

26. Minmax Approximation

Data: $(x_i, y_i)$, $i = 1, \ldots, n$. Objective: $L^\infty$ fit,
$$ \min_{\beta \in \mathbb{R}^m} \max_i \| y_i - f(x_i; \beta) \|. $$
Problem: difficult to compute.

Chebyshev minmax property:

Theorem 2. Suppose $f : [a, b] \to \mathbb{R}$ is $C^k$ for some $k \ge 1$, and let $I_n$ be the degree-$n$ polynomial interpolation of $f$ based at the zeros of $T_n(x)$ adapted to $[a, b]$. Then
$$ \| f - I_n \|_\infty \le \left( \frac{2}{\pi} \log(n+1) + 1 \right) \frac{(n-k)!}{n!} \left( \frac{\pi}{2} \right)^k \left( \frac{b-a}{2} \right)^k \| f^{(k)} \|_\infty. $$

Chebyshev interpolation:
- converges in $L^\infty$
- essentially achieves minmax approximation
- is easy to compute
- does not approximate $f'$

27. Splines

Definition 3. A function $s(x)$ on $[a, b]$ is a spline of order $n$ iff
1. $s$ is $C^{n-2}$ on $[a, b]$, and
2. there is a grid of points (called nodes) $a = x_0 < x_1 < \cdots < x_m = b$ such that $s(x)$ is a polynomial of degree $n-1$ on each subinterval $[x_i, x_{i+1}]$, $i = 0, \ldots, m-1$.

Note: an order-2 spline is the piecewise linear interpolant.

Cubic splines:
- Lagrange data set: $\{(x_i, y_i) \mid i = 0, \ldots, n\}$.
- Nodes: the $x_i$ are the nodes of the spline.
- Functional form: $s(x) = a_i + b_i x + c_i x^2 + d_i x^3$ on $[x_{i-1}, x_i]$.
- Unknowns: $4n$ unknown coefficients, $a_i, b_i, c_i, d_i$, $i = 1, \ldots, n$.

28. Conditions

$2n$ interpolation and continuity conditions:
$$ y_i = a_i + b_i x_i + c_i x_i^2 + d_i x_i^3, \quad i = 1, \ldots, n, $$
$$ y_i = a_{i+1} + b_{i+1} x_i + c_{i+1} x_i^2 + d_{i+1} x_i^3, \quad i = 0, \ldots, n-1. $$

$2n - 2$ conditions from $C^2$ at the interior nodes: for $i = 1, \ldots, n-1$,
$$ b_i + 2 c_i x_i + 3 d_i x_i^2 = b_{i+1} + 2 c_{i+1} x_i + 3 d_{i+1} x_i^2, $$
$$ 2 c_i + 6 d_i x_i = 2 c_{i+1} + 6 d_{i+1} x_i. $$

Together these are $4n - 2$ linear equations in the $4n$ unknown parameters $a$, $b$, $c$, and $d$. Construct two side conditions:
- natural spline: $s''(x_0) = 0 = s''(x_n)$; it minimizes total curvature, $\int_{x_0}^{x_n} s''(x)^2\, dx$, among solutions to the conditions above.
- Hermite spline: $s'(x_0) = y_0'$ and $s'(x_n) = y_n'$ (assumes extra data).
- Secant Hermite spline: $s'(x_0) = (s(x_1) - s(x_0))/(x_1 - x_0)$ and $s'(x_n) = (s(x_n) - s(x_{n-1}))/(x_n - x_{n-1})$.
- not-a-knot: choose $j = i_1, i_2$ such that $i_1 + 1 < i_2$, and set $d_j = d_{j+1}$.

Solve the system by special (sparse) methods; see spline fit packages.
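
A minimal sketch of these side conditions using SciPy's CubicSpline, which assembles and solves the sparse system; the data are hypothetical samples of $e^{-x}$:

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0.0, 2.0, 6)
y = np.exp(-x)                           # hypothetical Lagrange data

s_natural = CubicSpline(x, y, bc_type='natural')     # s''(x_0) = s''(x_n) = 0
s_knot    = CubicSpline(x, y, bc_type='not-a-knot')  # d_j = d_{j+1} at each end
# Hermite side conditions: impose s'(x_0) = y'_0 and s'(x_n) = y'_n.
s_hermite = CubicSpline(x, y, bc_type=((1, -1.0), (1, -np.exp(-2.0))))

xx = np.linspace(0.0, 2.0, 201)
for s in (s_natural, s_knot, s_hermite):
    print(np.max(np.abs(s(xx) - np.exp(-xx))))       # fit quality of each choice
```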

29. B-Splines: A Basis for Splines

Put knots at $\{x_{-k}, \ldots, x_{-1}, x_0, \ldots, x_n\}$.

Order 1 splines: step function interpolation, spanned by
$$ B_i^0(x) = \begin{cases} 0, & x < x_i, \\ 1, & x_i \le x < x_{i+1}, \\ 0, & x_{i+1} \le x. \end{cases} $$

Order 2 splines: piecewise linear interpolation, spanned by
$$ B_i^1(x) = \begin{cases} 0, & x \le x_i \text{ or } x \ge x_{i+2}, \\ \dfrac{x - x_i}{x_{i+1} - x_i}, & x_i \le x \le x_{i+1}, \\ \dfrac{x_{i+2} - x}{x_{i+2} - x_{i+1}}, & x_{i+1} \le x \le x_{i+2}. \end{cases} $$
The $B_i^1$ spline is the tent function with peak at $x_{i+1}$; it is zero for $x \le x_i$ and $x \ge x_{i+2}$.

Both $B^0$ and $B^1$ splines form cardinal bases for interpolation at the $x_i$'s.

Higher-order B-splines are defined by the recursive relation
$$ B_i^k(x) = \frac{x - x_i}{x_{i+k} - x_i}\, B_i^{k-1}(x) + \frac{x_{i+k+1} - x}{x_{i+k+1} - x_{i+1}}\, B_{i+1}^{k-1}(x). $$

30. Shape Preservation

Concave (monotone) data may lead to nonconcave (nonmonotone) approximations.

[Figure 2: example of a shape-violating interpolant of concave data.]

31. Schumaker Procedure

1. Take level (and maybe slope) data at nodes $x_i$
2. Add intermediate nodes $z_i \in [x_i, x_{i+1}]$
3. Create a quadratic spline with nodes at the $x$ and $z$ nodes that interpolates the data and preserves shape

Many other procedures exist for one-dimensional problems. Few procedures exist for two-dimensional problems. Higher dimensions are difficult, and many questions remain open.

Spline summary:
- Evaluation is cheap: splines are locally low-order polynomials, and intervals can be chosen so that finding which $[x_i, x_{i+1}]$ contains a specific $x$ is easy
- Good fits even for functions with discontinuous or large higher-order derivatives
- Splines can be used to preserve shape conditions

32. Multidimensional Approximation Methods

Lagrange interpolation:
- Data: $D \equiv \{(x_i, z_i)\}_{i=1}^{N} \subset \mathbb{R}^{n+m}$, where $x_i \in \mathbb{R}^n$ and $z_i \in \mathbb{R}^m$
- Objective: find $f : \mathbb{R}^n \to \mathbb{R}^m$ such that $z_i = f(x_i)$
- Task: find combinations of interpolation nodes and spanning functions that produce a nonsingular (well-conditioned) interpolation matrix

Tensor products: if $A$ and $B$ are sets of functions over $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^m$, their tensor product is
$$ A \otimes B = \{ \varphi(x)\psi(y) \mid \varphi \in A,\ \psi \in B \}. $$
Given a basis for functions of $x_i$, $\Phi^i = \{\varphi_k^i(x_i)\}_{k=0}^{\infty}$, the $n$-fold tensor product basis for functions of $(x_1, x_2, \ldots, x_n)$ is
$$ \Phi = \left\{ \prod_{i=1}^{n} \varphi_{k_i}^i(x_i) \,\Big|\, k_i = 0, 1, \ldots;\ i = 1, \ldots, n \right\}. $$

Multidimensional splines: multidimensional versions of splines can be constructed through tensor products; here B-splines would be useful.

Summary:
- Tensor products directly extend one-dimensional methods to $n$ dimensions
- The curse of dimensionality often makes tensor products impractical

33. Complete Polynomials

In general, the complete set of polynomials of total degree $k$ in $n$ variables is
$$ \mathcal{P}_k^n \equiv \left\{ x_1^{i_1} \cdots x_n^{i_n} \,\Big|\, \sum_{\ell=1}^{n} i_\ell \le k,\ 0 \le i_1, \ldots, i_n \right\}. $$

Sizes of alternative bases:

degree $k$ | $\mathcal{P}_k^n$ | tensor product
2 | $1 + n + n(n+1)/2$ | $3^n$
3 | $1 + n + \frac{n(n+1)}{2} + n^2 + \frac{n(n-1)(n-2)}{6}$ | $4^n$

- Complete polynomial bases contain fewer elements than tensor products
- Asymptotically, complete polynomial bases are as good as tensor products
- For smooth $n$-dimensional functions, complete polynomials are more efficient approximations

Construction:
- Compute the tensor product approximation, as in Algorithm 6.4
- Drop terms not in the complete polynomial basis (or just compute coefficients for polynomials in the complete basis)
- The complete polynomial version is faster to compute since it involves fewer terms

34. Parametric Approach: Approximating $T$ — Expectations

For each $x_j$, $(TV)(x_j)$ is defined by
$$ v_j = (TV)(x_j) = \max_{u \in D(x_j)}\ \pi(u, x_j) + \beta \int \hat{V}(x^+; a)\, dF(x^+ \mid x_j, u). \qquad (12.7.5) $$
The definition includes an expectation:
$$ E\{ \hat{V}(x^+; a) \mid x_j, u \} = \int \hat{V}(x^+; a)\, dF(x^+ \mid x_j, u). $$
How do we approximate the integral?

Integration: most integrals cannot be evaluated analytically. We examine various integration formulas that can be used.

Newton-Cotes formulas:

Trapezoid rule: piecewise linear approximation, with nodes $x_j = a + jh$, $j = 0, 1, \ldots, n$, $h = (b - a)/n$, and $f_j \equiv f(x_j)$:
$$ \int_a^b f(x)\, dx = \frac{h}{2} \left[ f_0 + 2 f_1 + \cdots + 2 f_{n-1} + f_n \right] - \frac{h^2 (b-a)}{12}\, f''(\xi) $$
for some $\xi \in [a, b]$.

Simpson's rule: piecewise quadratic approximation:
$$ \int_a^b f(x)\, dx = \frac{h}{3} \left[ f_0 + 4 f_1 + 2 f_2 + 4 f_3 + \cdots + 4 f_{n-1} + f_n \right] - \frac{h^4 (b-a)}{180}\, f^{(4)}(\xi). $$

There are increasingly obscure rules for degree 3, 4, etc. approximations.
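
A minimal sketch checking both composite rules on a test integral with a known answer, $\int_0^1 e^x\, dx = e - 1$:

```python
import numpy as np

f, a, b, n = np.exp, 0.0, 1.0, 10        # n must be even for Simpson's rule
x = np.linspace(a, b, n + 1)
h = (b - a) / n
fx = f(x)

# Composite trapezoid: endpoints weight 1, interior nodes weight 2.
trap = h / 2 * (fx[0] + 2 * fx[1:-1].sum() + fx[-1])
# Composite Simpson: odd-index nodes weight 4, even interior nodes weight 2.
simp = h / 3 * (fx[0] + 4 * fx[1:-1:2].sum() + 2 * fx[2:-1:2].sum() + fx[-1])

print(trap - (np.e - 1))                 # O(h^2) error
print(simp - (np.e - 1))                 # O(h^4) error
```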

35. Gaussian Formulas

All integration formulas are of the form
$$ \int_a^b f(x)\, dx \doteq \sum_{i=1}^{n} \omega_i f(x_i) \qquad (7.2.1) $$
for some quadrature nodes $x_i \in [a, b]$ and quadrature weights $\omega_i$.
- Newton-Cotes formulas use arbitrary $x_i$
- Gaussian quadrature uses good choices of the $x_i$ nodes and $\omega_i$ weights

Exact quadrature formulas: let $\mathcal{F}_k$ be the space of degree-$k$ polynomials. A quadrature formula is exact of degree $k$ if it correctly integrates each function in $\mathcal{F}_k$. Gaussian quadrature formulas use $n$ points and are exact of degree $2n - 1$.

Gauss-Chebyshev quadrature: domain $[-1, 1]$, weight $(1 - x^2)^{-1/2}$:
$$ \int_{-1}^{1} f(x)\, (1 - x^2)^{-1/2}\, dx = \frac{\pi}{n} \sum_{i=1}^{n} f(x_i) + \frac{\pi}{2^{2n-1}}\, \frac{f^{(2n)}(\xi)}{(2n)!} \qquad (7.2.4) $$
for some $\xi \in [-1, 1]$, with quadrature nodes
$$ x_i = \cos\!\left( \frac{2i - 1}{2n}\, \pi \right), \quad i = 1, \ldots, n. \qquad (7.2.5) $$

36. Arbitrary Domains

Want to approximate $\int_a^b f(y)\, dy$: different range, no weight function.
- Linear change of variables: $x = -1 + 2(y - a)/(b - a)$
- Multiply the integrand by $(1 - x^2)^{1/2} / (1 - x^2)^{1/2}$

Change-of-variables formula:
$$ \int_a^b f(y)\, dy = \frac{b - a}{2} \int_{-1}^{1} f\!\left( \frac{(x+1)(b-a)}{2} + a \right) (1 - x^2)^{1/2}\, (1 - x^2)^{-1/2}\, dx. $$

Gauss-Chebyshev quadrature applied to the right-hand side produces
$$ \int_a^b f(y)\, dy \doteq \frac{\pi (b-a)}{2n} \sum_{i=1}^{n} f\!\left( \frac{(x_i + 1)(b-a)}{2} + a \right) \left( 1 - x_i^2 \right)^{1/2}, $$
where the $x_i$ are the Gauss-Chebyshev nodes over $[-1, 1]$.

37. Gauss-Hermite Quadrature

Domain: $(-\infty, \infty)$. Weight: $e^{-x^2}$. Formula:
$$ \int_{-\infty}^{\infty} f(x)\, e^{-x^2}\, dx = \sum_{i=1}^{n} \omega_i f(x_i) + \frac{n!\, \sqrt{\pi}}{2^n}\, \frac{f^{(2n)}(\xi)}{(2n)!} $$
for some $\xi \in (-\infty, \infty)$.

Example formulas (Table 7.4: Gauss-Hermite quadrature):

$N$ | $x_i$ | $\omega_i$
2 | $\pm 0.7071068$ | $0.8862269$
3 | $0$ | $1.1816359$
  | $\pm 1.2247449$ | $0.2954090$

38. Normal Random Variables

$Y$ is distributed $N(\mu, \sigma^2)$. Expectation is integration:
$$ E\{f(Y)\} = (2\pi\sigma^2)^{-1/2} \int_{-\infty}^{\infty} f(y)\, e^{-(y-\mu)^2 / (2\sigma^2)}\, dy. $$
Use Gauss-Hermite quadrature with the linear change of variables $x = (y - \mu)/(\sqrt{2}\,\sigma)$:
$$ \int_{-\infty}^{\infty} f(y)\, e^{-(y-\mu)^2 / (2\sigma^2)}\, dy = \int_{-\infty}^{\infty} f(\sqrt{2}\,\sigma x + \mu)\, e^{-x^2}\, \sqrt{2}\,\sigma\, dx, $$
which yields the quadrature formula
$$ E\{f(Y)\} \doteq \pi^{-1/2} \sum_{i=1}^{n} \omega_i\, f(\sqrt{2}\,\sigma x_i + \mu), $$
where the $\omega_i$ and $x_i$ are the Gauss-Hermite quadrature weights and nodes over $(-\infty, \infty)$.
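
A minimal sketch of this change-of-variables formula using NumPy's Gauss-Hermite nodes, checked against the known mean of a lognormal, $E\{e^Y\} = e^{\mu + \sigma^2/2}$:

```python
import numpy as np

mu, sigma, n = 1.0, 0.5, 10
x, w = np.polynomial.hermite.hermgauss(n)    # nodes/weights for weight e^{-x^2}

f = np.exp                                   # hypothetical f; exact answer known
approx = (w * f(np.sqrt(2) * sigma * x + mu)).sum() / np.sqrt(np.pi)
exact = np.exp(mu + sigma**2 / 2)
print(approx - exact)                        # near machine precision
```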

39. Multidimensional Integration

- Many dynamic programming problems have several dimensions
- Multidimensional integrals are much more difficult
- Simple methods suffer from the curse of dimensionality
- There are methods which avoid the curse of dimensionality

Product rules: build product rules from one-dimensional rules. Let $x_i^{\ell}, \omega_i^{\ell}$, $i = 1, \ldots, m$, be one-dimensional quadrature points and weights in dimension $\ell$ from a Newton-Cotes rule or the Gauss-Legendre rule. The product rule is
$$ \int_{[-1,1]^d} f(x)\, dx \doteq \sum_{i_1=1}^{m} \cdots \sum_{i_d=1}^{m} \omega_{i_1}^1 \omega_{i_2}^2 \cdots \omega_{i_d}^d\, f(x_{i_1}^1, x_{i_2}^2, \ldots, x_{i_d}^d). $$

Curse of dimensionality: the cost is $m^d$ functional evaluations for a $d$-dimensional problem with $m$ points in each direction. The problem is worse for Newton-Cotes rules, which are less accurate in $\mathbb{R}^1$.
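
A minimal sketch of a tensor-product Gauss-Legendre rule over $[-1,1]^d$, with a hypothetical integrand whose exact integral factorizes:

```python
import numpy as np

m, d = 5, 3
x1, w1 = np.polynomial.legendre.leggauss(m)  # one-dimensional nodes/weights

f = lambda x: np.exp(x.sum(axis=-1))         # hypothetical integrand e^{x_1+...+x_d}

# Build all m^d node combinations and the corresponding product weights.
grids = np.meshgrid(*([x1] * d), indexing='ij')
nodes = np.stack([g.ravel() for g in grids], axis=-1)        # shape (m^d, d)
wgrids = np.meshgrid(*([w1] * d), indexing='ij')
weights = np.prod(np.stack([g.ravel() for g in wgrids]), axis=0)

approx = weights @ f(nodes)
exact = (np.exp(1) - np.exp(-1)) ** d        # (int_{-1}^{1} e^x dx)^d
print(approx - exact)                        # m^d = 125 evaluations already
```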

40. Monomial Formulas: A Nonproduct Approach

Method:
- Choose $x^i \in D \subset \mathbb{R}^d$, $i = 1, \ldots, N$
- Choose $\omega_i \in \mathbb{R}$, $i = 1, \ldots, N$
- Quadrature formula:
$$ \int_D f(x)\, dx \doteq \sum_{i=1}^{N} \omega_i f(x^i). \qquad (7.5.3) $$

A monomial formula is complete for degree $\ell$ if
$$ \sum_{i=1}^{N} \omega_i\, p(x^i) = \int_D p(x)\, dx $$
for all polynomials $p(x)$ of total degree $\ell$; recall that $\mathcal{P}_\ell$ was defined in chapter 6 to be the set of such polynomials. For the case $\ell = 2$, this implies the equations
$$ \sum_{i=1}^{N} \omega_i = \int_D 1\, dx, $$
$$ \sum_{i=1}^{N} \omega_i\, x_j^i = \int_D x_j\, dx, \quad j = 1, \ldots, d, $$
$$ \sum_{i=1}^{N} \omega_i\, x_j^i x_k^i = \int_D x_j x_k\, dx, \quad j, k = 1, \ldots, d. \qquad (7.5.4) $$
This is $1 + d + \tfrac{1}{2} d(d+1)$ equations in the $N$ weights $\omega_i$ and the $N$ nodes $x^i$, each with $d$ components, yielding a total of $(d+1)N$ unknowns.

41. Simple Examples

Let $e^j \equiv (0, \ldots, 1, \ldots, 0)$, where the 1 appears in column $j$. The following rule uses $2d$ points and exactly integrates all elements of $\mathcal{P}_3$ over $[-1,1]^d$:
$$ \int_{[-1,1]^d} f \doteq \omega \sum_{i=1}^{d} \left[ f(u e^i) + f(-u e^i) \right], \qquad u = \left( \frac{d}{3} \right)^{1/2}, \quad \omega = \frac{2^{d-1}}{d}. $$

For $\mathcal{P}_5$ the following scheme works:
$$ \int_{[-1,1]^d} f \doteq \omega_1 f(0) + \omega_2 \sum_{i=1}^{d} \left[ f(u e^i) + f(-u e^i) \right] + \omega_3 \sum_{1 \le i < j \le d} \left[ f(u(e^i \pm e^j)) + f(-u(e^i \pm e^j)) \right], $$
where
$$ \omega_1 = \frac{2^d \left( 25 d^2 - 115 d + 162 \right)}{162}, \quad \omega_2 = \frac{2^d (70 - 25 d)}{162}, \quad \omega_3 = \frac{25 \cdot 2^d}{324}, \quad u = \left( \frac{3}{5} \right)^{1/2}. $$
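
A minimal sketch of the $2d$-point rule, verified on a hypothetical polynomial of total degree 3 whose exact integral over $[-1,1]^4$ is easy to compute by hand:

```python
import numpy as np

d = 4
u = np.sqrt(d / 3)
omega = 2.0 ** (d - 1) / d

# Hypothetical test polynomial of total degree 3.
p = lambda x: 1 + x[..., 0] + 3 * x[..., 1] ** 2 + x[..., 0] * x[..., 2] ** 2
# Exact integral over [-1,1]^4: 2^d from the constant, 0 from the odd terms,
# and 3 * (2/3) * 2^(d-1) = 16 from the x_1^2 term.
exact = 2.0 ** d + 3 * (2.0 ** (d - 1)) * (2 / 3)

E = np.eye(d)
nodes = np.concatenate([u * E, -u * E])      # the 2d points +/- u e^i
approx = omega * p(nodes).sum()
print(approx - exact)                        # zero up to rounding
```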

42. Parametric Approach: Approximating $T$ — Maximization

For each $x_j$, $(TV)(x_j)$ is defined by
$$ v_j = (TV)(x_j) = \max_{u \in D(x_j)}\ \pi(u, x_j) + \beta \int \hat{V}(x^+; a)\, dF(x^+ \mid x_j, u). \qquad (12.7.5) $$
In practice, we compute the approximation $\hat{T}$ using some integration method:
$$ v_j = (\hat{T}V)(x_j) \doteq (TV)(x_j), \qquad E\{ \hat{V}(x^+; a) \mid x_j, u \} \doteq \sum_{\ell} \omega_\ell\, \hat{V}(g(x_j, u, \varepsilon_\ell); a). $$

We now come to the maximization step: for $x_i \in X$, evaluate
$$ v_i = (\hat{T}\hat{V})(x_i). $$
- Use an appropriate optimization method, depending on the smoothness of the maximand
- Hot starts
- Concave stopping rules

When we have computed the $v_i$ (and perhaps the $v_i'$), we execute the fitting step:
- Data: $(v_i, v_i', x_i)$, $i = 1, \ldots, n$
- Objective: find an $a \in \mathbb{R}^m$ such that $\hat{V}(x; a)$ best fits the data
- Methods: determined by $\hat{V}(x; a)$

43. Parametric Approach: Value Function Iteration

Comparison with discretization: the loop is
$$ \text{guess } a \ \to\ \hat{V}(x; a) \ \to\ (v_i, x_i),\ i = 1, \ldots, n \ \to\ \text{new } a. $$
This procedure examines only a finite number of points, but does not assume that future points lie in the same finite set. Our choices for the $x_i$ are guided by systematic numerical considerations.

Synergies: smooth interpolation schemes allow us to use Newton's method in the maximization step. They also make it easier to evaluate the integral in (12.7.5).

Finite-horizon problems:
- Value function iteration is the only possible procedure, since $V(x, t)$ depends on time $t$
- Begin with the terminal value function, $V(x, T)$
- Compute approximations for each $V(x, t)$, $t = T-1, T-2$, etc.

44. Algorithm 12.5: Parametric Dynamic Programming with Value Function Iteration

Objective: Solve the Bellman equation, (12.7.1).
- Step 0: Choose a functional form for $\hat{V}(x; a)$, and choose the approximation grid, $X = \{x_1, \ldots, x_n\}$. Make the initial guess $\hat{V}(x; a^0)$, and choose a stopping criterion $\varepsilon > 0$.
- Step 1: Maximization step: compute $v_j = (T \hat{V}(\cdot\,; a^i))(x_j)$ for all $x_j \in X$.
- Step 2: Fitting step: using the appropriate approximation method, compute the $a^{i+1} \in \mathbb{R}^m$ such that $\hat{V}(x; a^{i+1})$ approximates the $(v_j, x_j)$ data.
- Step 3: If $\| \hat{V}(x; a^i) - \hat{V}(x; a^{i+1}) \| < \varepsilon$, STOP; else go to Step 1.
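
A minimal sketch of Algorithm 12.5 for the deterministic growth model of slide 5, under the assumed functional forms $u = \log$ and $F(k) = A k^{\alpha}$, chosen because the exact solution $V(k) = B_0 + B_1 \log k$ with $B_1 = \alpha/(1 - \alpha\beta)$ is then known. The approximant is a polynomial in $\log k$ fitted at Chebyshev nodes; all parameter values are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize_scalar

alpha, beta, A = 0.3, 0.95, 1.0
kmin, kmax, n_nodes, deg = 0.05, 0.5, 12, 6

# Step 0: Chebyshev nodes adapted to [kmin, kmax]; V-hat(k; a) = poly in log k.
j = np.arange(1, n_nodes + 1)
k_grid = kmin + (1 - np.cos((2*j - 1) * np.pi / (2*n_nodes))) * (kmax - kmin) / 2
coef = np.zeros(deg + 1)

for it in range(500):
    V_hat = lambda k: np.polyval(coef, np.log(k))
    v = np.empty(n_nodes)
    for i, k in enumerate(k_grid):           # Step 1: maximization step
        y = A * k**alpha
        lo, hi = max(y - kmax, 1e-8), y - kmin   # keep k' = y - c inside grid
        obj = lambda c: -(np.log(c) + beta * V_hat(y - c))
        v[i] = -minimize_scalar(obj, bounds=(lo, hi), method='bounded').fun
    new_coef = np.polyfit(np.log(k_grid), v, deg)  # Step 2: fitting step
    if np.max(np.abs(new_coef - coef)) < 1e-7:     # Step 3: stopping rule
        coef = new_coef
        break
    coef = new_coef

print(coef[-2], alpha / (1 - alpha * beta))  # coefficient on log k vs. exact B1
```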

45. Convergence

- $T$ is a contraction mapping
- $\hat{T}$ may be neither monotonic nor a contraction
- Shape problems

[Figure 3: an instructive example of shape problems in a fitted value function.]

Shape problems may become worse with value function iteration. Shape-preserving approximation will avoid these instabilities.

46. Comparisons

We apply various methods to the deterministic growth model.

[Table: Relative $L^2$ errors over $[0.7, 1.3]$, for $(\beta, \gamma) \in \{(.95, -10), (.95, -2), (.95, -.5), (.99, -10), (.99, -2), (.99, -.5)\}$ and various grid sizes $N$, comparing the discrete model, linear interpolation, cubic splines, polynomials (without slopes), shape-preserving quadratic Hermite interpolation, and shape-preserving quadratic interpolation (ignoring slopes).]

47. General Parametric Approach: Policy Iteration

Basic Bellman equation:
$$ V(x) = \max_{u \in D(x)}\ \pi(u, x) + \beta E\{ V(x^+) \mid x, u \} \equiv (TV)(x). $$

Policy iteration:
- Current guess: a finite-dimensional linear parameterization $V(x) \doteq \hat{V}(x; a)$, $a \in \mathbb{R}^m$.
- Iteration: compute the optimal policy today if $\hat{V}(x; a)$ is the value tomorrow, i.e., $U(x)$ solving the first-order condition
$$ 0 = \pi_u(x, U(x)) + \beta\, \frac{d}{du} E\left\{ \hat{V}(x^+; a) \,\big|\, x, u \right\}\Big|_{u = U(x)}, $$
using some approximation scheme $\hat{U}(x; b)$.
- Compute the value function if the policy $\hat{U}(x; b)$ is used forever, which is the solution to the linear integral equation
$$ \hat{V}(x; a') = \pi(\hat{U}(x; b), x) + \beta E\{ \hat{V}(x^+; a') \mid x, \hat{U}(x; b) \}, $$
which can be solved by a projection method.

48. Summary

Discretization methods:
- Easy to implement
- Numerically stable
- Amenable to many accelerations
- Poor approximation to continuous problems

Continuous approximation methods:
- Can exploit smoothness in problems
- Possible numerical instabilities
- Acceleration is less possible


More information

Lectures 9-10: Polynomial and piecewise polynomial interpolation

Lectures 9-10: Polynomial and piecewise polynomial interpolation Lectures 9-1: Polynomial and piecewise polynomial interpolation Let f be a function, which is only known at the nodes x 1, x,, x n, ie, all we know about the function f are its values y j = f(x j ), j

More information

An Application to Growth Theory

An Application to Growth Theory An Application to Growth Theory First let s review the concepts of solution function and value function for a maximization problem. Suppose we have the problem max F (x, α) subject to G(x, β) 0, (P) x

More information

Value Function Iteration as a Solution Method for the Ramsey Model

Value Function Iteration as a Solution Method for the Ramsey Model Value Function Iteration as a Solution Method for the Ramsey Model BURKHARD HEER ALFRED MAUßNER CESIFO WORKING PAPER NO. 2278 CATEGORY 10: EMPIRICAL AND THEORETICAL METHODS APRIL 2008 An electronic version

More information

Exam 2. Average: 85.6 Median: 87.0 Maximum: Minimum: 55.0 Standard Deviation: Numerical Methods Fall 2011 Lecture 20

Exam 2. Average: 85.6 Median: 87.0 Maximum: Minimum: 55.0 Standard Deviation: Numerical Methods Fall 2011 Lecture 20 Exam 2 Average: 85.6 Median: 87.0 Maximum: 100.0 Minimum: 55.0 Standard Deviation: 10.42 Fall 2011 1 Today s class Multiple Variable Linear Regression Polynomial Interpolation Lagrange Interpolation Newton

More information

(f(x) P 3 (x)) dx. (a) The Lagrange formula for the error is given by

(f(x) P 3 (x)) dx. (a) The Lagrange formula for the error is given by 1. QUESTION (a) Given a nth degree Taylor polynomial P n (x) of a function f(x), expanded about x = x 0, write down the Lagrange formula for the truncation error, carefully defining all its elements. How

More information

Department of Applied Mathematics and Theoretical Physics. AMA 204 Numerical analysis. Exam Winter 2004

Department of Applied Mathematics and Theoretical Physics. AMA 204 Numerical analysis. Exam Winter 2004 Department of Applied Mathematics and Theoretical Physics AMA 204 Numerical analysis Exam Winter 2004 The best six answers will be credited All questions carry equal marks Answer all parts of each question

More information

PART IV Spectral Methods

PART IV Spectral Methods PART IV Spectral Methods Additional References: R. Peyret, Spectral methods for incompressible viscous flow, Springer (2002), B. Mercier, An introduction to the numerical analysis of spectral methods,

More information

you expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form

you expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form Qualifying exam for numerical analysis (Spring 2017) Show your work for full credit. If you are unable to solve some part, attempt the subsequent parts. 1. Consider the following finite difference: f (0)

More information

x n+1 = x n f(x n) f (x n ), n 0.

x n+1 = x n f(x n) f (x n ), n 0. 1. Nonlinear Equations Given scalar equation, f(x) = 0, (a) Describe I) Newtons Method, II) Secant Method for approximating the solution. (b) State sufficient conditions for Newton and Secant to converge.

More information

Stochastic Shortest Path Problems

Stochastic Shortest Path Problems Chapter 8 Stochastic Shortest Path Problems 1 In this chapter, we study a stochastic version of the shortest path problem of chapter 2, where only probabilities of transitions along different arcs can

More information