Outline: Today's Lecture

- finish Euler equations and transversality condition
- Principle of Optimality: Bellman's equation
- study of the Bellman equation with bounded F: contraction mapping and theorem of the maximum

Introduction to Dynamic Optimization Nr. 2
Infinite Horizon (T = \infty)

    V(x_0) = \sup_{\{x_{t+1}\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t F(x_t, x_{t+1})    (1)

subject to x_{t+1} \in \Gamma(x_t), with x_0 given.

- \sup\{\cdot\} instead of \max\{\cdot\}
- define \tilde{x} = \{x_{t+1}\}_{t=0}^{\infty} as a plan
- define \Pi(x_0) = \{ \{x'_{t+1}\}_{t=0}^{\infty} : x'_{t+1} \in \Gamma(x'_t) \text{ and } x'_0 = x_0 \}
Assumptions

A1. \Gamma(x) is non-empty for all x \in X
A2. \lim_{T \to \infty} \sum_{t=0}^{T} \beta^t F(x_t, x_{t+1}) exists for all \tilde{x} \in \Pi(x_0)

Then the problem is well defined.
Recursive Formulation: Bellman Equation

The value function satisfies

    V(x_0) = \max_{\{x_{t+1}\}_{t=0}^{\infty},\; x_{t+1} \in \Gamma(x_t)} \sum_{t=0}^{\infty} \beta^t F(x_t, x_{t+1})
           = \max_{x_1 \in \Gamma(x_0)} \Big[ F(x_0, x_1) + \max_{\{x_{t+1}\}_{t=1}^{\infty},\; x_{t+1} \in \Gamma(x_t)} \sum_{t=1}^{\infty} \beta^t F(x_t, x_{t+1}) \Big]
           = \max_{x_1 \in \Gamma(x_0)} \Big[ F(x_0, x_1) + \beta \max_{\{x_{t+2}\}_{t=0}^{\infty},\; x_{t+2} \in \Gamma(x_{t+1})} \sum_{t=0}^{\infty} \beta^t F(x_{t+1}, x_{t+2}) \Big]
           = \max_{x_1 \in \Gamma(x_0)} \{ F(x_0, x_1) + \beta V(x_1) \}
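The recursion can be spot-checked numerically. A minimal sketch in Python (the cake-eating primitives F(x, y) = \sqrt{x - y}, \Gamma(x) = \{0, ..., x\} and the short horizon are illustrative assumptions, not from the lecture): backward induction on the Bellman equation reproduces the value of brute-force maximization over all feasible plans.

```python
import itertools
import math

beta = 0.9
X = range(6)          # cake sizes 0..5
T = 3                 # decisions at t = 0, 1, 2; terminal value is 0

def F(x, y):          # utility of eating x - y today
    return math.sqrt(x - y)

def Gamma(x):         # feasible cake left for tomorrow
    return range(x + 1)

# Backward induction on the Bellman equation
V = {x: 0.0 for x in X}                       # V_T = 0
for t in reversed(range(T)):
    V = {x: max(F(x, y) + beta * V[y] for y in Gamma(x)) for x in X}

# Brute-force enumeration of all feasible plans of length T
def brute(x0):
    best = -1.0
    for plan in itertools.product(X, repeat=T):
        x, total, feasible = x0, 0.0, True
        for t, y in enumerate(plan):
            if y > x:                          # infeasible continuation
                feasible = False
                break
            total += beta**t * F(x, y)
            x = y
        if feasible:
            best = max(best, total)
    return best

for x0 in X:
    assert abs(V[x0] - brute(x0)) < 1e-12
```

The two computations agree up to floating-point rounding, which is the content of the recursive formulation on a finite horizon.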
Idea: use the BE to find the value function V and the policy function g [Principle of Optimality].
Outline: Today's Lecture

- housekeeping: ps#1 and recitation day / theory general / web page
- finish Principle of Optimality: Sequence Problem (for values and plans) vs. solution to the Bellman Equation
- begin study of the Bellman equation with bounded and continuous F
- tools: contraction mapping and theorem of the maximum
Sequence Problem vs. Functional Equation

Sequence Problem (SP):

    V^*(x_0) = \sup_{\{x_{t+1}\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t F(x_t, x_{t+1})
    s.t. x_{t+1} \in \Gamma(x_t), x_0 given

... more succinctly:

    V^*(x_0) = \sup_{\tilde{x} \in \Pi(x_0)} u(\tilde{x})    (SP)

Functional equation (FE) [this particular FE is called the Bellman Equation]:

    V(x) = \max_{y \in \Gamma(x)} \{ F(x, y) + \beta V(y) \}    (FE)
Principle of Optimality

IDEA: use the BE to find the value function V^* and an optimal plan \tilde{x}^*

Thm 4.2. V^* defined by SP \Rightarrow V^* solves FE
Thm 4.3. V solves FE and ... \Rightarrow V = V^*
Thm 4.4. \tilde{x}^* \in \Pi(x_0) is optimal \Rightarrow V^*(x^*_t) = F(x^*_t, x^*_{t+1}) + \beta V^*(x^*_{t+1})
Thm 4.5. \tilde{x}^* \in \Pi(x_0) satisfies V^*(x^*_t) = F(x^*_t, x^*_{t+1}) + \beta V^*(x^*_{t+1}) and ... \Rightarrow \tilde{x}^* is optimal
Why is this Progress?

- intuition: breaks the planning horizon into two: now and then
- notation: reduces unnecessary notation (especially with uncertainty)
- analysis: prove existence, uniqueness and properties of the optimal policy (e.g. continuity, monotonicity, etc.)
- computation: the associated numerical algorithms are powerful for many applications
Proof of Theorem 4.3 (max case)

Assume \lim_{T \to \infty} \beta^T V(x_T) = 0 for any \tilde{x} \in \Pi(x_0). The BE implies

    V(x_0) \ge F(x_0, x_1) + \beta V(x_1),   for all x_1 \in \Gamma(x_0)
           = F(x_0, x_1^*) + \beta V(x_1^*),  for some x_1^* \in \Gamma(x_0)

Substituting for V(x_1):

    V(x_0) \ge F(x_0, x_1) + \beta F(x_1, x_2) + \beta^2 V(x_2),   for all \tilde{x} \in \Pi(x_0)
           = F(x_0, x_1^*) + \beta F(x_1^*, x_2^*) + \beta^2 V(x_2^*),  for some \tilde{x}^* \in \Pi(x_0)
Continuing in this way:

    V(x_0) \ge \sum_{t=0}^{n} \beta^t F(x_t, x_{t+1}) + \beta^{n+1} V(x_{n+1})      for all \tilde{x} \in \Pi(x_0)
    V(x_0) = \sum_{t=0}^{n} \beta^t F(x_t^*, x_{t+1}^*) + \beta^{n+1} V(x_{n+1}^*)  for some \tilde{x}^* \in \Pi(x_0)

Since \beta^T V(x_T) \to 0, taking the limit n \to \infty on both sides of both expressions we conclude that:

    V(x_0) \ge u(\tilde{x})   for all \tilde{x} \in \Pi(x_0)
    V(x_0) = u(\tilde{x}^*)   for some \tilde{x}^* \in \Pi(x_0)
Bellman Equation as a Fixed Point

Define the operator T by

    (Tf)(x) = \max_{y \in \Gamma(x)} \{ F(x, y) + \beta f(y) \}

V is a solution of the BE \iff V is a fixed point of T [i.e. TV = V].

Bounded returns: if \|F\| < B and F and \Gamma are continuous:

- T maps continuous bounded functions into continuous bounded functions
- with bounded returns, T is a contraction mapping \Rightarrow unique fixed point, plus many other bonuses
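The fixed-point view suggests successive approximation. A sketch in Python (the grid, \beta, and F below are hypothetical illustrative choices, not from the slides): iterating T from two very different initial guesses yields the same limit, consistent with a unique fixed point.

```python
import math

beta = 0.9
X = list(range(6))                 # small state grid

def F(x, y):
    return math.sqrt(x - y)        # illustrative bounded return on the grid

def T(f):                          # Bellman operator, functions stored as lists over X
    return [max(F(x, y) + beta * f[y] for y in range(x + 1)) for x in X]

def sup_dist(f, g):                # the sup metric on the grid
    return max(abs(a - b) for a, b in zip(f, g))

v1 = [0.0] * len(X)                # start from zero
v2 = [50.0] * len(X)               # start far away
for _ in range(300):
    v1, v2 = T(v1), T(v2)

assert sup_dist(v1, v2) < 1e-8     # both starts reach the same function
assert sup_dist(T(v1), v1) < 1e-8  # and it is (approximately) a fixed point
```

This is exactly value function iteration; the contraction property established below is what guarantees it converges from any starting guess.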
Needed Tools

- basic real analysis (section 3.1): {vector, metric, normed, complete} spaces; Cauchy sequences; closed, compact, bounded sets
- Contraction Mapping Theorem (section 3.2)
- Theorem of the Maximum: study of the RHS of the Bellman equation (equivalently, of T) (section 3.3)
Bellman Equation: Principle of Optimality

Idea: use the functional equation to find V and g:

    V(x) = \max_{y \in \Gamma(x)} \{ F(x, y) + \beta V(y) \}

- note: the nuisance subscripts t, t+1 are dropped
- a solution is a function V(\cdot), the same on both sides
- IF the BE has a unique solution, then V = V^*
- more generally, the right solution to (BE) delivers V^*
Outline: Today's Lecture

- study the Functional Equation (Bellman equation) with bounded and continuous F
- tools: contraction mapping and theorem of the maximum
Bellman Equation as a Fixed Point

Define the operator T by

    (Tf)(x) = \max_{y \in \Gamma(x)} \{ F(x, y) + \beta f(y) \}

V is a solution of the BE \iff V is a fixed point of T [i.e. TV = V].

Bounded returns: if \|F\| < B and F and \Gamma are continuous:

- T maps continuous bounded functions into continuous bounded functions
- with bounded returns, T is a contraction mapping \Rightarrow unique fixed point, plus many other bonuses
Our Favorite Metric Space

    S = \{ f : X \to R \mid f \text{ is continuous and } \|f\| \equiv \sup_{x \in X} |f(x)| < \infty \}

with metric

    \rho(f, g) = \|f - g\| \equiv \sup_{x \in X} |f(x) - g(x)|

Definition. A metric space (S, \rho) is complete if every Cauchy sequence in S converges to a point in S. For the definition of a Cauchy sequence and examples of complete metric spaces, see SLP.

Theorem. The set of bounded and continuous functions, with the sup metric, is complete. See SLP.
Contraction Mapping

Definition. Let (S, \rho) be a metric space and T : S \to S an operator. T is a contraction with modulus \beta \in (0, 1) if

    \rho(Tx, Ty) \le \beta \rho(x, y)   for any x, y \in S.
Contraction Mapping Theorem

Theorem (CMThm). If T is a contraction on (S, \rho) with modulus \beta, then (i) there is a unique fixed point s^* \in S, s^* = Ts^*, and (ii) iterations of T converge to the fixed point:

    \rho(T^n s_0, s^*) \le \beta^n \rho(s_0, s^*)   for any s_0 \in S,

where T^{n+1}(s) = T(T^n(s)).
CMThm: Proof of (i)

1st step: construct the fixed point s^*.

Take any s_0 \in S and define \{s_n\} by s_{n+1} = Ts_n. Then

    \rho(s_2, s_1) = \rho(Ts_1, Ts_0) \le \beta \rho(s_1, s_0)

and, generalizing,

    \rho(s_{n+1}, s_n) \le \beta^n \rho(s_1, s_0).

Then, for m > n,

    \rho(s_m, s_n) \le \rho(s_m, s_{m-1}) + \rho(s_{m-1}, s_{m-2}) + \cdots + \rho(s_{n+1}, s_n)
                   \le (\beta^{m-1} + \beta^{m-2} + \cdots + \beta^n) \rho(s_1, s_0)
                   = \beta^n (\beta^{m-n-1} + \cdots + \beta + 1) \rho(s_1, s_0)
                   \le \frac{\beta^n}{1 - \beta} \rho(s_1, s_0).

Thus \{s_n\} is Cauchy; hence s_n \to s^*.
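The geometric error bound in (ii) can be illustrated with the simplest possible contraction. A sketch (the affine map T(s) = \beta s + 1 on R with \beta = 1/2 is an assumed toy example, not from the slides): the distance to the fixed point shrinks by at least a factor \beta each iteration.

```python
beta = 0.5
T = lambda s: beta * s + 1.0      # contraction on R with modulus beta
s_star = 2.0                      # unique fixed point: s = 0.5 * s + 1

s = 10.0                          # arbitrary starting point s_0
d0 = abs(s - s_star)
for n in range(1, 30):
    s = T(s)
    # the CMThm bound: rho(T^n s_0, s^*) <= beta^n * rho(s_0, s^*)
    assert abs(s - s_star) <= beta**n * d0 + 1e-12
```

After 29 iterations the error is below 10^-7, exactly as the \beta^n rate predicts.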
2nd step: show s^* = Ts^*.

    \rho(Ts^*, s^*) \le \rho(Ts^*, s_n) + \rho(s^*, s_n) \le \beta \rho(s^*, s_{n-1}) + \rho(s^*, s_n) \to 0

3rd step: s^* is unique. Suppose Ts_1 = s_1 and Ts_2 = s_2. Then

    0 \le a = \rho(s_1, s_2) = \rho(Ts_1, Ts_2) \le \beta \rho(s_1, s_2) = \beta a,

which is only possible if a = 0, i.e. s_1 = s_2.

Finally, as for (ii):

    \rho(T^n s_0, s^*) = \rho(T^n s_0, Ts^*) \le \beta \rho(T^{n-1} s_0, s^*) \le \cdots \le \beta^n \rho(s_0, s^*).
Corollary. Let (S, \rho) be a complete metric space and let S' \subseteq S be closed. Let T be a contraction on S with fixed point s^* = Ts^*. Assume that T(S') \subseteq S', i.e. if s' \in S', then T(s') \in S'. Then s^* \in S'. Moreover, if S'' \subseteq S' and T(S') \subseteq S'', i.e. if s' \in S', then T(s') \in S'', then s^* \in S''.
Blackwell's sufficient conditions. Let S be the space of bounded functions on X, with \|\cdot\| given by the sup norm. Let T : S \to S. Assume that (i) T is monotone, that is, Tf(x) \le Tg(x) for any x \in X and f, g such that f(x) \le g(x) for all x \in X; and (ii) T discounts, that is, there is a \beta \in (0, 1) such that for any a \in R_+,

    T(f + a)(x) \le Tf(x) + \beta a   for all x \in X.

Then T is a contraction.
Proof. By definition f = g + f - g, and using the definition of \|\cdot\|,

    f(x) \le g(x) + \|f - g\|.

Then by monotonicity (i),

    Tf \le T(g + \|f - g\|),

and by discounting (ii), setting a = \|f - g\|,

    Tf \le T(g) + \beta \|f - g\|.

Reversing the roles of f and g:

    Tg \le T(f) + \beta \|f - g\|,

so that \|Tf - Tg\| \le \beta \|f - g\|.
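Blackwell's two conditions are easy to probe numerically for a concrete operator. A sketch (the Bellman operator on a 6-point grid with illustrative F and \beta is an assumption, not from the slides): random pairs f \le g and random shifts f + a are checked against monotonicity and discounting; for this particular T discounting in fact holds with equality, T(f + a) = Tf + \beta a.

```python
import math
import random

beta = 0.9
X = list(range(6))

def F(x, y):
    return math.sqrt(x - y)

def T(f):  # Bellman operator on functions stored as lists over X
    return [max(F(x, y) + beta * f[y] for y in range(x + 1)) for x in X]

random.seed(0)
mono_ok, disc_ok = True, True
for _ in range(100):
    f = [random.uniform(0, 10) for _ in X]
    g = [fi + random.uniform(0, 5) for fi in f]          # g >= f pointwise
    a = random.uniform(0, 5)
    Tf, Tg, Tfa = T(f), T(g), T([fi + a for fi in f])
    # (i) monotonicity: f <= g pointwise gives Tf <= Tg pointwise
    mono_ok &= all(t1 <= t2 + 1e-12 for t1, t2 in zip(Tf, Tg))
    # (ii) discounting: here T(f + a) = Tf + beta * a exactly, so "<=" holds
    disc_ok &= all(abs(t1 - (t2 + beta * a)) < 1e-9 for t1, t2 in zip(Tfa, Tf))

assert mono_ok and disc_ok
```

A random check like this is no proof, but it is a cheap sanity test when defining a new operator that is supposed to satisfy Blackwell's conditions.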
Bellman Equation Application

    (Tv)(x) = \max_{y \in \Gamma(x)} \{ F(x, y) + \beta v(y) \}

Assume that F is bounded and continuous and that \Gamma is continuous and has compact range.

Theorem. T maps the set S of continuous and bounded functions into itself. Moreover, T is a contraction.
Proof. That T maps the set of continuous and bounded functions into itself follows from the Theorem of the Maximum (we do this next). That T is a contraction follows since T satisfies the Blackwell sufficient conditions.

For monotonicity, notice that for f \ge v, letting g(x) attain the max for v,

    Tv(x) = \max_{y \in \Gamma(x)} \{ F(x, y) + \beta v(y) \}
          = F(x, g(x)) + \beta v(g(x))
          \le F(x, g(x)) + \beta f(g(x))
          \le \max_{y \in \Gamma(x)} \{ F(x, y) + \beta f(y) \} = Tf(x).

A similar argument gives discounting: for a > 0,

    T(v + a)(x) = \max_{y \in \Gamma(x)} \{ F(x, y) + \beta (v(y) + a) \}
                = \max_{y \in \Gamma(x)} \{ F(x, y) + \beta v(y) \} + \beta a = T(v)(x) + \beta a.
Theorem of the Maximum

Want T to map continuous functions into continuous functions:

    (Tv)(x) = \max_{y \in \Gamma(x)} \{ F(x, y) + \beta v(y) \}

Want to learn about the optimal policy of the RHS of the Bellman equation:

    G(x) = \arg\max_{y \in \Gamma(x)} \{ F(x, y) + \beta v(y) \}

First, continuity concepts for correspondences... then, a few example maximizations... finally, the Theorem of the Maximum.
Continuity Notions for Correspondences

Assume \Gamma is non-empty and compact valued (the set \Gamma(x) is non-empty and compact for all x \in X).

Upper Hemi Continuity (u.h.c.) at x: for any pair of sequences \{x_n\} and \{y_n\} with x_n \to x and y_n \in \Gamma(x_n), there exists a subsequence of \{y_n\} that converges to a point y \in \Gamma(x).

Lower Hemi Continuity (l.h.c.) at x: for any sequence \{x_n\} with x_n \to x and for every y \in \Gamma(x), there exists a sequence \{y_n\} with y_n \in \Gamma(x_n) such that y_n \to y.

Continuous at x: \Gamma is both upper and lower hemi continuous at x.
Max Examples

    h(x) = \max_{y \in \Gamma(x)} f(x, y)
    G(x) = \arg\max_{y \in \Gamma(x)} f(x, y)

Example 1: f(x, y) = xy; X = [-1, 1]; \Gamma(x) = X.

    h(x) = |x|
    G(x) = \{-1\} for x < 0;   [-1, 1] for x = 0;   \{1\} for x > 0
Example 2: f(x, y) = xy^2; X = [-1, 1]; \Gamma(x) = X.

    h(x) = \max\{0, x\}
    G(x) = \{0\} for x < 0;   [-1, 1] for x = 0;   \{-1, 1\} for x > 0
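Example 1 can be reproduced on a grid. A sketch (the 201-point discretization of Y = [-1, 1] is an assumed setup): the maximized value h(x) = |x| varies continuously, while the argmax correspondence G jumps from \{-1\} to \{1\} as x crosses 0, where every y is optimal.

```python
# Grid over Y = [-1, 1] in steps of 0.01
Y = [i / 100 - 1 for i in range(201)]

def argmax_set(x, tol=1e-9):
    """Grid approximation of G(x) = argmax_y x*y."""
    vals = [x * y for y in Y]
    m = max(vals)
    return [y for y, v in zip(Y, vals) if v >= m - tol]

h = lambda x: max(x * y for y in Y)   # grid approximation of h(x)

assert argmax_set(-0.5) == [-1.0]     # unique argmax for x < 0
assert argmax_set(0.5) == [1.0]       # unique argmax for x > 0
assert len(argmax_set(0.0)) == len(Y) # every y is optimal at x = 0
# h(x) = |x| is continuous even though G jumps:
assert abs(h(-0.5) - 0.5) < 1e-12 and abs(h(0.5) - 0.5) < 1e-12
```

This is the picture behind the Theorem of the Maximum: h inherits continuity, G only upper hemi continuity.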
Theorem of the Maximum

Define:

    h(x) = \max_{y \in \Gamma(x)} f(x, y)
    G(x) = \arg\max_{y \in \Gamma(x)} f(x, y) = \{ y \in \Gamma(x) : h(x) = f(x, y) \}

Theorem (Berge). Let X \subseteq R^l and Y \subseteq R^m. Let f : X \times Y \to R be continuous and \Gamma : X \to Y be compact-valued and continuous. Then h : X \to R is continuous and G : X \to Y is non-empty, compact-valued, and u.h.c.
lim max = max lim

Theorem. Suppose \{f_n(x, y)\} and f(x, y) are concave in y and f_n \to f in the sup-norm (uniformly). Define

    g_n(x) = \arg\max_{y \in \Gamma(x)} f_n(x, y)
    g(x) = \arg\max_{y \in \Gamma(x)} f(x, y).

Then g_n(x) \to g(x) for all x (pointwise convergence); if X is compact, the convergence is uniform.
Uses of the Corollary of the CMThm: Monotonicity of v

Theorem. Assume that F(\cdot, y) is increasing and that \Gamma is increasing, i.e. \Gamma(x) \subseteq \Gamma(x') for x \le x'. Then the unique fixed point v satisfying v = Tv is increasing. If F(\cdot, y) is strictly increasing, so is v.
Proof

By the corollary of the CMThm, it suffices to show that Tf is increasing if f is increasing. Let x \le x':

    Tf(x) = \max_{y \in \Gamma(x)} \{ F(x, y) + \beta f(y) \}
          = F(x, y^*) + \beta f(y^*)    for some y^* \in \Gamma(x)
          \le F(x', y^*) + \beta f(y^*)  since y^* \in \Gamma(x) \subseteq \Gamma(x')
          \le \max_{y \in \Gamma(x')} \{ F(x', y) + \beta f(y) \} = Tf(x').

If F(\cdot, y) is strictly increasing,

    Tf(x) = F(x, y^*) + \beta f(y^*) < F(x', y^*) + \beta f(y^*) \le Tf(x').
Concavity

(or strict concavity) of v

Theorem. Assume that X is convex and \Gamma is concave, i.e. y \in \Gamma(x), y' \in \Gamma(x') implies that

    y^\theta \equiv \theta y' + (1 - \theta) y \in \Gamma(\theta x' + (1 - \theta) x) \equiv \Gamma(x^\theta)

for any x, x' \in X and \theta \in (0, 1). Finally, assume that F is concave in (x, y). Then the fixed point v satisfying v = Tv is concave in x. Moreover, if F(\cdot, y) is strictly concave, so is v.
Differentiability

- can't use the same strategy: the space of differentiable functions is not closed
- many envelope theorems
- formula: if h(x) is differentiable and y^* is interior, then h'(x) = f_x(x, y^*)
- right value... but is h differentiable?
- one answer (Demand Theory) relies on f.o.c.'s and assumes twice differentiability of f
- won't work for us since f = F(x, y) + \beta v(y) and we don't even know if f is once differentiable! going in circles
Benveniste and Scheinkman

First, a Lemma...

Lemma. Suppose v(x) is concave and that there exists w(x) such that w(x) \le v(x) in some neighborhood D of x_0, with v(x_0) = w(x_0), and w is differentiable at x_0 (w'(x_0) exists). Then v is differentiable at x_0 and v'(x_0) = w'(x_0).

Proof. Since v is concave, it has at least one subgradient p at x_0:

    w(x) - w(x_0) \le v(x) - v(x_0) \le p \cdot (x - x_0)

Thus a subgradient of v is also a subgradient of w. But w, being differentiable at x_0, has a unique subgradient there, equal to w'(x_0).
Benveniste and Scheinkman

Now a Theorem

Theorem. Suppose F is strictly concave and \Gamma is convex. If x_0 \in int(X) and g(x_0) \in int(\Gamma(x_0)), then the fixed point V of T is differentiable at x_0 and

    V'(x_0) = F_x(x_0, g(x_0)).

Proof. We know V is concave. Since x_0 \in int(X) and g(x_0) \in int(\Gamma(x_0)), we have g(x_0) \in int(\Gamma(x)) for x \in D, a neighborhood of x_0. Define

    W(x) = F(x, g(x_0)) + \beta V(g(x_0)).

Then W(x) \le V(x) on D, W(x_0) = V(x_0), and W'(x_0) = F_x(x_0, g(x_0)), so the result follows from the Lemma.
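The envelope formula V'(x) = F_x(x, g(x)) can be verified against a textbook closed form (an assumed standard example, not from these slides): for F(k, y) = \ln(k^\alpha - y), the growth model with log utility and Cobb-Douglas production, it is known that V(k) = a + b \ln k with b = \alpha / (1 - \alpha\beta) and policy g(k) = \alpha\beta k^\alpha.

```python
alpha, beta = 0.3, 0.95
b = alpha / (1 - alpha * beta)            # slope of V(k) = a + b*ln(k)
g = lambda k: alpha * beta * k**alpha     # known optimal policy
# F(k, y) = ln(k^alpha - y), so F_k(k, y) = alpha*k^(alpha-1) / (k^alpha - y)
F_k = lambda k, y: alpha * k**(alpha - 1) / (k**alpha - y)

for k in [0.5, 1.0, 2.0]:
    # Benveniste-Scheinkman: V'(k) = b/k should equal F_k(k, g(k))
    assert abs(b / k - F_k(k, g(k))) < 1e-9
```

Both sides reduce algebraically to \alpha / ((1 - \alpha\beta) k), so the check holds at every k > 0.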
Recursive Methods

Introduction to Dynamic Optimization
Outline: Today's Lecture

- finish off: theorem of the maximum; Bellman equation with bounded and continuous F
- differentiability of the value function
- application: neoclassical growth model
- homogeneous and unbounded returns, more applications
Our Favorite Metric Space

    S = \{ f : X \to R \mid f \text{ is continuous and } \|f\| \equiv \sup_{x \in X} |f(x)| < \infty \}

with metric \rho(f, g) = \|f - g\| \equiv \sup_{x \in X} |f(x) - g(x)|.

    (Tv)(x) = \max_{y \in \Gamma(x)} \{ F(x, y) + \beta v(y) \}

Assume that F is bounded and continuous and that \Gamma is continuous and has compact range.

Theorem 4.6. T maps the set S of continuous and bounded functions into itself. Moreover, T is a contraction.
Proof. That T maps the set of continuous and bounded functions into itself follows from the Theorem of the Maximum (we do this next). That T is a contraction follows from the Blackwell sufficient conditions.

Monotonicity: for f \ge v, letting g(x) attain the max for v,

    Tv(x) = \max_{y \in \Gamma(x)} \{ F(x, y) + \beta v(y) \}
          = F(x, g(x)) + \beta v(g(x))
          \le F(x, g(x)) + \beta f(g(x))
          \le \max_{y \in \Gamma(x)} \{ F(x, y) + \beta f(y) \} = Tf(x).

Discounting: for a > 0,

    T(v + a)(x) = \max_{y \in \Gamma(x)} \{ F(x, y) + \beta (v(y) + a) \}
                = \max_{y \in \Gamma(x)} \{ F(x, y) + \beta v(y) \} + \beta a = T(v)(x) + \beta a.
Theorem of the Maximum

Want T to map continuous functions into continuous functions:

    (Tv)(x) = \max_{y \in \Gamma(x)} \{ F(x, y) + \beta v(y) \}

Want to learn about the optimal policy of the RHS of the Bellman equation:

    G(x) = \arg\max_{y \in \Gamma(x)} \{ F(x, y) + \beta v(y) \}

First, continuity concepts for correspondences... then, a few example maximizations... finally, the Theorem of the Maximum.
Continuity Notions for Correspondences

Assume \Gamma is non-empty and compact valued (the set \Gamma(x) is non-empty and compact for all x \in X).

Upper Hemi Continuity (u.h.c.) at x: for any pair of sequences \{x_n\} and \{y_n\} with x_n \to x and y_n \in \Gamma(x_n), there exists a subsequence of \{y_n\} that converges to a point y \in \Gamma(x).

Lower Hemi Continuity (l.h.c.) at x: for any sequence \{x_n\} with x_n \to x and for every y \in \Gamma(x), there exists a sequence \{y_n\} with y_n \in \Gamma(x_n) such that y_n \to y.

Continuous at x: \Gamma is both upper and lower hemi continuous at x.
Max Examples

    h(x) = \max_{y \in \Gamma(x)} f(x, y)
    G(x) = \arg\max_{y \in \Gamma(x)} f(x, y)

Example 1: f(x, y) = xy; X = [-1, 1]; \Gamma(x) = X.

    h(x) = |x|
    G(x) = \{-1\} for x < 0;   [-1, 1] for x = 0;   \{1\} for x > 0
Example 2: f(x, y) = xy^2; X = [-1, 1]; \Gamma(x) = X.

    h(x) = \max\{0, x\}
    G(x) = \{0\} for x < 0;   [-1, 1] for x = 0;   \{-1, 1\} for x > 0
Theorem of the Maximum

Define:

    h(x) = \max_{y \in \Gamma(x)} f(x, y)
    G(x) = \arg\max_{y \in \Gamma(x)} f(x, y) = \{ y \in \Gamma(x) : h(x) = f(x, y) \}

Theorem 3.6 (Berge). Let X \subseteq R^l and Y \subseteq R^m. Let f : X \times Y \to R be continuous and \Gamma : X \to Y be compact-valued and continuous. Then h : X \to R is continuous and G : X \to Y is non-empty, compact-valued, and u.h.c.
lim max = max lim

Theorem 3.8. Suppose \{f_n(x, y)\} and f(x, y) are concave in y, \Gamma is convex and compact valued, and f_n \to f in the sup-norm (uniformly). Define

    g_n(x) = \arg\max_{y \in \Gamma(x)} f_n(x, y)
    g(x) = \arg\max_{y \in \Gamma(x)} f(x, y).

Then g_n(x) \to g(x) for all x (pointwise convergence); if X is compact, the convergence is uniform.
Uses of the Corollary of the CMThm: Monotonicity of v

Theorem 4.7. Assume that F(\cdot, y) is increasing and that \Gamma is increasing, i.e. \Gamma(x) \subseteq \Gamma(x') for x \le x'. Then the unique fixed point v satisfying v = Tv is increasing. If F(\cdot, y) is strictly increasing, so is v.
Proof

By the corollary of the CMThm, it suffices to show that Tf is increasing if f is increasing. Let x \le x':

    Tf(x) = \max_{y \in \Gamma(x)} \{ F(x, y) + \beta f(y) \}
          = F(x, y^*) + \beta f(y^*)    for some y^* \in \Gamma(x)
          \le F(x', y^*) + \beta f(y^*)  since y^* \in \Gamma(x) \subseteq \Gamma(x')
          \le \max_{y \in \Gamma(x')} \{ F(x', y) + \beta f(y) \} = Tf(x').

If F(\cdot, y) is strictly increasing,

    Tf(x) = F(x, y^*) + \beta f(y^*) < F(x', y^*) + \beta f(y^*) \le Tf(x').
Concavity

(or strict concavity) of v

Theorem 4.8. Assume that X is convex and \Gamma is concave, i.e. y \in \Gamma(x), y' \in \Gamma(x') implies that

    y^\theta \equiv \theta y' + (1 - \theta) y \in \Gamma(\theta x' + (1 - \theta) x) \equiv \Gamma(x^\theta)

for any x, x' \in X and \theta \in (0, 1). Finally, assume that F is concave in (x, y). Then the fixed point v satisfying v = Tv is concave in x. Moreover, if F(\cdot, y) is strictly concave, so is v.
Convergence of Policy Functions

- with concavity of F and convexity of \Gamma, the optimal policy correspondence G(x) is actually a continuous function g(x)
- since v_n \to v uniformly, g_n \to g (Theorem 3.8)
- we can use this to derive comparative statics
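Policy convergence shows up directly in value function iteration. A sketch (the cake-eating primitives on a small grid are illustrative assumptions, not from the slides): tracking the greedy policy g_n at each iterate v_n, the policy settles to a fixed selection after finitely many iterations on the grid.

```python
import math

beta, n_grid = 0.9, 6
X = list(range(n_grid))

def F(x, y):
    return math.sqrt(x - y)

def T_with_policy(f):
    """One Bellman update, returning both the new value and the greedy policy."""
    v, g = [], []
    for x in X:
        best_y = max(range(x + 1), key=lambda y: F(x, y) + beta * f[y])
        g.append(best_y)
        v.append(F(x, best_y) + beta * f[best_y])
    return v, g

v = [0.0] * n_grid
policies = []
for _ in range(200):
    v, g = T_with_policy(v)
    policies.append(tuple(g))

assert policies[-1] == policies[-2] == policies[-3]   # g_n has settled
```

On a finite grid the policy typically stops changing long before the values have fully converged, which is one practical payoff of the g_n \to g result.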
Differentiability

- can't use the same strategy as with monotonicity or concavity: the space of differentiable functions is not closed
- many envelope theorems imply differentiability of h:

    h(x) = \max_{y \in \Gamma(x)} f(x, y)

- formula: if h(x) is differentiable and there exists y^* \in int(\Gamma(x)), then h'(x) = f_x(x, y^*)
- ...but is h differentiable?
- one approach (e.g. Demand Theory) relies on smoothness of \Gamma and f (twice differentiability), using f.o.c.'s and the implicit function theorem
- won't work for us since f(x, y) = F(x, y) + \beta v(y) and we don't know if f is once differentiable yet! going in circles...
Benveniste and Scheinkman

First, a Lemma...

Lemma. Suppose v(x) is concave and that there exists w(x) such that w(x) \le v(x) in some neighborhood D of x_0, with v(x_0) = w(x_0), and w is differentiable at x_0 (w'(x_0) exists). Then v is differentiable at x_0 and v'(x_0) = w'(x_0).

Proof. Since v is concave, it has at least one subgradient p at x_0:

    w(x) - w(x_0) \le v(x) - v(x_0) \le p \cdot (x - x_0)

Thus a subgradient of v is also a subgradient of w. But w, being differentiable at x_0, has a unique subgradient there, equal to w'(x_0).
Benveniste and Scheinkman

Now a Theorem

Theorem. Suppose F is strictly concave and \Gamma is convex. If x_0 \in int(X) and g(x_0) \in int(\Gamma(x_0)), then the fixed point V of T is differentiable at x_0 and

    V'(x_0) = F_x(x_0, g(x_0)).

Proof. We know V is concave. Since x_0 \in int(X) and g(x_0) \in int(\Gamma(x_0)), we have g(x_0) \in int(\Gamma(x)) for x \in D, a neighborhood of x_0. Define

    W(x) = F(x, g(x_0)) + \beta V(g(x_0)).

Then W(x) \le V(x) on D, W(x_0) = V(x_0), and W'(x_0) = F_x(x_0, g(x_0)), so the result follows from the Lemma.
Recursive Methods
Outline: Today's Lecture

- "Anything goes": Boldrin-Montrucchio
- global stability: Liapunov functions
- linear dynamics
- local stability: linear approximation of Euler equations
Anything Goes

- treat the case X = [0, 1] \subset R for simplicity
- take any g(x) : [0, 1] \to [0, 1] that is twice continuously differentiable on [0, 1]: g'(x) and g''(x) exist and are bounded
- define

    W(x, y) = -\frac{1}{2} y^2 + y g(x) - \frac{L}{2} x^2

Lemma: W is strictly concave for large enough L.
Proof

    W(x, y) = -\frac{1}{2} y^2 + y g(x) - \frac{L}{2} x^2

    W_1 = y g'(x) - L x
    W_2 = -y + g(x)

    W_{11} = y g''(x) - L
    W_{22} = -1
    W_{12} = g'(x)

Thus W_{22} < 0, and W_{11} < 0 is satisfied if L \ge \max_x |g''(x)|. For the determinant,

    W_{11} W_{22} - W_{12} W_{21} = -(y g''(x) - L) - g'(x)^2 > 0  \iff  L > g'(x)^2 + y g''(x).

Then L > [\max_x |g'(x)|]^2 + \max_x |g''(x)| will do.
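The lemma's bound on L can be checked for a concrete g. A sketch (g(x) = 4x(1 - x), the logistic map on [0, 1], is an assumed example with max|g'| = 4 and |g''| = 8): with L above [max|g'|]^2 + max|g''|, the Hessian of W is negative definite at every grid point of the square.

```python
# g(x) = 4x(1-x): g'(x) = 4 - 8x (so max|g'| = 4), g''(x) = -8 (so |g''| = 8)
L = 4.0**2 + 8.0 + 1.0        # [max|g'|]^2 + max|g''|, plus a little slack
gp = lambda x: 4 - 8 * x      # g'
gpp = lambda x: -8.0          # g''

ok = True
for i in range(101):
    for j in range(101):
        x, y = i / 100, j / 100          # grid over [0,1] x [0,1]
        W11 = y * gpp(x) - L             # second derivatives of W
        W22 = -1.0
        W12 = gp(x)
        # negative definite: W11 < 0, W22 < 0, and positive determinant
        ok &= (W11 < 0) and (W22 < 0) and (W11 * W22 - W12**2 > 0)

assert ok
```

For this g the worst case of the determinant condition is at x = 0 or x = 1, where g'(x)^2 = 16, and the chosen L = 25 still leaves a positive margin.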
Decomposing W (in a concave way)

Define V(x) = W(x, g(x)) and F so that

    W(x, y) = F(x, y) + \beta V(y),   that is,   F(x, y) = W(x, y) - \beta V(y).

Lemma: V is strictly concave.

Proof: immediate, since W is strictly concave and X is convex. Computing the second derivative is useful anyway:

    V''(x) = g''(x) g(x) + g'(x)^2 - L

Since g \in [0, 1], clearly our bound on L implies V''(x) < 0.
Concavity of F

Lemma: F is concave for \beta \in [0, \bar{\beta}], for some \bar{\beta} > 0.

    F_{11}(x, y) = W_{11}(x, y) = y g''(x) - L
    F_{12}(x, y) = W_{12}(x, y) = g'(x)
    F_{22}(x, y) = W_{22} - \beta V'' = -1 - \beta [ g''(x) g(x) + g'(x)^2 - L ]

    F_{11} F_{22} - F_{12}^2 > 0
    (y g''(x) - L) \big( -1 - \beta [ g''(x) g(x) + g'(x)^2 - L ] \big) - g'(x)^2 > 0
... concavity of F

Let

    \eta_1(\beta) = \min_{x,y} (-F_{22})
    \eta_2(\beta) = \min_{x,y} (F_{11} F_{22} - F_{12}^2)
    \eta(\beta) = \min \{ \eta_1(\beta), \eta_2(\beta) \}

For \beta = 0, \eta(\beta) > 0. Since \eta is continuous (Theorem of the Maximum), there exists \bar{\beta} > 0 such that \eta(\beta) \ge 0 for all \beta \in [0, \bar{\beta}].
Monotonicity

Use

    W(x, y) = -\frac{1}{2} y^2 + y g(x) - \frac{L_1}{2} x^2 + L_2 x

- L_2 does not affect second derivatives
- claim: F is monotone for large enough L_2
More information1 Recursive Formulations
1 Recursive Formulations We wish to solve the following problem: subject to max {a t,x t} T β t r(x t,a t (1 a t Γ(x t (2 x t+1 = t(x t (3 x 0 given. (4 The horizon length, T, can be infinite (in fact,
More informationG Recitation 3: Ramsey Growth model with technological progress; discrete time dynamic programming and applications
G6215.1 - Recitation 3: Ramsey Growth model with technological progress; discrete time dynamic Contents 1 The Ramsey growth model with technological progress 2 1.1 A useful lemma..................................................
More informationECON 2010c Solution to Problem Set 1
ECON 200c Solution to Problem Set By the Teaching Fellows for ECON 200c Fall 204 Growth Model (a) Defining the constant κ as: κ = ln( αβ) + αβ αβ ln(αβ), the problem asks us to show that the following
More informationStability and Sensitivity of the Capacity in Continuous Channels. Malcolm Egan
Stability and Sensitivity of the Capacity in Continuous Channels Malcolm Egan Univ. Lyon, INSA Lyon, INRIA 2019 European School of Information Theory April 18, 2019 1 / 40 Capacity of Additive Noise Models
More informationMathematical Preliminaries for Microeconomics: Exercises
Mathematical Preliminaries for Microeconomics: Exercises Igor Letina 1 Universität Zürich Fall 2013 1 Based on exercises by Dennis Gärtner, Andreas Hefti and Nick Netzer. How to prove A B Direct proof
More information6.254 : Game Theory with Engineering Applications Lecture 7: Supermodular Games
6.254 : Game Theory with Engineering Applications Lecture 7: Asu Ozdaglar MIT February 25, 2010 1 Introduction Outline Uniqueness of a Pure Nash Equilibrium for Continuous Games Reading: Rosen J.B., Existence
More informationProblem Set 2: Proposed solutions Econ Fall Cesar E. Tamayo Department of Economics, Rutgers University
Problem Set 2: Proposed solutions Econ 504 - Fall 202 Cesar E. Tamayo ctamayo@econ.rutgers.edu Department of Economics, Rutgers University Simple optimal growth (Problems &2) Suppose that we modify slightly
More informationMetric Spaces. Exercises Fall 2017 Lecturer: Viveka Erlandsson. Written by M.van den Berg
Metric Spaces Exercises Fall 2017 Lecturer: Viveka Erlandsson Written by M.van den Berg School of Mathematics University of Bristol BS8 1TW Bristol, UK 1 Exercises. 1. Let X be a non-empty set, and suppose
More information1 Stochastic Dynamic Programming
1 Stochastic Dynamic Programming Formally, a stochastic dynamic program has the same components as a deterministic one; the only modification is to the state transition equation. When events in the future
More informationPreliminary draft only: please check for final version
ARE211, Fall2012 ANALYSIS6: TUE, SEP 11, 2012 PRINTED: AUGUST 22, 2012 (LEC# 6) Contents 1. Analysis (cont) 1 1.9. Continuous Functions 1 1.9.1. Weierstrass (extreme value) theorem 3 1.10. An inverse image
More informationConvex Analysis and Economic Theory AY Elementary properties of convex functions
Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory AY 2018 2019 Topic 6: Convex functions I 6.1 Elementary properties of convex functions We may occasionally
More informationNumerical Methods in Economics
Numerical Methods in Economics MIT Press, 1998 Chapter 12 Notes Numerical Dynamic Programming Kenneth L. Judd Hoover Institution November 15, 2002 1 Discrete-Time Dynamic Programming Objective: X: set
More informationUniform turnpike theorems for finite Markov decision processes
MATHEMATICS OF OPERATIONS RESEARCH Vol. 00, No. 0, Xxxxx 0000, pp. 000 000 issn 0364-765X eissn 1526-5471 00 0000 0001 INFORMS doi 10.1287/xxxx.0000.0000 c 0000 INFORMS Authors are encouraged to submit
More informationExtreme Abridgment of Boyd and Vandenberghe s Convex Optimization
Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The
More informationIntroduction to Dynamic Programming Lecture Notes
Introduction to Dynamic Programming Lecture Notes Klaus Neusser November 30, 2017 These notes are based on the books of Sargent (1987) and Stokey and Robert E. Lucas (1989). Department of Economics, University
More informationUNIVERSITY OF VIENNA
WORKING PAPERS Cycles and chaos in the one-sector growth model with elastic labor supply Gerhard Sorger May 2015 Working Paper No: 1505 DEPARTMENT OF ECONOMICS UNIVERSITY OF VIENNA All our working papers
More information1. Introduction Boundary estimates for the second derivatives of the solution to the Dirichlet problem for the Monge-Ampere equation
POINTWISE C 2,α ESTIMATES AT THE BOUNDARY FOR THE MONGE-AMPERE EQUATION O. SAVIN Abstract. We prove a localization property of boundary sections for solutions to the Monge-Ampere equation. As a consequence
More information3. Convex functions. basic properties and examples. operations that preserve convexity. the conjugate function. quasiconvex functions
3. Convex functions Convex Optimization Boyd & Vandenberghe basic properties and examples operations that preserve convexity the conjugate function quasiconvex functions log-concave and log-convex functions
More informationConvex Functions. Pontus Giselsson
Convex Functions Pontus Giselsson 1 Today s lecture lower semicontinuity, closure, convex hull convexity preserving operations precomposition with affine mapping infimal convolution image function supremum
More informationA Simple Proof of the Necessity. of the Transversality Condition
comments appreciated A Simple Proof of the Necessity of the Transversality Condition Takashi Kamihigashi RIEB Kobe University Rokkodai, Nada, Kobe 657-8501 Japan Phone/Fax: +81-78-803-7015 E-mail: tkamihig@rieb.kobe-u.ac.jp
More informationConvexity in R n. The following lemma will be needed in a while. Lemma 1 Let x E, u R n. If τ I(x, u), τ 0, define. f(x + τu) f(x). τ.
Convexity in R n Let E be a convex subset of R n. A function f : E (, ] is convex iff f(tx + (1 t)y) (1 t)f(x) + tf(y) x, y E, t [0, 1]. A similar definition holds in any vector space. A topology is needed
More informationDeterministic Dynamic Programming in Discrete Time: A Monotone Convergence Principle
Deterministic Dynamic Programming in Discrete Time: A Monotone Convergence Principle Takashi Kamihigashi Masayuki Yao March 30, 2015 Abstract We consider infinite-horizon deterministic dynamic programming
More informationLECTURE 2. Convexity and related notions. Last time: mutual information: definitions and properties. Lecture outline
LECTURE 2 Convexity and related notions Last time: Goals and mechanics of the class notation entropy: definitions and properties mutual information: definitions and properties Lecture outline Convexity
More informationLecture 6: Contraction mapping, inverse and implicit function theorems
Lecture 6: Contraction mapping, inverse and implicit function theorems 1 The contraction mapping theorem De nition 11 Let X be a metric space, with metric d If f : X! X and if there is a number 2 (0; 1)
More informationu xx + u yy = 0. (5.1)
Chapter 5 Laplace Equation The following equation is called Laplace equation in two independent variables x, y: The non-homogeneous problem u xx + u yy =. (5.1) u xx + u yy = F, (5.) where F is a function
More informationIntroduction to Recursive Methods
Chapter 1 Introduction to Recursive Methods These notes are targeted to advanced Master and Ph.D. students in economics. They can be of some use to researchers in macroeconomic theory. The material contained
More informationLecture 4: Dynamic Programming
Lecture 4: Dynamic Programming Fatih Guvenen January 10, 2016 Fatih Guvenen Lecture 4: Dynamic Programming January 10, 2016 1 / 30 Goal Solve V (k, z) =max c,k 0 u(c)+ E(V (k 0, z 0 ) z) c + k 0 =(1 +
More informationConvex analysis and profit/cost/support functions
Division of the Humanities and Social Sciences Convex analysis and profit/cost/support functions KC Border October 2004 Revised January 2009 Let A be a subset of R m Convex analysts may give one of two
More informationTHE INVERSE FUNCTION THEOREM
THE INVERSE FUNCTION THEOREM W. PATRICK HOOPER The implicit function theorem is the following result: Theorem 1. Let f be a C 1 function from a neighborhood of a point a R n into R n. Suppose A = Df(a)
More informationLecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University
Lecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University February 7, 2007 2 Contents 1 Metric Spaces 1 1.1 Basic definitions...........................
More informationMathematical Economics: Lecture 16
Mathematical Economics: Lecture 16 Yu Ren WISE, Xiamen University November 26, 2012 Outline 1 Chapter 21: Concave and Quasiconcave Functions New Section Chapter 21: Concave and Quasiconcave Functions Concave
More informationMathematical Appendix
Ichiro Obara UCLA September 27, 2012 Obara (UCLA) Mathematical Appendix September 27, 2012 1 / 31 Miscellaneous Results 1. Miscellaneous Results This first section lists some mathematical facts that were
More informationEC9A0: Pre-sessional Advanced Mathematics Course. Lecture Notes: Unconstrained Optimisation By Pablo F. Beker 1
EC9A0: Pre-sessional Advanced Mathematics Course Lecture Notes: Unconstrained Optimisation By Pablo F. Beker 1 1 Infimum and Supremum Definition 1. Fix a set Y R. A number α R is an upper bound of Y if
More informationg 2 (x) (1/3)M 1 = (1/3)(2/3)M.
COMPACTNESS If C R n is closed and bounded, then by B-W it is sequentially compact: any sequence of points in C has a subsequence converging to a point in C Conversely, any sequentially compact C R n is
More informationAdvanced Economic Growth: Lecture 21: Stochastic Dynamic Programming and Applications
Advanced Economic Growth: Lecture 21: Stochastic Dynamic Programming and Applications Daron Acemoglu MIT November 19, 2007 Daron Acemoglu (MIT) Advanced Growth Lecture 21 November 19, 2007 1 / 79 Stochastic
More informationRegularity for Poisson Equation
Regularity for Poisson Equation OcMountain Daylight Time. 4, 20 Intuitively, the solution u to the Poisson equation u= f () should have better regularity than the right hand side f. In particular one expects
More informationConstrained Optimization and Lagrangian Duality
CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may
More informationMeasure and Integration: Solutions of CW2
Measure and Integration: s of CW2 Fall 206 [G. Holzegel] December 9, 206 Problem of Sheet 5 a) Left (f n ) and (g n ) be sequences of integrable functions with f n (x) f (x) and g n (x) g (x) for almost
More informationStructural and Multidisciplinary Optimization. P. Duysinx and P. Tossings
Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be
More informationLecture 3: Hamilton-Jacobi-Bellman Equations. Distributional Macroeconomics. Benjamin Moll. Part II of ECON Harvard University, Spring
Lecture 3: Hamilton-Jacobi-Bellman Equations Distributional Macroeconomics Part II of ECON 2149 Benjamin Moll Harvard University, Spring 2018 1 Outline 1. Hamilton-Jacobi-Bellman equations in deterministic
More informationOn the Principle of Optimality for Nonstationary Deterministic Dynamic Programming
On the Principle of Optimality for Nonstationary Deterministic Dynamic Programming Takashi Kamihigashi January 15, 2007 Abstract This note studies a general nonstationary infinite-horizon optimization
More informationARE211, Fall 2005 CONTENTS. 5. Characteristics of Functions Surjective, Injective and Bijective functions. 5.2.
ARE211, Fall 2005 LECTURE #18: THU, NOV 3, 2005 PRINT DATE: NOVEMBER 22, 2005 (COMPSTAT2) CONTENTS 5. Characteristics of Functions. 1 5.1. Surjective, Injective and Bijective functions 1 5.2. Homotheticity
More informationLECTURE 15: COMPLETENESS AND CONVEXITY
LECTURE 15: COMPLETENESS AND CONVEXITY 1. The Hopf-Rinow Theorem Recall that a Riemannian manifold (M, g) is called geodesically complete if the maximal defining interval of any geodesic is R. On the other
More informationLecture Notes on Metric Spaces
Lecture Notes on Metric Spaces Math 117: Summer 2007 John Douglas Moore Our goal of these notes is to explain a few facts regarding metric spaces not included in the first few chapters of the text [1],
More informationRecursive Methods Recursive Methods Nr. 1
Nr. 1 Outline Today s Lecture Dynamic Programming under Uncertainty notation of sequence problem leave study of dynamics for next week Dynamic Recursive Games: Abreu-Pearce-Stachetti Application: today
More informationAnalysis Finite and Infinite Sets The Real Numbers The Cantor Set
Analysis Finite and Infinite Sets Definition. An initial segment is {n N n n 0 }. Definition. A finite set can be put into one-to-one correspondence with an initial segment. The empty set is also considered
More informationMathematical Analysis Outline. William G. Faris
Mathematical Analysis Outline William G. Faris January 8, 2007 2 Chapter 1 Metric spaces and continuous maps 1.1 Metric spaces A metric space is a set X together with a real distance function (x, x ) d(x,
More informationLecture 1: Dynamic Programming
Lecture 1: Dynamic Programming Fatih Guvenen November 2, 2016 Fatih Guvenen Lecture 1: Dynamic Programming November 2, 2016 1 / 32 Goal Solve V (k, z) =max c,k 0 u(c)+ E(V (k 0, z 0 ) z) c + k 0 =(1 +
More informationL p Spaces and Convexity
L p Spaces and Convexity These notes largely follow the treatments in Royden, Real Analysis, and Rudin, Real & Complex Analysis. 1. Convex functions Let I R be an interval. For I open, we say a function
More informationMonotone comparative statics Finite Data and GARP
Monotone comparative statics Finite Data and GARP Econ 2100 Fall 2017 Lecture 7, September 19 Problem Set 3 is due in Kelly s mailbox by 5pm today Outline 1 Comparative Statics Without Calculus 2 Supermodularity
More informationa = (a 1; :::a i )
1 Pro t maximization Behavioral assumption: an optimal set of actions is characterized by the conditions: max R(a 1 ; a ; :::a n ) C(a 1 ; a ; :::a n ) a = (a 1; :::a n) @R(a ) @a i = @C(a ) @a i The rm
More informationMath 328 Course Notes
Math 328 Course Notes Ian Robertson March 3, 2006 3 Properties of C[0, 1]: Sup-norm and Completeness In this chapter we are going to examine the vector space of all continuous functions defined on the
More informationMathematics II, course
Mathematics II, course 2013-2014 Juan Pablo Rincón Zapatero October 24, 2013 Summary: The course has four parts that we describe below. (I) Topology in Rn is a brief review of the main concepts and properties
More informationProblem List MATH 5143 Fall, 2013
Problem List MATH 5143 Fall, 2013 On any problem you may use the result of any previous problem (even if you were not able to do it) and any information given in class up to the moment the problem was
More informationBerge s Maximum Theorem
Berge s Maximum Theorem References: Acemoglu, Appendix A.6 Stokey-Lucas-Prescott, Section 3.3 Ok, Sections E.1-E.3 Claude Berge, Topological Spaces (1963), Chapter 6 Berge s Maximum Theorem So far, we
More informationIntroductory Analysis I Fall 2014 Homework #9 Due: Wednesday, November 19
Introductory Analysis I Fall 204 Homework #9 Due: Wednesday, November 9 Here is an easy one, to serve as warmup Assume M is a compact metric space and N is a metric space Assume that f n : M N for each
More informationMicroeconomics, Block I Part 1
Microeconomics, Block I Part 1 Piero Gottardi EUI Sept. 26, 2016 Piero Gottardi (EUI) Microeconomics, Block I Part 1 Sept. 26, 2016 1 / 53 Choice Theory Set of alternatives: X, with generic elements x,
More information