Optimization, Part 2 (November to December): mandatory for QEM-IMAEF, and for MMEF or MAEF students who have chosen it as an optional course.
Paris. Optimization, Part 2 (November to December): mandatory for QEM-IMAEF, and for MMEF or MAEF students who have chosen it as an optional course. Philippe Bich (Paris 1 Panthéon-Sorbonne and PSE). Paris, 2016.
Chapter 1: duality theory. Introductory example: the diet problem. A student wants to prepare a tasty afternoon snack. There are two choices of food, purchased from a bakery: brownies (50 cents each) or mini cheesecakes (80 cents each). To simplify, let us say each cake is made only of chocolate, sugar and cream cheese. Moreover, the student is on a diet, and he has decided on minimal requirements for each ingredient (sugar, ...). The following table summarizes everything (ingredient content per cake, and the minimal requirements):

              Chocolate   Sugar   Cream cheese   Unit cost
Brownie           3         2          2            50
Cheesecake        0         4          5            80
Requirements      6        10          8
Chapter 1: duality theory. Section 1: introduction. If x_1 is the amount of brownies and x_2 the amount of cheesecakes, the problem of the student is the following:

min 50x_1 + 80x_2

under the constraints 3x_1 ≥ 6, 2x_1 + 4x_2 ≥ 10, 2x_1 + 5x_2 ≥ 8, x_1 ≥ 0, x_2 ≥ 0.
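This linear program is small enough to check numerically. A minimal sketch with SciPy's linprog (assuming SciPy is available; linprog only handles ≤ constraints, so each ≥ row is multiplied by −1):

```python
from scipy.optimize import linprog

# Diet problem: min 50 x1 + 80 x2
# s.t. 3 x1 >= 6, 2 x1 + 4 x2 >= 10, 2 x1 + 5 x2 >= 8, x >= 0.
# linprog expects "<=" rows, so each ">=" constraint is negated.
c = [50, 80]
A_ub = [[-3, 0], [-2, -4], [-2, -5]]
b_ub = [-6, -10, -8]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)   # optimal diet: 2 brownies, 1.5 cheesecakes, cost 220
```

So the cheapest diet meeting all the requirements buys 2 brownies and 1.5 cheesecakes, for 220 cents.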
Chapter 1: duality theory. Section 1: introduction. Now, imagine a supplier proposes to sell the ingredients (chocolate, ...) directly to the bakery. He wants to decide the unit price of chocolate (y_1), of sugar (y_2) and of cream cheese (y_3), so as to maximize his revenue when he sells exactly what the bakery needs (6 units of chocolate, 10 of sugar, 8 of cream cheese). Thus, the supplier maximizes 6y_1 + 10y_2 + 8y_3. The constraints are that the price of each cake, computed from the ingredient prices, should be below what the student is ready to pay. That is, 3y_1 + 2y_2 + 2y_3 ≤ 50, 0y_1 + 4y_2 + 5y_3 ≤ 80, and finally y_1, y_2, y_3 ≥ 0.
Chapter 1: duality theory. Section 1: introduction.
Definition: a closed half-space H of R^n is a set that can be written H = {x = (x_1, ..., x_n) ∈ R^n : Σ_{i=1}^n a_i x_i ≤ b} for some reals a_1, ..., a_n (not all zero) and b.
Definition: a convex polyhedron is a finite intersection of closed half-spaces of R^n.
Definition: a face of a convex set C is a convex subset F ⊂ C such that if x, y, z are distinct points of C with x = λy + (1 − λ)z for some λ ∈ [0, 1] and x ∈ F, then y, z ∈ F.
Definition: edges of C are the faces that can be written [x, y] for some x, y in C (also called 1-dimensional faces).
Definition: vertices of C are the faces that can be written {x} for some x in C (also called 0-dimensional faces, or extreme points).
Remark: a convex polyhedron is closed, but may not be compact.
Chapter 1: duality theory. Section 2: geometry of the problem.
Theorem (compact polyhedron): every compact and convex polyhedron C of R^n can be written C = co{x_1, ..., x_k} for some points x_1, ..., x_k in R^n.
Chapter 1: duality theory. Section 2: geometry of the problem. Let C be a convex polyhedron.
Definition: a supporting hyperplane of C is an affine hyperplane H = {x = (x_1, ..., x_n) ∈ R^n : Σ_{i=1}^n a_i x_i = b} which intersects C in a face of C.
Proposition: if H = {x = (x_1, ..., x_n) ∈ R^n : Σ_{i=1}^n a_i x_i = b} is a supporting hyperplane of C, then either:
(1) for every x ∈ C, a·x ≥ b (in this case, we say that a = (a_1, ..., a_n) points inward C),
(2) or for every x ∈ C, a·x ≤ b (in this case, we say that a = (a_1, ..., a_n) points outward C).
Chapter 1: duality theory. Section 2: geometry of the problem.
Theorem: consider the problem max_{x = (x_1, ..., x_n) ∈ C} l·x, where C is a convex polyhedron of R^n and l = (l_1, ..., l_n) ∈ R^n. Then:
(1) either this problem has no solution, and the value of the problem is +∞,
(2) or the set of solutions is a face F of C. In this last case, there exists b ∈ R such that H = {x = (x_1, ..., x_n) ∈ R^n : Σ_{i=1}^n l_i x_i = b} is a supporting hyperplane of C, and l = (l_1, ..., l_n) points outward C.
In particular, if C is a compact and convex polyhedron, that is C = co{x_1, ..., x_k}, then the set of solutions is nonempty and is a face of C.
Chapter 1: duality theory. Section 2: geometry of the problem.
Example 1: maximize x_1 + x_2 such that x_1 + 2x_2 ≤ 4, 4x_1 + 2x_2 ≤ 12, −x_1 + x_2 ≤ 1, x_1 ≥ 0 and x_2 ≥ 0.
Example 2: maximize x_1 + x_2 such that x_1 + x_2 ≤ 4, 4x_1 + 2x_2 ≤ 12, −x_1 + x_2 ≤ 1, x_1 ≥ 0 and x_2 ≥ 0.
Chapter 1: duality theory. Section 3: terminology.
Definition: an optimization problem is unfeasible if its constraint set (the set of points satisfying all the constraints) is empty.
Chapter 1: duality theory. Section 4: duality theorems.
Theorem: let A be an m × n matrix, b = (b_1, ..., b_m) and c = (c_1, ..., c_n). Consider

(P)  max Σ_{j=1}^n c_j x_j  under the constraints  Σ_{j=1}^n a_{ij} x_j ≤ b_i (i = 1, ..., m) and x_j ≥ 0 (j = 1, ..., n),

which we will call the primal problem, and

(Q)  min Σ_{i=1}^m b_i y_i  under the constraints  Σ_{i=1}^m a_{ij} y_i ≥ c_j (j = 1, ..., n) and y_i ≥ 0 (i = 1, ..., m),

which we will call the dual problem.
Weak duality property: if x̄ = (x̄_1, ..., x̄_n) is a feasible point of (P) and ȳ = (ȳ_1, ..., ȳ_m) is a feasible point of (Q), then Σ_{j=1}^n c_j x̄_j ≤ Σ_{i=1}^m b_i ȳ_i.
Strong duality property (admitted): if one of the two problems has a finite optimal solution, then so does the other, and Val(P) = Val(Q).
Unboundedness property: if one of the two problems is unbounded, then the other one is unfeasible.
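These properties can be illustrated on the diet problem of Section 1: the student's cost-minimization and the supplier's revenue-maximization are a primal/dual pair, so their optimal values should coincide. A numerical sketch, assuming SciPy is available (linprog only minimizes, so the supplier's problem is run as the minimization of minus the revenue):

```python
from scipy.optimize import linprog

# Student (min cost): min 50 x1 + 80 x2  s.t. 3x1 >= 6, 2x1 + 4x2 >= 10,
# 2x1 + 5x2 >= 8, x >= 0   (">=" rows are negated for linprog).
primal = linprog([50, 80],
                 A_ub=[[-3, 0], [-2, -4], [-2, -5]], b_ub=[-6, -10, -8],
                 bounds=[(0, None)] * 2)
# Supplier (max revenue): max 6 y1 + 10 y2 + 8 y3  s.t. 3y1 + 2y2 + 2y3 <= 50,
# 4y2 + 5y3 <= 80, y >= 0   (maximize by minimizing the negated objective).
dual = linprog([-6, -10, -8],
               A_ub=[[3, 2, 2], [0, 4, 5]], b_ub=[50, 80],
               bounds=[(0, None)] * 3)
val_P, val_Q = primal.fun, -dual.fun
print(val_P, val_Q)   # strong duality: both values equal 220
```

The supplier's best prices are (y_1, y_2, y_3) = (10/3, 20, 0), and his revenue 220 exactly matches the student's minimal cost.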
Chapter 2: Dynamic programming. Finite horizon case. You have a cake of length 1. Each day from today, you can consume c_t. The more you consume on a given day, the happier you are that day. But you would like to have some cake left for tomorrow and the following days... hence a trade-off. Mathematically...
Chapter 2: Dynamic programming. You have to share 1 unit of good through time: let x_t be the share at time t, so that Σ_{i=0}^{+∞} x_i = 1. Let C be the set of sequences (x_n) satisfying this condition. Consider the following maximization problem:

max_{(x_n) ∈ C} Σ_{i=0}^{+∞} β^i u(x_i).

Here β ∈ ]0, 1[ is a discount factor, and u is a bounded and continuous utility function. Existence of a solution? Computation of the value?
Chapter 2: Dynamic programming: finite horizon problem.
Problem: maximize with respect to a_0 ∈ R, ..., a_{T−1} ∈ R the function

V(s, a) = (Σ_{t=0}^{T−1} β^t f_t(s_t, a_t)) + β^T v_T(s_T)

under the constraints s_{t+1} = g_t(s_t, a_t) for t = 0, ..., T−1, s_0 given.
Here β is the discount factor, f_t(s_t, a_t) the payoff at t, and v_T(s_T) the terminal payoff at T. g_t describes the law of motion (how to pass from s_t to s_{t+1}). s = (s_0, s_1, ..., s_T) is the state variable: it describes the system, e.g. s_t = size of the cake at t. a = (a_0, a_1, ..., a_{T−1}) is the control variable (or decision variable), e.g. a_t = share of the cake you choose at date t.
Chapter 2: Dynamic programming: finite horizon problem. To simplify the problem, we discount the constraints:
Problem: maximize with respect to a_0 ∈ R, ..., a_{T−1} ∈ R the function

V(s, a) = (Σ_{t=0}^{T−1} β^t f_t(s_t, a_t)) + β^T v_T(s_T)

under the constraints β^{t+1}(s_{t+1} − g_t(s_t, a_t)) = 0 for t = 0, ..., T−1.
Then we write the first-order necessary conditions, where λ_1, ..., λ_T are the multipliers associated to the constraints t = 0, ..., T−1.
Chapter 2: Dynamic programming: finite horizon problem. If the f_t, g_t are C^1, at a solution the FOC give:
(2) Euler equation, t = 0, ..., T−1:  ∂f_t/∂a_t (s_t, a_t) + β λ_{t+1} ∂g_t/∂a_t (s_t, a_t) = 0
(3) t = 1, ..., T−1:  ∂f_t/∂s_t (s_t, a_t) + β λ_{t+1} ∂g_t/∂s_t (s_t, a_t) = λ_t
(4) v_T′(s_T) = λ_T
(5) t = 0, ..., T−1:  s_{t+1} = g_t(s_t, a_t)
Chapter 2: Dynamic programming: finite horizon problem; INTERPRETATION.
(2) Euler equation, t = 0, ..., T−1:  ∂f_t/∂a_t (s_t, a_t) + β λ_{t+1} ∂g_t/∂a_t (s_t, a_t) = 0.
Interpretation: suppose one adds 1 unit to a_t (= share of the cake consumed at t = today). The positive effect on today's payoff is approximately measured by ∂f_t/∂a_t (s_t, a_t). The negative effect on utility tomorrow (because less cake!), expressed in today's terms, is approximately measured by −β λ_{t+1} ∂g_t/∂a_t (s_t, a_t), if λ_{t+1} is interpreted as the shadow price of cake tomorrow. At an optimum the two effects compensate.
Chapter 2: Dynamic programming: finite horizon problem: INTERPRETATION.
(3) t = 1, ..., T−1:  ∂f_t/∂s_t (s_t, a_t) + β λ_{t+1} ∂g_t/∂s_t (s_t, a_t) = λ_t.
Interpretation: suppose one adds 1 unit to the size of the cake s_t today, i.e. at t. The value of this additional unit can be measured through the shadow price λ_t of cake at t, which gives an increase of λ_t. It can also be measured as the additional payoff today plus the discounted additional value tomorrow, that is ∂f_t/∂s_t (s_t, a_t) + β λ_{t+1} ∂g_t/∂s_t (s_t, a_t). Equation (3) says that the two measurements coincide.
Chapter 2: Dynamic programming: finite horizon problem: INTERPRETATION.
(4) v_T′(s_T) = λ_T.
Interpretation: suppose one adds 1 unit to the size of the cake s_T at the final date T. The value of this additional unit can be measured through the shadow price λ_T of cake at T, or through the additional terminal payoff, that is v_T′(s_T). Equation (4) says that the two coincide.
Chapter 2: Dynamic programming: finite horizon problem: Example 1.
Consider a consumer living for two periods t = 0, 1. He derives utility U : [0, +∞[ → [0, +∞[, assumed C^1, from consuming a good c at each period. The initial endowment of the good is w. The consumer can borrow and lend intertemporally at an interest rate r. Suppose the intertemporal utility function V is separable and stationary, so that V(c_0, c_1) = U(c_0) + βU(c_1), where β stands for the discount factor. Find the optimal consumption across both periods. Assuming that U(·) is strictly concave, find a necessary and sufficient condition for c_0 > c_1 at the optimum.
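A numerical sketch of this exercise for one concrete specification (the choice U = log and the values of w, β, r are assumptions for illustration, not part of the exercise). The optimum is characterized by the Euler equation U′(c_0) = β(1 + r) U′(c_1) together with the budget constraint c_0 + c_1/(1 + r) = w, and can be found by bisection:

```python
# Two-period consumption with U(c) = log(c) (assumed for the sketch).
# Budget: c0 + c1/(1+r) = w, hence c1 = (1+r)*(w - c0).
# The optimum solves the Euler equation U'(c0) = beta*(1+r)*U'(c1).
w, beta, r = 1.0, 0.95, 0.02

def euler_gap(c0):
    c1 = (1 + r) * (w - c0)
    return 1.0 / c0 - beta * (1 + r) / c1   # U'(c0) - beta*(1+r)*U'(c1)

lo, hi = 1e-9, w - 1e-9       # euler_gap is decreasing: +inf near 0, -inf near w
for _ in range(200):          # bisection
    mid = 0.5 * (lo + hi)
    if euler_gap(mid) > 0:
        lo = mid
    else:
        hi = mid
c0 = 0.5 * (lo + hi)
c1 = (1 + r) * (w - c0)
print(c0, c1, c0 > c1)
```

With log utility the Euler equation gives c_1/c_0 = β(1 + r), so c_0 > c_1 at the optimum if and only if β(1 + r) < 1, which is the condition the exercise asks for (here β(1 + r) = 0.969 < 1, so consumption is decreasing).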
Chapter 3: Dynamic programming: infinite horizon problem.
Problem: maximize with respect to a_0 ∈ R, a_1 ∈ R, ... the function

V(s, a) = Σ_{t=0}^{+∞} β^t f_t(s_t, a_t)

under the constraints s_{t+1} = g_t(s_t, a_t) for every t ≥ 0, s_0 given.
To ensure convergence, assume β ∈ ]0, 1[ (discount factor) and that each f_t is bounded.
Chapter 3: Dynamic programming: infinite horizon problem. To solve it, one method: write the FOC between two periods t and t + 1. This gives, in general, the optimal solutions a. We shall see another method, based on the Bellman principle.
Chapter 4: Dynamic programming: Bellman principle. Recall the previous problem:
Chapter 4: Dynamic programming: Bellman principle.
Problem: maximize with respect to a_0 ∈ A, ..., a_{T−1} ∈ A (where A is the set of actions) the function

V(s, a) = (Σ_{t=0}^{T−1} β^t f_t(s_t, a_t)) + β^T v_T(s_T)

under the constraints s_{t+1} = g_t(s_t, a_t) for t = 0, ..., T−1, s_0 given.
To avoid technical cases, assume each f_t is bounded. Recall we maximize with respect to actions for which there exist states satisfying the feasibility conditions above.
Chapter 4: Dynamic programming: Bellman principle. For every state s ∈ S (set of states) and time t ∈ {0, 1, ..., T−1}, let us define

V_t(s) = max (Σ_{k=t}^{T−1} β^{k−t} f_k(s_k, a_k)) + β^{T−t} v_T(s_T),

where the maximization is with respect to a_t, a_{t+1}, ..., a_{T−1} for which there exist s_{t+1}, ..., s_T such that s_{k+1} = g_k(s_k, a_k) for k = t, ..., T−1, s_t = s given.
In particular, we are looking for V_0(s_0). Idea:
1. V_{T−1}(s) is easy to compute for every s ∈ S (by convention, V_{T−1}(s) = −∞ if the problem at s is not feasible).
2. Then we can compute V_{T−2}(·) from V_{T−1}(·), and V_{T−3}(·) from V_{T−2}(·), etc., by backward induction, thanks to the following Bellman principle:
Chapter 4: Dynamic programming: Bellman principle.
Bellman Principle: for every time t ∈ {1, 2, ..., T−1} and every state s ∈ S (at time t−1), one has:

V_{t−1}(s) = max_{a_{t−1} ∈ A} { f_{t−1}(s, a_{t−1}) + β V_t(g_{t−1}(s, a_{t−1})) }.

Remark also that V_{T−1}(s) = max_{a_{T−1} ∈ A} { f_{T−1}(s, a_{T−1}) + β v_T(g_{T−1}(s, a_{T−1})) }. Proof.
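The principle can be sanity-checked numerically: on a small finite problem, V_0(s_0) computed by the backward recursion must coincide with a brute-force maximization over all action sequences. A sketch with made-up primitives (the payoffs f_t, transitions g_t and terminal payoff v_T below are random, chosen only for the test):

```python
import itertools, random

# Small random finite-horizon problem: states 0..S-1, actions 0..A-1.
random.seed(1)
T, S, A, beta = 4, 5, 3, 0.9
f = [[[random.random() for a in range(A)] for s in range(S)] for t in range(T)]
g = [[[random.randrange(S) for a in range(A)] for s in range(S)] for t in range(T)]
vT = [random.random() for s in range(S)]

# Backward induction via the Bellman principle:
# V_t(s) = max_a f_t(s,a) + beta * V_{t+1}(g_t(s,a)), starting from V_T = vT.
V = list(vT)
for t in range(T - 1, -1, -1):
    V = [max(f[t][s][a] + beta * V[g[t][s][a]] for a in range(A)) for s in range(S)]

# Brute force: evaluate every action sequence directly.
def payoff(s0, plan):
    s, total = s0, 0.0
    for t, a in enumerate(plan):
        total += beta ** t * f[t][s][a]
        s = g[t][s][a]
    return total + beta ** T * vT[s]

s0 = 0
brute = max(payoff(s0, plan) for plan in itertools.product(range(A), repeat=T))
print(V[s0], brute)   # the two values agree
```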
Chapter 4: Dynamic programming: Bellman principle. Consider

max_{a_0, a_1, a_2, a_3} Σ_{t=0}^{3} (10 s_t − 0.1 a_t²)

under the constraints s_{t+1} = s_t + a_t, s_0 = 0, a_t ≥ 0. Here a_t is the investment at t and s_t the capital stock at t. We use the Bellman principle: we begin by solving the problem at the final date, and we do backward induction.
At t = 3: given s_3, we solve max_{a_3 ≥ 0} (10 s_3 − 0.1 a_3²). This gives a_3 = 0 and v_3(s_3) = 10 s_3, the optimal value at t = 3 given s_3.
Chapter 4: Dynamic programming: Bellman principle.
At t = 2: given s_2, we solve max_{a_2 ≥ 0} 10 s_2 − 0.1 a_2² + v_3(s_2 + a_2) = 10 s_2 − 0.1 a_2² + 10(s_2 + a_2). This gives a_2 = 50 and v_2(s_2) = 20 s_2 + 250, the optimal value at t = 2 given s_2.
At t = 1: given s_1, we solve max_{a_1 ≥ 0} 10 s_1 − 0.1 a_1² + v_2(s_1 + a_1) = 10 s_1 − 0.1 a_1² + 20(s_1 + a_1) + 250. This gives a_1 = 100 and v_1(s_1) = 30 s_1 + 1250, the optimal value at t = 1 given s_1.
At t = 0: given s_0, we solve max_{a_0 ≥ 0} 10 s_0 − 0.1 a_0² + v_1(s_0 + a_0) = 10 s_0 − 0.1 a_0² + 30(s_0 + a_0) + 1250. This gives a_0 = 150 and, since s_0 = 0, a value v_0(s_0) = 3500.
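Since every value function in this example is affine, v_t(s) = p_t s + q_t, the backward induction can be reproduced exactly by a recursion on the coefficients (a sketch; the (p, q) representation is just a convenient way to store the v_t computed above):

```python
# Backward induction for  max  sum_{t=0}^{3} (10 s_t - 0.1 a_t^2)
# subject to s_{t+1} = s_t + a_t, s_0 = 0, a_t >= 0 (no discounting here).
# Each value function is affine, v_t(s) = p*s + q, so the Bellman recursion
# can be carried out exactly on the coefficients (p, q).
r, cost = 10.0, 0.1
p, q = r, 0.0            # v_3(s) = 10 s  (a_3 = 0 is optimal at the last date)
actions = [0.0]          # a_3
for t in (2, 1, 0):
    a = p / (2 * cost)   # FOC of  r s - cost a^2 + p (s + a) + q :  -2 cost a + p = 0
    actions.append(a)
    q = p * p / (4 * cost) + q
    p = r + p
actions.reverse()        # optimal plan (a_0, a_1, a_2, a_3)
value = q                # v_0(s_0) with s_0 = 0
print(actions, value)    # recovers (150, 100, 50, 0) and 3500, up to rounding
```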
Chapter 5: Dynamic programming. Section 1: Bellman method with infinite horizon and stationary assumptions.
Problem: assume f is a bounded function (to simplify), and β ∈ ]0, 1[. For an initial state s_0 ∈ S (set of states), denote V(s_0) the value of the following problem, where the variable is a sequence a = (a_t)_{t≥0} in A (set of actions):

V(s_0) = sup_{(a_k)_{k≥0}, (s_k)_{k≥0}} Σ_{t=0}^{+∞} β^t f(s_t, a_t)

under the constraints s_{t+1} = g(s_t, a_t) for every t ≥ 0, s_0 given.
Chapter 5: Dynamic programming. Section 1: Bellman method with infinite horizon and stationary assumptions.
Infinite horizon Bellman Principle: for every state s ∈ S, one has V(s) = sup_{a ∈ A} { f(s, a) + β V(g(s, a)) }. Proof.
Chapter 5: Dynamic programming. Section 2: the fixed-point method.
The fixed-point method: for every function V(·) from S to R, define B(V)(s) = max_{a ∈ A} { f(s, a) + β V(g(s, a)) }. By the infinite horizon Bellman principle, B(V) = V: the value function is a fixed point of the operator B. Thus, we now raise the following problem: find a method to prove that some operator B (i.e. a function that associates to a function another function!) admits a fixed point, and to construct it.
Chapter 5: Dynamic programming. Section 3: the Banach-Picard fixed-point theorem.
A norm ‖·‖ on E, a real vector space, is a map from E to R_+ such that: for every x ∈ E, ‖x‖ = 0 if and only if x = 0; for every x ∈ E and every t ∈ R, ‖tx‖ = |t| ‖x‖; for every (x, y) ∈ E × E, ‖x + y‖ ≤ ‖x‖ + ‖y‖.
Chapter 5: Dynamic programming. Section 3: the Banach-Picard fixed-point theorem.
The space E endowed with a norm ‖·‖ is said to be a Banach space if it is complete, i.e. every Cauchy sequence of E converges.
Example 1: every finite-dimensional vector space endowed with any norm is a Banach space.
Example 2: the norm ‖·‖_p on the sequence space ℓ^p, p = 1, ..., +∞.
Example 3: the norm ‖·‖_∞ on the set of continuous functions from some compact set K to R.
Proof for Example 3 (important).
Chapter 5: Dynamic programming. Section 3: the Banach-Picard fixed-point theorem.
Definition: a function f : E → E is said to be contracting if there exists k ∈ ]0, 1[ such that for every (x, y) ∈ E × E, ‖f(x) − f(y)‖ ≤ k ‖x − y‖.
Theorem (Banach-Picard): let (E, ‖·‖) be a Banach space and f : E → E a contracting function. Then:
(i) there exists a fixed point x̄ ∈ E of f, that is f(x̄) = x̄;
(ii) for every fixed x_0 ∈ E, the sequence defined inductively by x_{n+1} = f(x_n) converges to x̄.
Proof.
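A classic illustration in (R, |·|), as a pure-Python sketch: iterating f(x) = cos x. On the interval [cos 1, 1], which the iterates enter after one step, |f′(x)| = |sin x| ≤ sin 1 < 1, so f is contracting there and the Picard sequence converges to the unique fixed point:

```python
import math

# Picard iteration x_{n+1} = cos(x_n). After one step the iterates lie in
# [cos 1, 1], where |cos'| = |sin| <= sin(1) < 1, so cos is contracting
# there: the sequence converges to the unique solution of x = cos(x)
# (approximately 0.739085).
x = 1.0
for _ in range(200):
    x = math.cos(x)
print(x)
```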
Chapter 5: Dynamic programming. Section 3: the Banach-Picard fixed-point theorem.
Theorem: the Banach-Picard theorem remains true if we only assume that some iterate f^k of f is contracting, for some k ≥ 1. Proof.
Chapter 5: Dynamic programming. Section 4: Blackwell theorem.
Theorem: let B(X), endowed with ‖·‖_∞, be the set of bounded functions from some nonempty set X to R. Let L be a closed (for the norm) vector subspace of B(X) containing every constant function. Let T : L → L be such that:
i) T is increasing, in the sense that f ≤ g implies T(f) ≤ T(g);
ii) there exists β ∈ ]0, 1[ such that for every constant function c ≥ 0, T(f + c) ≤ T(f) + β·c for every f ∈ L.
Then T admits a fixed point.
Here f ≤ g means f(x) ≤ g(x) for every x, and similarly for T(f) ≤ T(g).
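Conditions i) and ii) are exactly what the proof uses to show that T is a β-contraction for ‖·‖_∞, so that Banach-Picard applies. This contraction inequality can be checked numerically on a Bellman-type operator (a sketch; the finite state space and the random payoffs and transitions below are made up for the test):

```python
import random

# A Bellman-type operator on a finite state space S = {0,...,N-1}:
# T(V)(s) = max_a [ f(s,a) + beta * V(g(s,a)) ].
# It is increasing and satisfies T(V + c) = T(V) + beta*c for constant c,
# i.e. Blackwell's conditions, so it should be a beta-contraction in sup norm.
N, beta = 20, 0.9
random.seed(0)
f = [[random.random() for a in range(N)] for s in range(N)]      # payoffs (made up)
g = [[random.randrange(N) for a in range(N)] for s in range(N)]  # transitions (made up)

def T(V):
    return [max(f[s][a] + beta * V[g[s][a]] for a in range(N)) for s in range(N)]

def sup_dist(V, W):
    return max(abs(v - w) for v, w in zip(V, W))

# Check |T(V) - T(W)|_inf <= beta * |V - W|_inf on random pairs V, W.
ok = True
for _ in range(100):
    V = [random.uniform(-5, 5) for _ in range(N)]
    W = [random.uniform(-5, 5) for _ in range(N)]
    ok = ok and sup_dist(T(V), T(W)) <= beta * sup_dist(V, W) + 1e-12
print(ok)
```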
Chapter 5: Dynamic programming. Section 6: existence of a solution of the Bellman equation.
The equation B(V)(s) = max_{a ∈ A} { f(s, a) + β V(g(s, a)) } is called the Bellman equation.
Theorem (Existence): assume S (state space) and A (action space) are compact, and that f and g are continuous. Let L be the space of continuous functions from S to R, and T = B the Bellman operator. Then T satisfies the assumptions of Blackwell's theorem. In particular, there exists V ∈ L such that B(V) = V.
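Combining this with Banach-Picard gives a constructive method: iterate B from any starting function and the iterates converge to the fixed point (value function iteration). A discretized cake-eating sketch, whose grid, payoff √a and parameter β = 0.9 are assumptions for illustration:

```python
import math

# Value function iteration for a discretized cake-eating problem:
# state s = remaining cake (grid 0..N), action a = amount eaten (0 <= a <= s),
# payoff f(s, a) = sqrt(a), transition g(s, a) = s - a, beta = 0.9.
# The Bellman operator B is a beta-contraction (Blackwell), so iterating it
# from V = 0 converges geometrically to the fixed point V* = B(V*).
N, beta = 50, 0.9

def B(V):
    return [max(math.sqrt(a) + beta * V[s - a] for a in range(s + 1))
            for s in range(N + 1)]

V = [0.0] * (N + 1)
for _ in range(400):
    V = B(V)
residual = max(abs(bv - v) for bv, v in zip(B(V), V))
print(residual)   # sup-norm Bellman residual, essentially zero
```

The geometric convergence rate is β per iteration, so after n iterations the error is at most β^n times the initial one; here 0.9^400 makes the residual numerically zero.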
74 Chapter 5: Dynamic programming. Section 6: Existence of a solution of the Bellman equation. What can we do with a solution V of the Bellman equation? Let V be a solution of B(V) = V. Do we have, for every state s_0 ∈ S, V(s_0) = sup_{(a_t),(s_t)} Σ_{t=0}^{+∞} β^t f(s_t, a_t) under the constraints s_{t+1} = g(s_t, a_t) for every t ≥ 0, s_0 given? Assume f is continuous and S and A are compact.
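On a finite toy problem (all data hypothetical) the answer can be checked numerically: the policy that is greedy with respect to the fixed point V, a(s) ∈ argmax_a { f(s, a) + βV(g(s, a)) }, generates a trajectory whose discounted reward recovers V(s_0) up to the truncation error of a finite horizon.

```python
import numpy as np

# Greedy policy from the fixed point V, simulated from s0 = 0.
beta = 0.9
rng = np.random.default_rng(1)
n_states, n_actions = 6, 4
reward = rng.random((n_states, n_actions))                     # f(s, a)
transition = rng.integers(0, n_states, (n_states, n_actions))  # g(s, a)

def B(V):
    return (reward + beta * V[transition]).max(axis=1)

V = np.zeros(n_states)
for _ in range(1000):                     # value iteration to (near) fixed point
    V = B(V)

policy = (reward + beta * V[transition]).argmax(axis=1)   # greedy a(s)

s, total, discount = 0, 0.0, 1.0          # start from s0 = 0
for t in range(600):                      # tail beyond t=600 is negligible
    a = policy[s]
    total += discount * reward[s, a]
    discount *= beta
    s = transition[s, a]

assert abs(total - V[0]) < 1e-6           # discounted sum matches V(s0)
```

Unrolling V(s) = f(s, a(s)) + βV(g(s, a(s))) along the greedy trajectory gives V(s_0) = Σ_{t<T} β^t f(s_t, a_t) + β^T V(s_T), so the truncated sum differs from V(s_0) by at most β^T sup|V|.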