CHAPTER 3 THE MAXIMUM PRINCIPLE: MIXED INEQUALITY CONSTRAINTS


THE MAXIMUM PRINCIPLE: MIXED INEQUALITY CONSTRAINTS

Mixed inequality constraints: inequality constraints involving control and possibly state variables. Examples: g(u,t) ≥ 0; g(x,u,t) ≥ 0. Pure inequality constraints of the type h(x,t) ≥ 0, i.e., involving only state variables, will be treated in Chapter 4.

3.1 A MAXIMUM PRINCIPLE FOR PROBLEMS WITH MIXED INEQUALITY CONSTRAINTS

State equation:

ẋ = f(x,u,t), x(0) = x_0, (1)

where x(t) ∈ E^n and u(t) ∈ E^m, and f : E^n × E^m × E^1 → E^n is assumed to be continuously differentiable.

PROBLEMS WITH MIXED INEQUALITY CONSTRAINTS CONT.

Objective function:

max { J = ∫₀ᵀ F(x,u,t) dt + S[x(T),T] }, (2)

where F : E^n × E^m × E^1 → E^1 and S : E^n × E^1 → E^1 are continuously differentiable functions and T denotes the terminal time. For each t ∈ [0,T], u(t) is admissible if it is piecewise continuous and satisfies the mixed constraints

g(x,u,t) ≥ 0, t ∈ [0,T], (3)

where g : E^n × E^m × E^1 → E^q is continuously differentiable.

TERMINAL STATE

The terminal state is constrained by the following inequality and equality constraints:

a(x(T),T) ≥ 0, (4)
b(x(T),T) = 0, (5)

where a : E^n × E^1 → E^{l_a} and b : E^n × E^1 → E^{l_b} are continuously differentiable.

SPECIAL CASES

An interesting case of the terminal inequality constraints is

x(T) ∈ Y ∩ X, (6)

where Y is a convex set and X is the reachable set from the initial state x_0:

X = {x(T) : x(T) is obtained by an admissible control u and (1)}.

Remarks:
1. The constraint (6) does not depend explicitly on T.
2. The feasible set defined by (4) and (5) need not be convex.
3. (6) may not be expressible by a simple set of inequalities.

FULL RANK CONDITIONS OR CONSTRAINT QUALIFICATIONS

rank [∂g/∂u, diag(g)] = q

holds for all arguments x(t), u(t), t, and

rank [ ∂a/∂x  diag(a) ]
     [ ∂b/∂x     0    ]  = l_a + l_b

holds for all possible values of x(T) and T.

The first condition means that the gradients of the active constraints in (3) with respect to u are linearly independent. The second means that the gradients of the equality constraints (5) and of the active inequality constraints in (4) are linearly independent.

HAMILTONIAN FUNCTION

The Hamiltonian function H : E^n × E^m × E^n × E^1 → E^1 is

H[x,u,λ,t] := F(x,u,t) + λf(x,u,t), (7)

where λ ∈ E^n (a row vector) is called the adjoint vector or the costate vector. Recall that λ provides the marginal valuation of increases in x.

LAGRANGIAN FUNCTION

The Lagrangian function L : E^n × E^m × E^n × E^q × E^1 → E^1 is defined as

L[x,u,λ,µ,t] := H(x,u,λ,t) + µg(x,u,t), (8)

where µ ∈ E^q is a row vector whose components are called Lagrange multipliers. The Lagrange multipliers satisfy the complementary slackness conditions

µ ≥ 0, µg(x,u,t) = 0. (9)

ADJOINT VECTOR

The adjoint vector satisfies the differential equation

λ̇ = -L_x[x,u,λ,µ,t] (10)

with the boundary conditions

λ(T) = S_x(x(T),T) + αa_x(x(T),T) + βb_x(x(T),T), α ≥ 0, αa(x(T),T) = 0,

where α ∈ E^{l_a} and β ∈ E^{l_b} are constant vectors.

NECESSARY CONDITIONS

The necessary conditions for u* (with corresponding state x*) to be an optimal solution are that there exist λ, µ, α, and β which satisfy the following:

ẋ* = f(x*,u*,t), x*(0) = x_0, satisfying the terminal constraints a(x*(T),T) ≥ 0 and b(x*(T),T) = 0;

λ̇ = -L_x[x*,u*,λ,µ,t] with the transversality conditions
λ(T) = S_x(x*(T),T) + αa_x(x*(T),T) + βb_x(x*(T),T), α ≥ 0, αa(x*(T),T) = 0;

the Hamiltonian maximizing condition
H[x*(t),u*(t),λ(t),t] ≥ H[x*(t),u,λ(t),t]
at each t ∈ [0,T] for all u satisfying g[x*(t),u,t] ≥ 0;

and the Lagrange multipliers µ(t) are such that
L_u|_{u=u*(t)} := [∂H/∂u + µ ∂g/∂u]|_{u=u*(t)} = 0
and the complementary slackness conditions µ(t) ≥ 0, µ(t)g(x*,u*,t) = 0 hold. (11)

SPECIAL CASE

In the case of the terminal constraint (6), the terminal conditions on the state and the adjoint variables in (11) will be replaced, respectively, by

x*(T) ∈ Y ∩ X (12)

and

[λ(T) - S_x(x*(T),T)][y - x*(T)] ≥ 0 for all y ∈ Y. (13)

SPECIAL CASE

Furthermore, if the terminal time T in the problem (1)-(5) is unspecified, there is an additional necessary transversality condition for T* to be optimal (see Exercise 3.5), namely,

H[x*(T*),u*(T*),λ(T*),T*] + S_T[x*(T*),T*] = 0, (14)

if T* ∈ (0,∞).

REMARKS 3.1 AND 3.2

Remark 3.1: Strictly speaking, we should have H = λ_0 F + λf in (7) with λ_0 ≥ 0. However, we can set λ_0 = 1 in most applications.

Remark 3.2: If the set Y in (6) consists of a single point, Y = {k}, making the problem a fixed-end-point problem, then the transversality condition reduces simply to λ(T) equal to a constant β to be determined, since x*(T) = k. In this case the salvage function S becomes a constant and can therefore be disregarded.

EXAMPLE 3.1

Consider the problem:

max { J = ∫₀¹ u dt }

subject to

ẋ = u, x(0) = 1, (15)
u ≥ 0, x - u ≥ 0. (16)

Note that constraints (16) are of the mixed type (3). They can also be rewritten as 0 ≤ u ≤ x.

SOLUTION OF EXAMPLE 3.1

The Hamiltonian is H = u + λu = (1 + λ)u, so that the optimal control has the form

u* = bang[0, x; 1 + λ]. (17)

To get the adjoint equation and the multipliers associated with constraints (16), we form the Lagrangian:

L = H + µ₁u + µ₂(x - u) = µ₂x + (1 + λ + µ₁ - µ₂)u.

SOLUTION OF EXAMPLE 3.1 CONT.

From this we get the adjoint equation

λ̇ = -L_x = -µ₂, λ(1) = 0. (18)

Also note that the optimal control must satisfy

L_u = 1 + λ + µ₁ - µ₂ = 0, (19)

and µ₁ and µ₂ must satisfy the complementary slackness conditions

µ₁ ≥ 0, µ₁u = 0, (20)
µ₂ ≥ 0, µ₂(x - u) = 0. (21)

SOLUTION OF EXAMPLE 3.1 CONT.

It is obvious for this simple problem that u*(t) = x(t) should be the optimal control for all t ∈ [0,1]. We now show that this control satisfies all the conditions of the Lagrangian form of the maximum principle. Since x(0) = 1, the control u = x gives x = e^t as the solution of (15). Because x = e^t > 0, it follows that u = x > 0; thus µ₁ = 0 from (20).

SOLUTION OF EXAMPLE 3.1 CONT.

From (19) we then have µ₂ = 1 + λ. Substituting this into (18) and solving gives

1 + λ(t) = e^{1-t}. (22)

Since the right-hand side of (22) is always positive, u* = x satisfies (17). Notice that µ₂ = e^{1-t} ≥ 0 and x - u = 0, so (21) holds.
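The verification above can also be sketched numerically. The following check (not part of the original slides) confirms that the candidate u* = x = e^t, 1 + λ(t) = e^{1-t}, µ₁ = 0, µ₂ = 1 + λ satisfies the terminal condition, the adjoint equation (18) via a finite difference, and the multiplier conditions (20)-(21).

```python
import math

# Candidate solution of Example 3.1: u*(t) = x(t) = e^t,
# adjoint 1 + lam(t) = e^(1-t), multipliers mu1 = 0, mu2 = 1 + lam.
def x(t): return math.exp(t)
def lam(t): return math.exp(1.0 - t) - 1.0
def mu2(t): return 1.0 + lam(t)

# Terminal condition lam(1) = 0 from (18).
assert abs(lam(1.0)) < 1e-12

# Adjoint equation lam' = -mu2, checked by a central difference.
h = 1e-6
for t in [0.2, 0.5, 0.8]:
    lam_dot = (lam(t + h) - lam(t - h)) / (2 * h)
    assert abs(lam_dot + mu2(t)) < 1e-6

# Complementary slackness: mu2 >= 0 and mu2*(x - u) = 0 with u = x.
for t in [0.0, 0.5, 1.0]:
    assert mu2(t) >= 0.0 and abs(mu2(t) * (x(t) - x(t))) < 1e-12

print("Example 3.1 candidate passes all checks")
```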

SUFFICIENCY CONDITIONS: CONCAVE AND QUASICONCAVE FUNCTIONS

Let D ⊂ E^n be a convex set. A function ψ : D → E^1 is concave if, for all y, z ∈ D and for all p ∈ [0,1],

ψ(py + (1 - p)z) ≥ pψ(y) + (1 - p)ψ(z). (23)

The function ψ is quasiconcave if (23) is relaxed to

ψ(py + (1 - p)z) ≥ min{ψ(y), ψ(z)}. (24)

ψ is strictly concave if, for y ≠ z and p ∈ (0,1), (23) holds with a strict inequality. ψ is convex, quasiconvex, or strictly convex if -ψ is concave, quasiconcave, or strictly concave, respectively.

SUFFICIENCY CONDITIONS: THEOREM 3.1

Let (x*,u*,λ,µ,α,β) satisfy the necessary conditions in (11). If H(x,u,λ(t),t) is concave in (x,u) at each t ∈ [0,T], S in (2) is concave in x, g in (3) is quasiconcave in (x,u), a in (4) is quasiconcave in x, and b in (5) is linear in x, then (x*,u*) is optimal.

REMARK ON THE CONCAVITY CONDITION IN THEOREM 3.1

The concavity of the Hamiltonian with respect to (x,u) is a crucial condition in Theorem 3.1. Unfortunately, a number of management science and economics models lead to problems that do not satisfy this concavity condition. We replace the concavity requirement on the Hamiltonian in Theorem 3.1 by a concavity requirement on H⁰, where

H⁰(x,λ,t) = max_{u : g(x,u,t) ≥ 0} H(x,u,λ,t). (25)

THEOREM 3.2

Theorem 3.1 remains valid if H⁰(x*(t),λ(t),t) = H(x*(t),u*(t),λ(t),t) for t ∈ [0,T], and if, in addition, we drop the quasiconcavity requirement on g and replace the concavity requirement on H in Theorem 3.1 by the following assumption: for each t ∈ [0,T], define A₁(t) = {x : g(x,u,t) ≥ 0 for some u}; then H⁰(x,λ(t),t) is concave on A₁(t) if A₁(t) is convex. If A₁(t) is not convex, we assume that H⁰ has a concave extension to co(A₁(t)), the convex hull of A₁(t).

3.3 CURRENT-VALUE FORMULATION

Assume a constant continuous discount rate ρ ≥ 0. The time dependence of the relevant functions comes only through the discount factor. Thus, F(x,u,t) = φ(x,u)e^{-ρt} and S(x,T) = σ(x)e^{-ρT}. Now, the objective is to maximize

J = ∫₀ᵀ φ(x,u)e^{-ρt} dt + σ[x(T)]e^{-ρT} (26)

subject to (1) and (3)-(5).

3.3 CURRENT-VALUE FORMULATION CONT.

The standard Hamiltonian is

H^s := e^{-ρt}φ(x,u) + λ^s f(x,u,t) (27)

and the standard Lagrangian is

L^s := H^s + µ^s g(x,u,t). (28)

3.3 CURRENT-VALUE FORMULATION CONT.

The standard adjoint variables λ^s and standard multipliers µ^s, α^s, and β^s satisfy

λ̇^s = -L^s_x, (29)

λ^s(T) = S_x[x(T),T] + α^s a_x(x(T),T) + β^s b_x(x(T),T)
       = e^{-ρT}σ_x[x(T)] + α^s a_x(x(T),T) + β^s b_x(x(T),T), (30)

α^s ≥ 0, α^s a(x(T),T) = 0, (31)
µ^s ≥ 0, µ^s g = 0. (32)

3.3 CURRENT-VALUE FORMULATION CONT.

The current-value Hamiltonian is

H[x,u,λ,t] := φ(x,u) + λf(x,u,t) (33)

and the current-value Lagrangian is

L[x,u,λ,µ,t] := H + µg(x,u,t). (34)

We define

λ := e^{ρt}λ^s and µ := e^{ρt}µ^s, (35)

so that we can rewrite (27) and (28) as

H = e^{ρt}H^s and L = e^{ρt}L^s. (36)

3.3 CURRENT-VALUE FORMULATION CONT.

From (35), we have

λ̇ = ρe^{ρt}λ^s + e^{ρt}λ̇^s. (37)

Then from (29),

λ̇ = ρλ - L_x, λ(T) = σ_x[x(T)] + αa_x(x(T),T) + βb_x(x(T),T), (38)

where the terminal condition in (38) follows from the terminal condition for λ^s(T) in (30), the definition (36), and

α = e^{ρT}α^s and β = e^{ρT}β^s. (39)

3.3 CURRENT-VALUE FORMULATION CONT.

The complementary slackness conditions satisfied by the current-value Lagrange multipliers µ and α are

µ ≥ 0, µg = 0, α ≥ 0, and αa = 0,

on account of (31), (32), (35), and (39). From (14), the necessary transversality condition for T* to be optimal is

H[x*(T*),u*(T*),λ(T*),T*] - ρσ[x*(T*)] = 0. (40)

THE CURRENT-VALUE MAXIMUM PRINCIPLE

ẋ* = f(x*,u*,t), x*(0) = x_0, a(x*(T),T) ≥ 0, b(x*(T),T) = 0;

λ̇ = ρλ - L_x[x*,u*,λ,µ,t], with the terminal conditions
λ(T) = σ_x(x*(T)) + αa_x(x*(T),T) + βb_x(x*(T),T), α ≥ 0, αa(x*(T),T) = 0;

the Hamiltonian maximizing condition
H[x*(t),u*(t),λ(t),t] ≥ H[x*(t),u,λ(t),t]
at each t ∈ [0,T] for all u satisfying g[x*(t),u,t] ≥ 0;

and the Lagrange multipliers µ(t) are such that L_u|_{u=u*(t)} = 0 and the complementary slackness conditions µ(t) ≥ 0 and µ(t)g(x*,u*,t) = 0 hold. (41)

SPECIAL CASE

When the terminal constraint is given by (6) instead of (4) and (5), we need to replace the terminal conditions on the state and the adjoint variables, respectively, by (12) and

[λ(T) - σ_x(x*(T))][y - x*(T)] ≥ 0 for all y ∈ Y. (42)

EXAMPLE 3.2

Use the current-value maximum principle to solve the following consumption problem for ρ = r:

max { J = ∫₀ᵀ e^{-ρt} ln C(t) dt }

subject to the wealth dynamics

Ẇ = rW - C, W(0) = W_0, W(T) = 0,

where W_0 > 0. Note that the condition W(T) = 0 is sufficient to make W(t) ≥ 0 for all t. We can interpret ln C(t) as the utility of consuming at the rate C(t) per unit time at time t.

SOLUTION OF EXAMPLE 3.2

The current-value Hamiltonian is

H = ln C + λ(rW - C), (43)

and the adjoint equation, under the assumption ρ = r, is

λ̇ = ρλ - H_W = ρλ - rλ = 0, λ(T) = β, (44)

where β is some constant to be determined. The solution of (44) is simply λ(t) = β for 0 ≤ t ≤ T.

SOLUTION OF EXAMPLE 3.2 CONT.

To find the optimal control, we maximize H by differentiating (43) with respect to C and setting the result to zero:

H_C = 1/C - λ = 0,

which implies C = 1/λ = 1/β. Using this consumption level in the wealth dynamics gives

Ẇ = rW - 1/β, W(T) = 0,

which can be solved as

W(t) = W_0 e^{rt} - (1/(βr))(e^{rt} - 1).

SOLUTION OF EXAMPLE 3.2 CONT.

Setting W(T) = 0 gives

β = (1 - e^{-rT})/(rW_0).

Therefore, the optimal consumption is

C*(t) = 1/β = rW_0/(1 - e^{-rT}) = ρW_0/(1 - e^{-ρT}), since ρ = r.
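As a quick numerical sanity check of this solution (not part of the original slides; the values r = ρ = 0.05, W_0 = 100, T = 10 are illustrative assumptions), Euler integration of the wealth dynamics under the constant consumption C* should drive terminal wealth to zero:

```python
import math

# Illustrative parameters (assumed, not from the text).
r, W0, T = 0.05, 100.0, 10.0

# Optimal constant consumption C* = r*W0 / (1 - e^(-r*T)).
C_star = r * W0 / (1.0 - math.exp(-r * T))

# Euler-integrate W' = r*W - C* forward from W(0) = W0.
n = 100000
dt = T / n
W = W0
for _ in range(n):
    W += (r * W - C_star) * dt

# The boundary condition W(T) = 0 should hold up to integration error.
assert abs(W) < 0.05
print("W(T) is approximately 0, as required")
```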

3.4 TERMINAL CONDITIONS/TRANSVERSALITY CONDITIONS

Case 1: Free-end point. From the terminal conditions in (11), it is obvious that for the free-end-point problem, i.e., when Y = X,

λ(T) = σ_x[x*(T)]. (45)

If σ(x) ≡ 0, then λ(T) = 0.

Case 2: Fixed-end point. The terminal condition is b(x(T),T) = x(T) - k = 0, and the transversality condition in (11) does not provide any information for λ(T); λ(T) will be some constant β.

3.4 TERMINAL CONDITIONS/TRANSVERSALITY CONDITIONS CONT.

Case 3: One-sided constraints. The ending value of the state variable is in a one-sided interval, namely, a(x(T),T) = x(T) - k ≥ 0, where k ∈ X. In this case it is possible to show that

λ(T) ≥ σ_x[x*(T)] (46)

and

{λ(T) - σ_x[x*(T)]}{x*(T) - k} = 0. (47)

For σ(x) ≡ 0, these terminal conditions can be written as

λ(T) ≥ 0 and λ(T)[x*(T) - k] = 0. (48)

3.4 TERMINAL CONDITIONS/TRANSVERSALITY CONDITIONS CONT.

Case 4: A general case. A general ending condition is x(T) ∈ Y ∩ X.

TABLE 3.1 SUMMARY OF THE TRANSVERSALITY CONDITIONS

1. x(T) ∈ Y = X (free-end point):
   λ(T) = σ_x[x*(T)]; when σ ≡ 0, λ(T) = 0.

2. x(T) = k ∈ X, i.e., Y = {k} (fixed-end point):
   λ(T) = a constant to be determined (in both cases).

3. x(T) ∈ X ∩ [k,∞), i.e., Y = {x : x ≥ k} (one-sided constraint x(T) ≥ k):
   λ(T) ≥ σ_x[x*(T)] and {λ(T) - σ_x[x*(T)]}{x*(T) - k} = 0;
   when σ ≡ 0, λ(T) ≥ 0 and λ(T)[x*(T) - k] = 0.

4. x(T) ∈ X ∩ (-∞,k], i.e., Y = {x : x ≤ k} (one-sided constraint x(T) ≤ k):
   λ(T) ≤ σ_x[x*(T)] and {λ(T) - σ_x[x*(T)]}{k - x*(T)} = 0;
   when σ ≡ 0, λ(T) ≤ 0 and λ(T)[k - x*(T)] = 0.

5. x(T) ∈ Y ∩ X (general constraints):
   {λ(T) - σ_x[x*(T)]}{y - x*(T)} ≥ 0 for all y ∈ Y;
   when σ ≡ 0, λ(T)[y - x*(T)] ≥ 0 for all y ∈ Y.

EXAMPLE 3.3

Consider the problem:

max { J = ∫₀² -x dt }

subject to

ẋ = u, x(0) = 1, x(2) ≥ 0, (49)
-1 ≤ u ≤ 1. (50)

SOLUTION OF EXAMPLE 3.3

The Hamiltonian is H = -x + λu. Clearly the optimal control has the form

u* = bang[-1, 1; λ]. (51)

The adjoint equation is

λ̇ = 1 (52)

with the transversality conditions

λ(2) ≥ 0 and λ(2)x(2) = 0. (53)

SOLUTION OF EXAMPLE 3.3 CONT.

Since λ(t) is monotonically increasing, the control (51) can switch at most once, and it can only switch from u = -1 to u = 1. Let the switching time be t* ≤ 2. Then the optimal control is

u*(t) = -1 for 0 ≤ t ≤ t*, and u*(t) = +1 for t* < t ≤ 2.

Since the control switches at t*, λ(t*) must be 0. Solving (52) we get

λ(t) = t - t*. (54)

SOLUTION OF EXAMPLE 3.3 CONT.

There are two cases: t* < 2 and t* = 2. We analyze the first case first. Here λ(2) = 2 - t* > 0; therefore from (53), x(2) = 0. Solving for x with u* as above, we obtain

x(t) = 1 - t for 0 ≤ t ≤ t*, and x(t) = (t - t*) + x(t*) = t + 1 - 2t* for t* < t ≤ 2.

Therefore, setting x(2) = 0 gives x(2) = 3 - 2t* = 0, which makes t* = 3/2. Since this satisfies t* < 2, we do not have to deal with the case t* = 2.
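The switching solution just derived can be checked directly. The following sketch (not part of the original slides) evaluates the piecewise state and adjoint trajectories at t* = 3/2 and verifies the terminal constraint and the transversality conditions (53):

```python
# Piecewise solution of Example 3.3 with switching time t* = 3/2:
# x(t) = 1 - t on [0, t*], x(t) = t + 1 - 2*t* on (t*, 2]; lam(t) = t - t*.
t_star = 1.5

def x(t):
    return 1.0 - t if t <= t_star else t + 1.0 - 2.0 * t_star

def lam(t):
    return t - t_star

assert abs(x(2.0)) < 1e-12            # terminal constraint x(2) = 0 is active
assert lam(2.0) >= 0.0                # transversality: lam(2) = 1/2 >= 0
assert abs(lam(2.0) * x(2.0)) < 1e-12 # complementary slackness lam(2)*x(2) = 0
assert abs(lam(t_star)) < 1e-12       # lam vanishes at the switching time

print("t* = 3/2 satisfies the maximum principle conditions")
```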

FIGURE 3.1 STATE AND ADJOINT TRAJECTORIES IN EXAMPLE 3.3 [figure not reproduced]

ISOPERIMETRIC OR BUDGET CONSTRAINT

It is of the form

∫₀ᵀ l(x,u,t) dt ≤ K, (55)

where l : E^n × E^m × E^1 → E^1 is assumed nonnegative, bounded, and continuously differentiable, and K is a positive constant representing the amount of the budget. To see how this constraint can be converted into a one-sided constraint, we define an additional state variable x_{n+1} by the state equation

ẋ_{n+1} = -l(x,u,t), x_{n+1}(0) = K, x_{n+1}(T) ≥ 0. (56)
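The conversion in (55)-(56) can be illustrated with a small simulation (not part of the original slides; the choices l(x,u,t) = u², u ≡ 0.5, K = 1, T = 2 are illustrative assumptions). The extra state x_{n+1} tracks the remaining budget, and the terminal condition x_{n+1}(T) ≥ 0 is exactly the budget constraint:

```python
# Illustrative budget-state conversion: l(x, u, t) = u^2, constant u = 0.5.
K, T, u = 1.0, 2.0, 0.5

n = 10000
dt = T / n
budget = K                   # x_{n+1}(0) = K
for _ in range(n):
    budget += -(u * u) * dt  # x_{n+1}' = -l(x, u, t)

# Total spending is u^2 * T = 0.5, so the remaining budget is 0.5,
# and the terminal condition x_{n+1}(T) >= 0 holds for this control.
assert abs(budget - 0.5) < 1e-9
assert budget >= 0.0
print("budget constraint satisfied with slack", round(budget, 6))
```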

EXAMPLES ILLUSTRATING TERMINAL CONDITIONS

Example 3.4. The problem is:

max { J = ∫₀ᵀ e^{-ρt} ln C(t) dt + e^{-ρT} B W(T) } (57)

subject to the wealth equation

Ẇ = rW - C, W(0) = W_0, W(T) ≥ 0. (58)

Assume B to be a given positive constant.

SOLUTION OF EXAMPLE 3.4

The Hamiltonian for the problem is given in (43), and the adjoint equation is given in (44), except that the transversality conditions are now from Row 3 of Table 3.1:

λ(T) ≥ B, [λ(T) - B]W(T) = 0. (59)

In Example 3.2 the value of β, which was the terminal value of the adjoint variable, was

β = (1 - e^{-rT})/(rW_0).

We now have two cases: (i) β ≥ B and (ii) β < B.

SOLUTION OF EXAMPLE 3.4 CONT.

In case (i), the solution of the problem is the same as that of Example 3.2, because by setting λ(T) = β and recalling that W(T) = 0 in that example, it follows that (59) holds. In case (ii), we set λ(T) = B and use (44), which gives λ̇ = 0. Hence, λ(t) = B for all t. The Hamiltonian maximizing condition remains unchanged. Therefore, the optimal consumption is

C* = 1/λ = 1/B.

SOLUTION OF EXAMPLE 3.4 CONT.

Solving (58) with this C* gives

W(t) = W_0 e^{rt} - (1/(Br))(e^{rt} - 1).

It is easy to show that

W(T) = W_0 e^{rT} - (1/(Br))(e^{rT} - 1)

is nonnegative since β < B. Note that (59) holds for case (ii).
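A numerical sketch of case (ii) (not part of the original slides; r = 0.05, W_0 = 100, T = 10, B = 0.1 are illustrative assumptions chosen so that β < B) confirms that terminal wealth is nonnegative and that the transversality conditions (59) hold with λ(T) = B:

```python
import math

# Illustrative parameters (assumed, not from the text).
r, W0, T = 0.05, 100.0, 10.0
beta = (1.0 - math.exp(-r * T)) / (r * W0)   # terminal adjoint of Example 3.2

B = 0.1                                      # chosen so that beta < B: case (ii)
assert beta < B

# Terminal wealth under C* = 1/B:
# W(T) = W0*e^(r*T) - (e^(r*T) - 1)/(B*r), nonnegative since beta < B.
WT = W0 * math.exp(r * T) - (math.exp(r * T) - 1.0) / (B * r)
assert WT >= 0.0

# (59): lam(T) = B, so [lam(T) - B]*W(T) = 0 holds trivially.
assert abs((B - B) * WT) == 0.0
print("case (ii): terminal wealth W(T) =", round(WT, 2), ">= 0")
```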

EXAMPLE 3.5: A TIME-OPTIMAL CONTROL PROBLEM

Consider a subway train of mass m (assume m = 1), which moves along a smooth horizontal track with negligible friction. The position x of the train along the track at time t is determined by Newton's Second Law of Motion, i.e.,

mẍ = u, so that ẍ = u. (60)

Note: (60) is a second-order differential equation.

EXAMPLE 3.5: A TIME-OPTIMAL CONTROL PROBLEM CONT.

Let the initial conditions on x(0) and ẋ(0) be

x(0) = x_0 and ẋ(0) = y_0.

Transform (60) into two first-order differential equations:

ẋ = y, x(0) = x_0,
ẏ = u, y(0) = y_0. (61)

Let the control constraint be

u ∈ Ω = [-1, 1]. (62)

EXAMPLE 3.5: A TIME-OPTIMAL CONTROL PROBLEM CONT.

The problem is:

max { J = ∫₀ᵀ -1 dt }

subject to

ẋ = y, x(0) = x_0, x(T) = 0,
ẏ = u, y(0) = y_0, y(T) = 0,

and the control constraint u ∈ Ω = [-1, 1]. (63)

SOLUTION OF EXAMPLE 3.5

The standard Hamiltonian function in this case is

H = -1 + λ₁y + λ₂u,

where the adjoint variables λ₁ and λ₂ satisfy

λ̇₁ = 0, λ₁(T) = β₁ and λ̇₂ = -λ₁, λ₂(T) = β₂,

so that

λ₁ = β₁ and λ₂ = β₂ + β₁(T - t).

The Hamiltonian maximizing condition yields the form of the optimal control to be

u*(t) = bang{-1, 1; β₂ + β₁(T - t)}. (64)

SOLUTION OF EXAMPLE 3.5 CONT.

The transversality condition (14) with y(T) = 0 and S ≡ 0 yields

H + S_T = λ₂(T)u*(T) - 1 = β₂u*(T) - 1 = 0,

which together with the bang-bang control policy (64) implies either

λ₂(T) = β₂ = -1 and u*(T) = -1, or λ₂(T) = β₂ = +1 and u*(T) = +1.

TABLE 3.2 STATE TRAJECTORIES AND SWITCHING CURVE

(a) u*(τ) = -1 for t ≤ τ ≤ T:  y = T - τ,  x = -(T - τ)²/2,  Γ⁻: x = -y²/2 for y ≥ 0.
(b) u*(τ) = +1 for t ≤ τ ≤ T:  y = τ - T,  x = (τ - T)²/2,  Γ⁺: x = y²/2 for y ≤ 0.

SOLUTION OF EXAMPLE 3.5 CONT.

We can put Γ⁻ and Γ⁺ into a single switching curve Γ as

y = Γ(x) = -√(2x) for x ≥ 0, and y = Γ(x) = +√(-2x) for x < 0. (65)

If the initial state (x_0,y_0) lies on the switching curve, then we use u = +1 (resp., u = -1) if (x_0,y_0) lies on Γ⁺ (resp., Γ⁻). In common parlance, we apply the brakes. If the initial state (x_0,y_0) is not on the switching curve, then we choose, between u = -1 and u = +1, the one that moves the system toward the switching curve. By inspection, it is obvious that above the switching curve we must choose u = -1 and below we must choose u = +1.

FIGURE 3.2 MINIMUM TIME OPTIMAL RESPONSE FOR PROBLEM (63) [figure not reproduced]

SOLUTION OF EXAMPLE 3.5 CONT.

The other curves in Figure 3.2 are solutions of the differential equations starting from initial points (x_0,y_0). If (x_0,y_0) lies above the switching curve Γ as shown in Figure 3.2, we use u = -1 to compute the curve as follows:

ẋ = y, x(0) = x_0,
ẏ = -1, y(0) = y_0.

Integrating these equations gives

y = -t + y_0, x = -t²/2 + y_0 t + x_0.

Elimination of t between these two gives

x = (y_0² - y²)/2 + x_0. (66)

SOLUTION OF EXAMPLE 3.5 CONT.

(66) is the equation of the parabola in Figure 3.2 through (x_0,y_0). The point of intersection of the curve (66) with the switching curve Γ⁺ is obtained by solving (66) and the equation for Γ⁺, namely 2x = y², simultaneously. This gives

x = (y_0² + 2x_0)/4, y = -√((y_0² + 2x_0)/2), (67)

where the minus sign in the expression for y in (67) was chosen since the intersection occurs at a negative value of y. The time t* to reach the switching curve, called the switching time, given that we start above it, is

t* = y_0 - y = y_0 + √((y_0² + 2x_0)/2). (68)

SOLUTION OF EXAMPLE 3.5 CONT.

To find the minimum total time to go from the starting point (x_0,y_0) to the origin (0,0), we substitute t* into the equation for Γ⁺ in Column (b) of Table 3.2. This gives

T = t* - y = y_0 + √(2(y_0² + 2x_0)). (69)

As a numerical example, start at the point (x_0,y_0) = (1,1). Then, the equation of the parabola (66) is 2x = 3 - y². The switching point (67) is (3/4, -√(3/2)). Finally, the switching time is t* = 1 + √(3/2) from (68). Substituting into (69), we find that the minimum time to stop is T = 1 + √6 ≈ 3.45.
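The numbers for the starting point (1,1) can be checked by evaluating (66)-(69) directly. This sketch (not part of the original slides) also confirms that the switching point lies on both the parabola (66) and the switching curve Γ⁺:

```python
import math

# Starting point from the numerical example.
x0, y0 = 1.0, 1.0

s = (y0 * y0 + 2.0 * x0) / 2.0       # = 3/2
x_sw = (y0 * y0 + 2.0 * x0) / 4.0    # switching point x, eq. (67)
y_sw = -math.sqrt(s)                 # switching point y, negative root, eq. (67)
t_sw = y0 + math.sqrt(s)             # switching time t*, eq. (68)
T = y0 + math.sqrt(2.0 * (y0 * y0 + 2.0 * x0))   # minimum time, eq. (69)

assert abs(x_sw - 0.75) < 1e-12                  # switching point (3/4, -sqrt(3/2))
assert abs(T - (1.0 + math.sqrt(6.0))) < 1e-12   # T = 1 + sqrt(6)

# Consistency: the switching point lies on Gamma+ (x = y^2/2)
# and on the parabola (66) through (x0, y0).
assert abs(x_sw - y_sw * y_sw / 2.0) < 1e-12
assert abs(x_sw - ((y0 * y0 - y_sw * y_sw) / 2.0 + x0)) < 1e-12

print("minimum time T =", round(T, 3))
```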

SOLUTION OF EXAMPLE 3.5 CONT.

To complete the solution of this numerical example, let us evaluate β₁ and β₂, which are needed to obtain λ₁ and λ₂. Since (1,1) is above the switching curve, u*(T) = 1, and therefore β₂ = 1. To compute β₁, we observe that λ₂(t*) = β₂ + β₁(T - t*) = 0, so that

β₁ = -β₂/(T - t*) = -1/√(3/2) = -√(2/3).

In the exercises, you are asked to work other examples with different starting points above, below, and on the switching curve. Note that t* = 0 by definition if the starting point is on the switching curve.

3.5 INFINITE HORIZON AND STATIONARITY

Transversality conditions. Free-end point:

lim_{T→∞} λ^s(T) = 0, i.e., lim_{T→∞} e^{-ρT}λ(T) = 0. (70)

One-sided constraints: lim_{T→∞} x(T) ≥ 0. Then the transversality conditions are

lim_{T→∞} e^{-ρT}λ(T) ≥ 0 and lim_{T→∞} e^{-ρT}λ(T)x*(T) = 0. (71)

3.5 INFINITE HORIZON AND STATIONARITY CONT.

Stationarity assumption:

f(x,u,t) = f(x,u), g(x,u,t) = g(x,u). (72)

A long-run stationary equilibrium is defined by the quadruple {x̄, ū, λ̄, µ̄} satisfying

f(x̄,ū) = 0, ρλ̄ = L_x[x̄,ū,λ̄,µ̄], µ̄ ≥ 0, µ̄g(x̄,ū) = 0,

and

H(x̄,ū,λ̄) ≥ H(x̄,u,λ̄) for all u satisfying g(x̄,u) ≥ 0. (73)

3.5 INFINITE HORIZON AND STATIONARITY CONT.

Clearly, if the initial condition is x_0 = x̄, the optimal control is u*(t) = ū for all t. If x_0 ≠ x̄, the optimal solution will have a transient phase. If the constraint involving g is not imposed, µ̄ may be dropped from the quadruple. In this case, the equilibrium is defined by the triple {x̄, ū, λ̄} satisfying

f(x̄,ū) = 0, ρλ̄ = H_x(x̄,ū,λ̄), and H_u(x̄,ū,λ̄) = 0. (74)

EXAMPLE 3.6

Consider the problem:

max { J = ∫₀^∞ e^{-ρt} ln C(t) dt }

subject to

lim_{T→∞} W(T) ≥ 0, (75)
Ẇ = rW - C, W(0) = W_0 > 0. (76)

SOLUTION OF EXAMPLE 3.6

By (73) we set

rW̄ - C̄ = 0, λ̄ = β,

where β is a constant to be determined. This gives the optimal control C̄ = rW̄, and by setting λ̄ = 1/C̄ = 1/(rW̄), we see that all the conditions of (73), including the Hamiltonian maximizing condition, hold.

SOLUTION OF EXAMPLE 3.6 CONT.

Furthermore, λ̄ and W̄ = W_0 satisfy the transversality conditions (71). Therefore, by the sufficiency theorem, the control obtained is optimal. Note that the interpretation of the solution is that the trust spends only the interest from its endowment W_0. Note further that the triple (W̄, C̄, λ̄) = (W_0, rW_0, 1/(rW_0)) is an optimal long-run stationary equilibrium for the problem.
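The stationary equilibrium conditions can be checked mechanically. This sketch (not part of the original slides; r = ρ = 0.04 and W_0 = 50 are illustrative assumptions) verifies the triple (W̄, C̄, λ̄) = (W_0, rW_0, 1/(rW_0)) against (73)/(74):

```python
# Illustrative parameters (assumed, not from the text).
r = rho = 0.04
W0 = 50.0

# Stationary triple from Example 3.6.
W_bar, C_bar, lam_bar = W0, r * W0, 1.0 / (r * W0)

assert abs(r * W_bar - C_bar) < 1e-12            # f(W, C) = rW - C = 0
assert abs(rho * lam_bar - r * lam_bar) < 1e-12  # rho*lam = H_W = r*lam (since rho = r)
assert abs(1.0 / C_bar - lam_bar) < 1e-12        # H_C = 1/C - lam = 0

print("stationary equilibrium conditions verified")
```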

TABLE 3.3 OBJECTIVE, STATE, AND ADJOINT EQUATIONS FOR VARIOUS MODEL TYPES

    Objective integrand φ | State equation ẋ = f | Current-value adjoint equation | Form of optimal control policy
(a) Cx + Du       | Ax + Bu + d         | λ̇ = λ(ρ - A) - C                | Bang-Bang
(b) C(x) + Du     | Ax + Bu + d         | λ̇ = λ(ρ - A) - C_x              | Bang-Bang + Singular
(c) xᵀCx + uᵀDu   | Ax + Bu + d         | λ̇ = λ(ρ - A) - 2xᵀC             | Linear Decision Rule
(d) C(x) + Du     | A(x) + Bu + d       | λ̇ = λ(ρ - A_x) - C_x            | Bang-Bang + Singular
(e) c(x) + q(u)   | (ax + d)b(u) + e(x) | λ̇ = λ(ρ - ab(u) - e_x) - c_x    | Interior or Boundary
(f) c(x)q(u)      | (ax + d)b(u) + e(x) | λ̇ = λ(ρ - ab(u) - e_x) - c_x q(u) | Interior or Boundary

3.6 MODEL TYPES

In Model Type (a) of Table 3.3, both φ and f are linear functions of their arguments; hence it is called the linear-linear case. The Hamiltonian is

H = Cx + Du + λ(Ax + Bu + d) = Cx + λAx + λd + (D + λB)u. (77)

Model Type (b) of Table 3.3 is the same as Model Type (a) except that the function C(x) is nonlinear.

3.6 MODEL TYPES CONT.

Model Type (c) has linear functions in the state equation and quadratic functions in the objective function. Model Type (d) is a more general version of Model Type (b) in which the state equation is nonlinear in x. In Model Types (e) and (f), the functions are scalar functions, and there is only one state equation, so that λ is also a scalar function.

REMARKS 3.3 AND 3.4

Remark 3.3: In order to use the absolute value |u| of a control variable u in forming the functions φ or f, we define u⁺ and u⁻ satisfying the following relations:

u := u⁺ - u⁻, u⁺ ≥ 0, u⁻ ≥ 0, (78)
u⁺u⁻ = 0. (79)

We then write

|u| = u⁺ + u⁻. (80)

We need not impose (79) explicitly.

Remark 3.4: Tables 3.1 and 3.3 are constructed for continuous-time models.
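The decomposition in Remark 3.3 is easy to illustrate (a sketch, not part of the original slides): for any control value u, taking u⁺ = max(u,0) and u⁻ = max(-u,0) satisfies (78)-(80).

```python
# Decomposition of Remark 3.3: u = u_plus - u_minus with
# u_plus, u_minus >= 0, u_plus * u_minus = 0, and |u| = u_plus + u_minus.
def split(u):
    return (max(u, 0.0), max(-u, 0.0))

for u in [-2.5, 0.0, 3.0]:
    up, um = split(u)
    assert up >= 0.0 and um >= 0.0
    assert abs((up - um) - u) < 1e-12       # (78)
    assert up * um == 0.0                   # (79)
    assert abs((up + um) - abs(u)) < 1e-12  # (80)

print("decomposition checks pass")
```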

REMARK 3.5

Remark 3.5: Consider Model Types (a) and (b) when the control variable constraints are defined by linear inequalities of the form

g(u,t) = g(t)u ≥ 0. (81)

Then the problem of maximizing the Hamiltonian function becomes:

max (D + λB)u subject to g(t)u ≥ 0. (82)

REMARKS 3.6 AND 3.7

Remark 3.6: The salvage value part of the objective function, S[x(T),T], makes sense in two cases: (a) when T is free, and part of the problem is to determine the optimal terminal time; and (b) when T is fixed and we want to maximize the salvage value of the ending state x(T), in which case it can be written simply as S[x(T)].

Remark 3.7: One important model type that we did not include in Table 3.3 is the impulse control model of Bensoussan and Lions. In this model, an infinite control is instantaneously exerted on a state variable in order to cause a finite jump in its value.


More information

Math Camp Notes: Everything Else

Math Camp Notes: Everything Else Math Camp Notes: Everything Else Systems of Dierential Equations Consider the general two-equation system of dierential equations: Steady States ẋ = f(x, y ẏ = g(x, y Just as before, we can nd the steady

More information

Basic Techniques. Ping Wang Department of Economics Washington University in St. Louis. January 2018

Basic Techniques. Ping Wang Department of Economics Washington University in St. Louis. January 2018 Basic Techniques Ping Wang Department of Economics Washington University in St. Louis January 2018 1 A. Overview A formal theory of growth/development requires the following tools: simple algebra simple

More information

Constrained Optimization

Constrained Optimization 1 / 22 Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University March 30, 2015 2 / 22 1. Equality constraints only 1.1 Reduced gradient 1.2 Lagrange

More information

September Math Course: First Order Derivative

September Math Course: First Order Derivative September Math Course: First Order Derivative Arina Nikandrova Functions Function y = f (x), where x is either be a scalar or a vector of several variables (x,..., x n ), can be thought of as a rule which

More information

LMI Methods in Optimal and Robust Control

LMI Methods in Optimal and Robust Control LMI Methods in Optimal and Robust Control Matthew M. Peet Arizona State University Lecture 15: Nonlinear Systems and Lyapunov Functions Overview Our next goal is to extend LMI s and optimization to nonlinear

More information

Lecture: Duality.

Lecture: Duality. Lecture: Duality http://bicmr.pku.edu.cn/~wenzw/opt-2016-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghe s lecture notes Introduction 2/35 Lagrange dual problem weak and strong

More information

ECON 582: An Introduction to the Theory of Optimal Control (Chapter 7, Acemoglu) Instructor: Dmytro Hryshko

ECON 582: An Introduction to the Theory of Optimal Control (Chapter 7, Acemoglu) Instructor: Dmytro Hryshko ECON 582: An Introduction to the Theory of Optimal Control (Chapter 7, Acemoglu) Instructor: Dmytro Hryshko Continuous-time optimization involves maximization wrt to an innite dimensional object, an entire

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

The Growth Model in Continuous Time (Ramsey Model)

The Growth Model in Continuous Time (Ramsey Model) The Growth Model in Continuous Time (Ramsey Model) Prof. Lutz Hendricks Econ720 September 27, 2017 1 / 32 The Growth Model in Continuous Time We add optimizing households to the Solow model. We first study

More information

Lecture 18: Optimization Programming

Lecture 18: Optimization Programming Fall, 2016 Outline Unconstrained Optimization 1 Unconstrained Optimization 2 Equality-constrained Optimization Inequality-constrained Optimization Mixture-constrained Optimization 3 Quadratic Programming

More information

Problem 1 Cost of an Infinite Horizon LQR

Problem 1 Cost of an Infinite Horizon LQR THE UNIVERSITY OF TEXAS AT SAN ANTONIO EE 5243 INTRODUCTION TO CYBER-PHYSICAL SYSTEMS H O M E W O R K # 5 Ahmad F. Taha October 12, 215 Homework Instructions: 1. Type your solutions in the LATEX homework

More information

BEEM103 UNIVERSITY OF EXETER. BUSINESS School. January 2009 Mock Exam, Part A. OPTIMIZATION TECHNIQUES FOR ECONOMISTS solutions

BEEM103 UNIVERSITY OF EXETER. BUSINESS School. January 2009 Mock Exam, Part A. OPTIMIZATION TECHNIQUES FOR ECONOMISTS solutions BEEM03 UNIVERSITY OF EXETER BUSINESS School January 009 Mock Exam, Part A OPTIMIZATION TECHNIQUES FOR ECONOMISTS solutions Duration : TWO HOURS The paper has 3 parts. Your marks on the rst part will be

More information

EC /11. Math for Microeconomics September Course, Part II Lecture Notes. Course Outline

EC /11. Math for Microeconomics September Course, Part II Lecture Notes. Course Outline LONDON SCHOOL OF ECONOMICS Professor Leonardo Felli Department of Economics S.478; x7525 EC400 20010/11 Math for Microeconomics September Course, Part II Lecture Notes Course Outline Lecture 1: Tools for

More information

Numerical Optimization

Numerical Optimization Constrained Optimization Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on Constrained Optimization Constrained Optimization Problem: min h j (x) 0,

More information

MATHEMATICAL ECONOMICS: OPTIMIZATION. Contents

MATHEMATICAL ECONOMICS: OPTIMIZATION. Contents MATHEMATICAL ECONOMICS: OPTIMIZATION JOÃO LOPES DIAS Contents 1. Introduction 2 1.1. Preliminaries 2 1.2. Optimal points and values 2 1.3. The optimization problems 3 1.4. Existence of optimal points 4

More information

Optimization Theory. Lectures 4-6

Optimization Theory. Lectures 4-6 Optimization Theory Lectures 4-6 Unconstrained Maximization Problem: Maximize a function f:ú n 6 ú within a set A f ú n. Typically, A is ú n, or the non-negative orthant {x0ú n x$0} Existence of a maximum:

More information

Solution by the Maximum Principle

Solution by the Maximum Principle 292 11. Economic Applications per capita variables so that it is formally similar to the previous model. The introduction of the per capita variables makes it possible to treat the infinite horizon version

More information

Optimization. Escuela de Ingeniería Informática de Oviedo. (Dpto. de Matemáticas-UniOvi) Numerical Computation Optimization 1 / 30

Optimization. Escuela de Ingeniería Informática de Oviedo. (Dpto. de Matemáticas-UniOvi) Numerical Computation Optimization 1 / 30 Optimization Escuela de Ingeniería Informática de Oviedo (Dpto. de Matemáticas-UniOvi) Numerical Computation Optimization 1 / 30 Unconstrained optimization Outline 1 Unconstrained optimization 2 Constrained

More information

Optimality, Duality, Complementarity for Constrained Optimization

Optimality, Duality, Complementarity for Constrained Optimization Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear

More information

Tutorial on Control and State Constrained Optimal Control Problems

Tutorial on Control and State Constrained Optimal Control Problems Tutorial on Control and State Constrained Optimal Control Problems To cite this version:. blems. SADCO Summer School 211 - Optimal Control, Sep 211, London, United Kingdom. HAL Id: inria-629518

More information

Convex Functions and Optimization

Convex Functions and Optimization Chapter 5 Convex Functions and Optimization 5.1 Convex Functions Our next topic is that of convex functions. Again, we will concentrate on the context of a map f : R n R although the situation can be generalized

More information

Nonlinear Programming and the Kuhn-Tucker Conditions

Nonlinear Programming and the Kuhn-Tucker Conditions Nonlinear Programming and the Kuhn-Tucker Conditions The Kuhn-Tucker (KT) conditions are first-order conditions for constrained optimization problems, a generalization of the first-order conditions we

More information

Differential Games, Distributed Systems, and Impulse Control

Differential Games, Distributed Systems, and Impulse Control Chapter 12 Differential Games, Distributed Systems, and Impulse Control In previous chapters, we were mainly concerned with the optimal control problems formulated in Chapters 3 and 4 and their applications

More information

Nonlinear Systems and Control Lecture # 12 Converse Lyapunov Functions & Time Varying Systems. p. 1/1

Nonlinear Systems and Control Lecture # 12 Converse Lyapunov Functions & Time Varying Systems. p. 1/1 Nonlinear Systems and Control Lecture # 12 Converse Lyapunov Functions & Time Varying Systems p. 1/1 p. 2/1 Converse Lyapunov Theorem Exponential Stability Let x = 0 be an exponentially stable equilibrium

More information

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The

More information

CHAPTER 12 DIFFERENTIAL GAMES, DISTRIBUTED SYSTEMS,

CHAPTER 12 DIFFERENTIAL GAMES, DISTRIBUTED SYSTEMS, CHAPTER 12 DIFFERENTIAL GAMES, DISTRIBUTED SYSTEMS, AND IMPULSE CONTROL Chapter 12 p. 1/91 DIFFERENTIAL GAMES, DISTRIBUTED SYSTEMS, AND IMPULSE CONTROL Theory of differential games: There may be more than

More information

Solving Dual Problems

Solving Dual Problems Lecture 20 Solving Dual Problems We consider a constrained problem where, in addition to the constraint set X, there are also inequality and linear equality constraints. Specifically the minimization problem

More information

EC /11. Math for Microeconomics September Course, Part II Problem Set 1 with Solutions. a11 a 12. x 2

EC /11. Math for Microeconomics September Course, Part II Problem Set 1 with Solutions. a11 a 12. x 2 LONDON SCHOOL OF ECONOMICS Professor Leonardo Felli Department of Economics S.478; x7525 EC400 2010/11 Math for Microeconomics September Course, Part II Problem Set 1 with Solutions 1. Show that the general

More information

7 OPTIMAL CONTROL 7.1 EXERCISE 1. Solve the following optimal control problem. max. (x u 2 )dt x = u x(0) = Solution with the first variation

7 OPTIMAL CONTROL 7.1 EXERCISE 1. Solve the following optimal control problem. max. (x u 2 )dt x = u x(0) = Solution with the first variation 7 OPTIMAL CONTROL 7. EXERCISE Solve the following optimal control problem max 7.. Solution with the first variation The Lagrangian is L(x, u, λ, μ) = (x u )dt x = u x() =. [(x u ) λ(x u)]dt μ(x() ). Now

More information

Deterministic Optimal Control

Deterministic Optimal Control page A1 Online Appendix A Deterministic Optimal Control As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality. Albert Einstein

More information

OPTIMAL CONTROL CHAPTER INTRODUCTION

OPTIMAL CONTROL CHAPTER INTRODUCTION CHAPTER 3 OPTIMAL CONTROL What is now proved was once only imagined. William Blake. 3.1 INTRODUCTION After more than three hundred years of evolution, optimal control theory has been formulated as an extension

More information

Support Vector Machines

Support Vector Machines Support Vector Machines Support vector machines (SVMs) are one of the central concepts in all of machine learning. They are simply a combination of two ideas: linear classification via maximum (or optimal

More information

Optimal Control. Lecture 18. Hamilton-Jacobi-Bellman Equation, Cont. John T. Wen. March 29, Ref: Bryson & Ho Chapter 4.

Optimal Control. Lecture 18. Hamilton-Jacobi-Bellman Equation, Cont. John T. Wen. March 29, Ref: Bryson & Ho Chapter 4. Optimal Control Lecture 18 Hamilton-Jacobi-Bellman Equation, Cont. John T. Wen Ref: Bryson & Ho Chapter 4. March 29, 2004 Outline Hamilton-Jacobi-Bellman (HJB) Equation Iterative solution of HJB Equation

More information

Written Examination

Written Examination Division of Scientific Computing Department of Information Technology Uppsala University Optimization Written Examination 202-2-20 Time: 4:00-9:00 Allowed Tools: Pocket Calculator, one A4 paper with notes

More information

Dynamic Problem Set 1 Solutions

Dynamic Problem Set 1 Solutions Dynamic Problem Set 1 Solutions Jonathan Kreamer July 15, 2011 Question 1 Consider the following multi-period optimal storage problem: An economic agent imizes: c t} T β t u(c t ) (1) subject to the period-by-period

More information

The general programming problem is the nonlinear programming problem where a given function is maximized subject to a set of inequality constraints.

The general programming problem is the nonlinear programming problem where a given function is maximized subject to a set of inequality constraints. 1 Optimization Mathematical programming refers to the basic mathematical problem of finding a maximum to a function, f, subject to some constraints. 1 In other words, the objective is to find a point,

More information

3.10 Lagrangian relaxation

3.10 Lagrangian relaxation 3.10 Lagrangian relaxation Consider a generic ILP problem min {c t x : Ax b, Dx d, x Z n } with integer coefficients. Suppose Dx d are the complicating constraints. Often the linear relaxation and the

More information

Principles of Optimal Control Spring 2008

Principles of Optimal Control Spring 2008 MIT OpenCourseWare http://ocw.mit.edu 16.323 Principles of Optimal Control Spring 2008 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 16.323 Lecture

More information

E5295/5B5749 Convex optimization with engineering applications. Lecture 8. Smooth convex unconstrained and equality-constrained minimization

E5295/5B5749 Convex optimization with engineering applications. Lecture 8. Smooth convex unconstrained and equality-constrained minimization E5295/5B5749 Convex optimization with engineering applications Lecture 8 Smooth convex unconstrained and equality-constrained minimization A. Forsgren, KTH 1 Lecture 8 Convex optimization 2006/2007 Unconstrained

More information

Duality and dynamics in Hamilton-Jacobi theory for fully convex problems of control

Duality and dynamics in Hamilton-Jacobi theory for fully convex problems of control Duality and dynamics in Hamilton-Jacobi theory for fully convex problems of control RTyrrell Rockafellar and Peter R Wolenski Abstract This paper describes some recent results in Hamilton- Jacobi theory

More information

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008.

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008. 1 ECONOMICS 594: LECTURE NOTES CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS W. Erwin Diewert January 31, 2008. 1. Introduction Many economic problems have the following structure: (i) a linear function

More information

Algorithms for Constrained Optimization

Algorithms for Constrained Optimization 1 / 42 Algorithms for Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University April 19, 2015 2 / 42 Outline 1. Convergence 2. Sequential quadratic

More information

Minimum Fuel Optimal Control Example For A Scalar System

Minimum Fuel Optimal Control Example For A Scalar System Minimum Fuel Optimal Control Example For A Scalar System A. Problem Statement This example illustrates the minimum fuel optimal control problem for a particular first-order (scalar) system. The derivation

More information

Final Exam - Math Camp August 27, 2014

Final Exam - Math Camp August 27, 2014 Final Exam - Math Camp August 27, 2014 You will have three hours to complete this exam. Please write your solution to question one in blue book 1 and your solutions to the subsequent questions in blue

More information

g(x,y) = c. For instance (see Figure 1 on the right), consider the optimization problem maximize subject to

g(x,y) = c. For instance (see Figure 1 on the right), consider the optimization problem maximize subject to 1 of 11 11/29/2010 10:39 AM From Wikipedia, the free encyclopedia In mathematical optimization, the method of Lagrange multipliers (named after Joseph Louis Lagrange) provides a strategy for finding the

More information

Additional Exercises for Introduction to Nonlinear Optimization Amir Beck March 16, 2017

Additional Exercises for Introduction to Nonlinear Optimization Amir Beck March 16, 2017 Additional Exercises for Introduction to Nonlinear Optimization Amir Beck March 16, 2017 Chapter 1 - Mathematical Preliminaries 1.1 Let S R n. (a) Suppose that T is an open set satisfying T S. Prove that

More information

Formula Sheet for Optimal Control

Formula Sheet for Optimal Control Formula Sheet for Optimal Control Division of Optimization and Systems Theory Royal Institute of Technology 144 Stockholm, Sweden 23 December 1, 29 1 Dynamic Programming 11 Discrete Dynamic Programming

More information

Theory and Applications of Constrained Optimal Control Proble

Theory and Applications of Constrained Optimal Control Proble Theory and Applications of Constrained Optimal Control Problems with Delays PART 1 : Mixed Control State Constraints Helmut Maurer 1, Laurenz Göllmann 2 1 Institut für Numerische und Angewandte Mathematik,

More information

Pontryagin s Minimum Principle 1

Pontryagin s Minimum Principle 1 ECE 680 Fall 2013 Pontryagin s Minimum Principle 1 In this handout, we provide a derivation of the minimum principle of Pontryagin, which is a generalization of the Euler-Lagrange equations that also includes

More information

Linear and non-linear programming

Linear and non-linear programming Linear and non-linear programming Benjamin Recht March 11, 2005 The Gameplan Constrained Optimization Convexity Duality Applications/Taxonomy 1 Constrained Optimization minimize f(x) subject to g j (x)

More information

Robotics. Control Theory. Marc Toussaint U Stuttgart

Robotics. Control Theory. Marc Toussaint U Stuttgart Robotics Control Theory Topics in control theory, optimal control, HJB equation, infinite horizon case, Linear-Quadratic optimal control, Riccati equations (differential, algebraic, discrete-time), controllability,

More information

Algorithms for constrained local optimization

Algorithms for constrained local optimization Algorithms for constrained local optimization Fabio Schoen 2008 http://gol.dsi.unifi.it/users/schoen Algorithms for constrained local optimization p. Feasible direction methods Algorithms for constrained

More information

Necessary optimality conditions for optimal control problems with nonsmooth mixed state and control constraints

Necessary optimality conditions for optimal control problems with nonsmooth mixed state and control constraints Necessary optimality conditions for optimal control problems with nonsmooth mixed state and control constraints An Li and Jane J. Ye Abstract. In this paper we study an optimal control problem with nonsmooth

More information

Chapter 2: Linear Programming Basics. (Bertsimas & Tsitsiklis, Chapter 1)

Chapter 2: Linear Programming Basics. (Bertsimas & Tsitsiklis, Chapter 1) Chapter 2: Linear Programming Basics (Bertsimas & Tsitsiklis, Chapter 1) 33 Example of a Linear Program Remarks. minimize 2x 1 x 2 + 4x 3 subject to x 1 + x 2 + x 4 2 3x 2 x 3 = 5 x 3 + x 4 3 x 1 0 x 3

More information

ECON 5111 Mathematical Economics

ECON 5111 Mathematical Economics Test 1 October 1, 2010 1. Construct a truth table for the following statement: [p (p q)] q. 2. A prime number is a natural number that is divisible by 1 and itself only. Let P be the set of all prime numbers

More information

problem. max Both k (0) and h (0) are given at time 0. (a) Write down the Hamilton-Jacobi-Bellman (HJB) Equation in the dynamic programming

problem. max Both k (0) and h (0) are given at time 0. (a) Write down the Hamilton-Jacobi-Bellman (HJB) Equation in the dynamic programming 1. Endogenous Growth with Human Capital Consider the following endogenous growth model with both physical capital (k (t)) and human capital (h (t)) in continuous time. The representative household solves

More information

SECTION C: CONTINUOUS OPTIMISATION LECTURE 9: FIRST ORDER OPTIMALITY CONDITIONS FOR CONSTRAINED NONLINEAR PROGRAMMING

SECTION C: CONTINUOUS OPTIMISATION LECTURE 9: FIRST ORDER OPTIMALITY CONDITIONS FOR CONSTRAINED NONLINEAR PROGRAMMING Nf SECTION C: CONTINUOUS OPTIMISATION LECTURE 9: FIRST ORDER OPTIMALITY CONDITIONS FOR CONSTRAINED NONLINEAR PROGRAMMING f(x R m g HONOUR SCHOOL OF MATHEMATICS, OXFORD UNIVERSITY HILARY TERM 5, DR RAPHAEL

More information

Optimization Over Time

Optimization Over Time Optimization Over Time Joshua Wilde, revised by Isabel Tecu and Takeshi Suzuki August 26, 21 Up to this point, we have only considered constrained optimization problems at a single point in time. However,

More information

UC Berkeley Department of Electrical Engineering and Computer Science. EECS 227A Nonlinear and Convex Optimization. Solutions 5 Fall 2009

UC Berkeley Department of Electrical Engineering and Computer Science. EECS 227A Nonlinear and Convex Optimization. Solutions 5 Fall 2009 UC Berkeley Department of Electrical Engineering and Computer Science EECS 227A Nonlinear and Convex Optimization Solutions 5 Fall 2009 Reading: Boyd and Vandenberghe, Chapter 5 Solution 5.1 Note that

More information

Reflected Brownian Motion

Reflected Brownian Motion Chapter 6 Reflected Brownian Motion Often we encounter Diffusions in regions with boundary. If the process can reach the boundary from the interior in finite time with positive probability we need to decide

More information

Notes on Control Theory

Notes on Control Theory Notes on Control Theory max t 1 f t, x t, u t dt # ẋ g t, x t, u t # t 0, t 1, x t 0 x 0 fixed, t 1 can be. x t 1 maybefreeorfixed The choice variable is a function u t which is piecewise continuous, that

More information

Introduction to Nonlinear Control Lecture # 3 Time-Varying and Perturbed Systems

Introduction to Nonlinear Control Lecture # 3 Time-Varying and Perturbed Systems p. 1/5 Introduction to Nonlinear Control Lecture # 3 Time-Varying and Perturbed Systems p. 2/5 Time-varying Systems ẋ = f(t, x) f(t, x) is piecewise continuous in t and locally Lipschitz in x for all t

More information

Dynamical Systems. August 13, 2013

Dynamical Systems. August 13, 2013 Dynamical Systems Joshua Wilde, revised by Isabel Tecu, Takeshi Suzuki and María José Boccardi August 13, 2013 Dynamical Systems are systems, described by one or more equations, that evolve over time.

More information

Duality Theory of Constrained Optimization

Duality Theory of Constrained Optimization Duality Theory of Constrained Optimization Robert M. Freund April, 2014 c 2014 Massachusetts Institute of Technology. All rights reserved. 1 2 1 The Practical Importance of Duality Duality is pervasive

More information

University of Warwick, EC9A0 Maths for Economists Lecture Notes 10: Dynamic Programming

University of Warwick, EC9A0 Maths for Economists Lecture Notes 10: Dynamic Programming University of Warwick, EC9A0 Maths for Economists 1 of 63 University of Warwick, EC9A0 Maths for Economists Lecture Notes 10: Dynamic Programming Peter J. Hammond Autumn 2013, revised 2014 University of

More information

A : k n. Usually k > n otherwise easily the minimum is zero. Analytical solution:

A : k n. Usually k > n otherwise easily the minimum is zero. Analytical solution: 1-5: Least-squares I A : k n. Usually k > n otherwise easily the minimum is zero. Analytical solution: f (x) =(Ax b) T (Ax b) =x T A T Ax 2b T Ax + b T b f (x) = 2A T Ax 2A T b = 0 Chih-Jen Lin (National

More information

Theorem 1. ẋ = Ax is globally exponentially stable (GES) iff A is Hurwitz (i.e., max(re(σ(a))) < 0).

Theorem 1. ẋ = Ax is globally exponentially stable (GES) iff A is Hurwitz (i.e., max(re(σ(a))) < 0). Linear Systems Notes Lecture Proposition. A M n (R) is positive definite iff all nested minors are greater than or equal to zero. n Proof. ( ): Positive definite iff λ i >. Let det(a) = λj and H = {x D

More information

Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization. Nick Gould (RAL)

Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization. Nick Gould (RAL) Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization Nick Gould (RAL) x IR n f(x) subject to c(x) = Part C course on continuoue optimization CONSTRAINED MINIMIZATION x

More information

1. Algebraic and geometric treatments Consider an LP problem in the standard form. x 0. Solutions to the system of linear equations

1. Algebraic and geometric treatments Consider an LP problem in the standard form. x 0. Solutions to the system of linear equations The Simplex Method Most textbooks in mathematical optimization, especially linear programming, deal with the simplex method. In this note we study the simplex method. It requires basically elementary linear

More information

Alberto Bressan. Department of Mathematics, Penn State University

Alberto Bressan. Department of Mathematics, Penn State University Non-cooperative Differential Games A Homotopy Approach Alberto Bressan Department of Mathematics, Penn State University 1 Differential Games d dt x(t) = G(x(t), u 1(t), u 2 (t)), x(0) = y, u i (t) U i

More information

How to Characterize Solutions to Constrained Optimization Problems

How to Characterize Solutions to Constrained Optimization Problems How to Characterize Solutions to Constrained Optimization Problems Michael Peters September 25, 2005 1 Introduction A common technique for characterizing maximum and minimum points in math is to use first

More information

Mathematical Economics: Lecture 16

Mathematical Economics: Lecture 16 Mathematical Economics: Lecture 16 Yu Ren WISE, Xiamen University November 26, 2012 Outline 1 Chapter 21: Concave and Quasiconcave Functions New Section Chapter 21: Concave and Quasiconcave Functions Concave

More information

Deterministic Optimal Control

Deterministic Optimal Control Online Appendix A Deterministic Optimal Control As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality. Albert Einstein (1879

More information

Differentiable Welfare Theorems Existence of a Competitive Equilibrium: Preliminaries

Differentiable Welfare Theorems Existence of a Competitive Equilibrium: Preliminaries Differentiable Welfare Theorems Existence of a Competitive Equilibrium: Preliminaries Econ 2100 Fall 2017 Lecture 19, November 7 Outline 1 Welfare Theorems in the differentiable case. 2 Aggregate excess

More information

OPTIMIZATION THEORY IN A NUTSHELL Daniel McFadden, 1990, 2003

OPTIMIZATION THEORY IN A NUTSHELL Daniel McFadden, 1990, 2003 OPTIMIZATION THEORY IN A NUTSHELL Daniel McFadden, 1990, 2003 UNCONSTRAINED OPTIMIZATION 1. Consider the problem of maximizing a function f:ú n 6 ú within a set A f ú n. Typically, A might be all of ú

More information

subject to (x 2)(x 4) u,

subject to (x 2)(x 4) u, Exercises Basic definitions 5.1 A simple example. Consider the optimization problem with variable x R. minimize x 2 + 1 subject to (x 2)(x 4) 0, (a) Analysis of primal problem. Give the feasible set, the

More information