CHAPTER 2  THE MAXIMUM PRINCIPLE: CONTINUOUS TIME


THE MAXIMUM PRINCIPLE: CONTINUOUS TIME

Main purpose: introduce the maximum principle as a necessary condition to be satisfied by any optimal control.

- Necessary conditions for optimization of dynamic systems.
- General derivation by Pontryagin et al.
- A simple (but not completely rigorous) proof using dynamic programming.
- Examples.
- Statement of sufficiency conditions.
- Computational method.

2.1 STATEMENT OF THE PROBLEM

Optimal control theory deals with the problem of optimization of dynamic systems.

2.1.1 THE MATHEMATICAL MODEL

State equation:

    ẋ(t) = f(x(t), u(t), t),  x(0) = x_0,    (1)

where x(t) ∈ E^n is the vector of state variables, u(t) ∈ E^m is the vector of control variables, and f : E^n × E^m × E^1 → E^n. The function f is assumed to be continuously differentiable.

2.1.1 THE MATHEMATICAL MODEL CONT.

The path x(t), t ∈ [0,T], is called a state trajectory, and u(t), t ∈ [0,T], is called a control trajectory. An admissible control u(t), t ∈ [0,T], is piecewise continuous and satisfies, in addition,

    u(t) ∈ Ω(t) ⊂ E^m,  t ∈ [0,T].    (2)

2.1.3 THE OBJECTIVE FUNCTION

The objective function is defined as

    J = ∫_0^T F(x(t), u(t), t) dt + S[x(T), T],    (3)

where the functions F : E^n × E^m × E^1 → E^1 and S : E^n × E^1 → E^1 are assumed to be continuously differentiable.

2.1.4 THE OPTIMAL CONTROL PROBLEM

The problem is to find an admissible control u*, which maximizes the objective function (3) subject to the state equation (1) and the control constraints (2). We now restate the optimal control problem as:

    max_{u(t) ∈ Ω(t)} { J = ∫_0^T F(x,u,t) dt + S[x(T), T] }
    subject to                                                    (4)
    ẋ = f(x,u,t),  x(0) = x_0.

The control u* is called an optimal control, and x* is called the corresponding optimal trajectory. The optimal value of the objective function will be denoted J(u*) or simply J*.

THREE CASES

Case 1. The optimal control problem (4) is said to be in Bolza form.
Case 2. When S ≡ 0, it is said to be in Lagrange form.
Case 3. When F ≡ 0, it is said to be in Mayer form. If F ≡ 0 and S is linear, it is in linear Mayer form, i.e.,

    max_{u(t) ∈ Ω(t)} { J = c x(T) }
    subject to                             (5)
    ẋ = f(x,u,t),  x(0) = x_0,

where c = (c_1, c_2, ..., c_n) ∈ E^n.

FORMS

The Bolza form can be reduced to the linear Mayer form. Introduce a new state vector y = (y_1, y_2, ..., y_{n+1}), having n + 1 components defined as follows: y_i = x_i for i = 1, ..., n, and

    ẏ_{n+1} = F(x,u,t) + S_x(x,t) f(x,u,t) + S_t(x,t),  y_{n+1}(0) = S(x_0, 0),    (6)

so that the objective function is J = ĉ y(T) = y_{n+1}(T), where ĉ = (0, 0, ..., 0, 1) has n zeros. If we now integrate (6) from 0 to T, we have

    J = ĉ y(T) = y_{n+1}(T) = ∫_0^T F(x,u,t) dt + S[x(T), T],    (7)

which is the same as the objective function J in (4).

PRINCIPLE OF OPTIMALITY

Statement of Bellman's Optimality Principle:

"An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision."

PRINCIPLE OF OPTIMALITY CONT.

(Figure: paths from A to E through B, with an alternative path from B to E through C.)

ASSERTION: If ABE (shown in blue) is an optimal path from A to E, then BE (in blue) is an optimal path from B to E.

PROOF: Suppose it is not. Then there is another path (whose existence is assumed here) BCE (in red), which is optimal from B to E, i.e., J_BCE > J_BE. But then

    J_ABE = J_AB + J_BE < J_AB + J_BCE = J_ABCE.

This contradicts the hypothesis that ABE is an optimal path from A to E.

A DYNAMIC PROGRAMMING EXAMPLE

Stagecoach Problem. (Figure: a staged network of nodes 1 through 10, with arc costs c_{i,j} on the arcs.)

A DYNAMIC PROGRAMMING EXAMPLE CONT.

Stagecoach Problem. (Figure: the staged network with a greedy path highlighted.)

A greedy (or myopic) path minimizes the cost at each stage. However, this need not be the minimal-cost solution: in this network another path yields a cheaper total cost than the greedy one.

SOLUTION TO THE DP EXAMPLE

Let 1 → u_1 → u_2 → u_3 → 10 be the optimal path. Let f_n(s, u_n) be the minimal cost at stage n, given that the current state is s and the decision taken is u_n. Let f_n*(s) be the minimal cost at stage n if the current state is s. Then

    f_n*(s) = min_{u_n} f_n(s, u_n) = min_{u_n} { c_{s,u_n} + f_{n+1}*(u_n) }.

This is the Recursion Equation of DP. It can be solved by a backward procedure, which starts at the terminal stage and stops at the initial stage.

SOLUTION TO THE DP EXAMPLE CONT.

Stage 4. (Table: f_4*(s) and the optimal decision u_4* for each state s.)

Stage 3. (Table: f_3(s, u_3) = c_{s,u_3} + f_4*(u_3), its minimum f_3*(s), and the optimal u_3* for each state s.)

SOLUTION TO THE DP EXAMPLE CONT.

Stage 2. (Table: f_2(s, u_2) = c_{s,u_2} + f_3*(u_2), its minimum f_2*(s), and the optimal u_2* for each state s.)

Stage 1. (Table: f_1(s, u_1) = c_{s,u_1} + f_2*(u_1), its minimum f_1*(s), and the optimal u_1*.)

The optimal paths: Total Cost = 11.
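The slides' arc-cost table did not survive transcription, so the sketch below supplies hypothetical costs, chosen so that the optimal total cost comes out to 11 as on the slide; the network data are our assumption, but the recursion is exactly f_n*(s) = min_{u_n} { c_{s,u_n} + f_{n+1}*(u_n) }, solved backward.

```python
# Backward DP for a stagecoach-style network. The arc costs are made up
# (the slide's table is unrecoverable); only the recursion is from the text.

def solve_stagecoach(costs, terminal):
    """costs[s] maps state s to {successor: arc cost}; terminal is the goal node."""
    f = {terminal: (0.0, None)}            # f[s] = (cost-to-go f*(s), best decision u*)
    for s in sorted(costs, reverse=True):  # later stages carry larger node labels
        f[s] = min((c + f[u][0], u) for u, c in costs[s].items())
    return f

costs = {  # hypothetical costs c_{i,j}; nodes 1..10, destination 10
    1: {2: 2, 3: 4, 4: 3},
    2: {5: 7, 6: 4, 7: 6},
    3: {5: 3, 6: 2, 7: 4},
    4: {5: 4, 6: 1, 7: 5},
    5: {8: 1, 9: 4},
    6: {8: 6, 9: 3},
    7: {8: 3, 9: 3},
    8: {10: 3},
    9: {10: 4},
}
f = solve_stagecoach(costs, terminal=10)
path, s = [1], 1
while s != 10:                 # recover the optimal path from the stored decisions
    s = f[s][1]
    path.append(s)
print(f[1][0], path)           # -> 11.0 [1, 3, 5, 8, 10] with these made-up costs
```

With these costs the greedy path 1 → 2 → 6 → 9 → 10 totals 13, illustrating the myopic pitfall noted above.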

2.2 DYNAMIC PROGRAMMING AND THE MAXIMUM PRINCIPLE

The Hamilton-Jacobi-Bellman Equation. Define the value function

    V(x,t) = max_{u(s) ∈ Ω(s)} { ∫_t^T F(x(s), u(s), s) ds + S(x(T), T) },    (8)

where for s ≥ t,

    dx/ds = f(x(s), u(s), s),  x(t) = x.

FIGURE 2.1  AN OPTIMAL PATH IN THE STATE-TIME SPACE

TWO CHANGES IN THE OBJECTIVE FUNCTION

(i) The incremental change in J from t to t + δt, given by the integral of F(x,u,t) from t to t + δt.
(ii) The value function V(x + δx, t + δt) at time t + δt.

In equation form this is

    V(x,t) = max_{u(τ) ∈ Ω(τ), τ ∈ [t, t+δt]} { ∫_t^{t+δt} F[x(τ), u(τ), τ] dτ + V[x(t+δt), t+δt] }.    (9)

Since F is a continuous function, the integral in (9) is approximately F(x,u,t)δt, so we can rewrite (9) as

    V(x,t) = max_{u ∈ Ω(t)} { F(x,u,t)δt + V[x(t+δt), t+δt] } + o(δt).    (10)

TWO CHANGES IN THE OBJECTIVE FUNCTION CONT.

Assume that the value function V is continuously differentiable. Using the Taylor series expansion of V with respect to δt, we obtain

    V[x(t+δt), t+δt] = V(x,t) + [V_x(x,t)ẋ + V_t(x,t)]δt + o(δt).    (11)

Substituting for ẋ from (1), we obtain

    V(x,t) = max_{u ∈ Ω(t)} { F(x,u,t)δt + V(x,t) + V_x(x,t)f(x,u,t)δt + V_t(x,t)δt } + o(δt).    (12)

Cancelling V(x,t) on both sides and then dividing by δt, we get

    0 = max_{u ∈ Ω(t)} { F(x,u,t) + V_x(x,t)f(x,u,t) + V_t(x,t) } + o(δt)/δt.    (13)

TWO CHANGES IN THE OBJECTIVE FUNCTION CONT.

Letting δt → 0 in (13) gives the equation

    0 = max_{u ∈ Ω(t)} { F(x,u,t) + V_x(x,t)f(x,u,t) + V_t(x,t) },    (14)

for which the boundary condition is

    V(x,T) = S(x,T).    (15)

V_x(x,t) can be interpreted as the marginal contribution vector of the state variable x to the maximized objective function. Along the optimal path, denote it by the adjoint (row) vector λ(t) ∈ E^n, i.e.,

    λ(t) = V_x(x*(t), t) := V_x(x,t)|_{x = x*(t)}.    (16)

HAMILTON-JACOBI-BELLMAN EQUATION

We introduce the so-called Hamiltonian

    H[x, u, V_x, t] = F(x,u,t) + V_x(x,t)f(x,u,t)    (17)

or, simply,

    H(x, u, λ, t) = F(x,u,t) + λf(x,u,t).    (18)

We can rewrite equation (14) as

    0 = max_{u ∈ Ω(t)} [ H(x, u, V_x, t) + V_t ],    (19)

called the Hamilton-Jacobi-Bellman equation or, simply, the HJB equation.
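To make (17)-(19) concrete, here is a small symbolic check, added here and not part of the slides, for the problem max ∫_0^1 -x dt with ẋ = u, u ∈ [-1, 1] (solved as Example 2.1 below). The closed-form candidate V is our own, obtained by integrating F along the control u = -1.

```python
# Symbolic verification of the HJB equation (19) on a simple example.
# Candidate value function (our assumption): V(x,t) = -x(1-t) + (1-t)^2/2.
import sympy as sp

x, t, u = sp.symbols('x t u')
V = -x*(1 - t) + (1 - t)**2/2

H_plus_Vt = -x + sp.diff(V, x)*u + sp.diff(V, t)   # F + V_x f + V_t with F = -x, f = u
# V_x = t - 1 <= 0 on [0,1], so the maximizer over u in [-1,1] is u = -1.
assert sp.simplify(H_plus_Vt.subs(u, -1)) == 0     # (19): 0 = max_u {H + V_t}
assert V.subs(t, 1) == 0                           # boundary condition (15) with S = 0
assert sp.simplify(sp.diff(V, x) - (t - 1)) == 0   # V_x = t - 1, the adjoint of (16)
```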

HAMILTONIAN MAXIMIZING CONDITION

From (19), we can get the Hamiltonian maximizing condition of the maximum principle:

    H[x*(t), u*(t), λ(t), t] + V_t(x*(t), t) ≥ H[x*(t), u, λ(t), t] + V_t(x*(t), t)    (20)

for all u ∈ Ω(t). Cancelling the term V_t on both sides, we obtain

    H[x*(t), u*(t), λ(t), t] ≥ H[x*(t), u, λ(t), t]    (21)

for all u ∈ Ω(t).

Remark: H decouples the problem over time by means of λ(t), which is analogous to dual variables or shadow prices in linear programming.

2.2.2 DERIVATION OF THE ADJOINT EQUATION

Let

    x(t) = x*(t) + δx(t),    (22)

where ‖δx(t)‖ < ε for a small positive ε. Fix t and use the HJB equation (19) to compare

    H[x*(t), u*(t), V_x(x*(t),t), t] + V_t(x*(t),t) ≥ H[x(t), u*(t), V_x(x(t),t), t] + V_t(x(t),t).    (23)

The LHS = 0 by (19), since u*(t) maximizes H + V_t at x*(t). The RHS would be zero if u*(t) also maximized H + V_t with x(t) as the state. In general x(t) ≠ x*(t), and thus RHS ≤ 0. But then, since RHS|_{x(t)=x*(t)} = 0, the RHS is maximized at x(t) = x*(t).

2.2.2 DERIVATION OF THE ADJOINT EQUATION CONT.

Since x(t) is unconstrained, we have

    ∂(RHS)/∂x |_{x(t) = x*(t)} = 0,

or

    H_x[x*(t), u*(t), V_x(x*(t),t), t] + V_{tx}(x*(t),t) = 0.    (24)

By the definition of H,

    F_x + V_x f_x + f^T V_{xx} + V_{tx} = F_x + V_x f_x + (V_{xx} f)^T + V_{tx} = 0.    (25)

Note: (25) assumes V to be twice continuously differentiable. See (1.16) or Exercise 1.9 for details.

2.2.2 DERIVATION OF THE ADJOINT EQUATION CONT.

By the definition (16) of λ(t), the row vector dV_x/dt along the optimal path is

    dV_x/dt = (dV_{x_1}/dt, dV_{x_2}/dt, ..., dV_{x_n}/dt)
            = (V_{x_1 x} ẋ + V_{x_1 t}, V_{x_2 x} ẋ + V_{x_2 t}, ..., V_{x_n x} ẋ + V_{x_n t})
            = (Σ_{i=1}^n V_{x_1 x_i} ẋ_i, Σ_{i=1}^n V_{x_2 x_i} ẋ_i, ..., Σ_{i=1}^n V_{x_n x_i} ẋ_i) + (V_x)_t
            = (V_{xx} ẋ)^T + V_{xt} = (V_{xx} f)^T + V_{tx},    (26)

where V_{x_1 x} = (V_{x_1 x_1}, V_{x_1 x_2}, ..., V_{x_1 x_n}), V_{xx} = [V_{x_i x_j}] is the n × n Hessian of V with respect to x, and V_{xx} ẋ is a column vector.

2.2.2 DERIVATION OF THE ADJOINT EQUATION CONT.

Using (25) and (26), we have

    dV_x/dt = -F_x - V_x f_x.    (27)

Using (16), we have

    λ̇ = -F_x - λf_x,

and from (18),

    λ̇ = -H_x.    (28)

TERMINAL BOUNDARY OR TRANSVERSALITY CONDITION

    λ(T) = ∂S(x,T)/∂x |_{x = x(T)} = S_x[x(T), T].    (29)

Equations (28) and (29) determine the adjoint variables. From (18), we can rewrite the state equation as

    ẋ = f = H_λ.    (30)

From (28), (29), (30) and (1), we get

    ẋ = H_λ,  x(0) = x_0,
    λ̇ = -H_x,  λ(T) = S_x[x(T), T],    (31)

called a canonical system of equations or canonical adjoints.

FREE AND FIXED END POINT PROBLEMS

In our problem the terminal state x(T) is free. In this case, with S ≡ 0 (no salvage value), the transversality condition is

    λ(T) = 0.

However, if x(T) is not free, that is, x(T) is fixed (given), then

    λ(T) = a constant to be determined.

In other words, λ(T) is free.

2.2.3 THE MAXIMUM PRINCIPLE

The necessary conditions for u* to be an optimal control are:

    ẋ* = f(x*, u*, t),  x*(0) = x_0,
    λ̇ = -H_x[x*, u*, λ, t],  λ(T) = S_x[x*(T), T],    (32)
    H[x*(t), u*(t), λ(t), t] ≥ H[x*(t), u, λ(t), t]

for all u ∈ Ω(t), t ∈ [0, T].
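The conditions (32) pair a forward state equation with a backward adjoint equation, which suggests an iterative numerical scheme. Below is a minimal forward-backward sweep sketch, our own illustration rather than anything on these slides, specialized to the data of Example 2.1 below (F = -x, f = u, Ω = [-1,1], T = 1): integrate x forward under the current control, integrate λ backward, reset u from the Hamiltonian maximizing condition, and repeat.

```python
# Forward-backward sweep for (32) on Example 2.1's data (a sketch, with our
# own choices of grid, damping, and tolerance). H = -x + λu is maximized over
# Ω = [-1, 1] by u = sign(λ).
import numpy as np

T, N = 1.0, 1000
dt = T / N
u = np.zeros(N + 1)                        # initial control guess

for _ in range(100):
    x = np.empty(N + 1); x[0] = 1.0        # forward sweep:  ẋ = u, x(0) = 1
    for k in range(N):
        x[k + 1] = x[k] + dt * u[k]
    lam = np.empty(N + 1); lam[N] = 0.0    # backward sweep: λ' = -H_x = 1, λ(T) = 0
    for k in range(N, 0, -1):
        lam[k - 1] = lam[k] - dt
    u_new = np.where(lam >= 0, 1.0, -1.0)  # Hamiltonian maximizing condition (ties -> +1)
    if np.max(np.abs(u_new - u)) < 1e-8:
        break
    u = 0.5 * (u + u_new)                  # damped update; helps on harder problems

# Here λ(t) = t - 1 < 0 on [0, 1), so the sweep settles at u* = -1, matching Example 2.1.
```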

2.2.4 ECONOMIC INTERPRETATION OF THE MAXIMUM PRINCIPLE

Recall

    J = ∫_0^T F(x,u,t) dt + S[x(T), T],

where F is the instantaneous profit rate per unit of time and S[x,T] is the salvage value. Multiplying (18) formally by dt and using the state equation (1) gives

    H dt = F dt + λf dt = F dt + λẋ dt = F dt + λ dx.

- F(x,u,t) dt: direct contribution to J from t to t + dt.
- λ dx: indirect contribution to J due to the added capital dx.
- H dt: total contribution to J from t to t + dt, when x(t) = x and u(t) = u in the interval [t, t + dt].

2.2.4 ECONOMIC INTERPRETATION OF THE MAXIMUM PRINCIPLE CONT.

By (28) and (29), we have

    λ̇ = -H_x = -F_x - λf_x,  λ(T) = S_x[x(T), T].

Rewrite the first equation as

    -dλ = H_x dt = F_x dt + λf_x dt.

- -dλ: marginal cost of holding the capital x from t to t + dt.
- H_x dt: marginal revenue of investing the capital.
- F_x dt: direct marginal contribution.
- λf_x dt: indirect marginal contribution.

Thus, the adjoint equation says that MC = MR along an optimal path.

EXAMPLE 2.1

Consider the problem:

    max { J = ∫_0^1 -x dt }    (33)

subject to the state equation

    ẋ = u,  x(0) = 1    (34)

and the control constraint

    u ∈ Ω = [-1, 1].    (35)

Note that T = 1, F = -x, S ≡ 0, and f = u. Because F = -x, we can interpret the problem as one of minimizing the (signed) area under the curve x(t) for 0 ≤ t ≤ 1.

SOLUTION OF EXAMPLE 2.1

We form the Hamiltonian

    H = -x + λu.    (36)

Because the Hamiltonian is linear in u, the form of the optimal control, i.e., the one that maximizes the Hamiltonian, is

    u*(t) =  1           if λ(t) > 0,
             undefined   if λ(t) = 0,    (37)
            -1           if λ(t) < 0.

This is called bang-bang control. In the notation of Section 1.4,

    u*(t) = bang[-1, 1; λ(t)].    (38)

SOLUTION OF EXAMPLE 2.1 CONT.

To find λ, we write the adjoint equation

    λ̇ = -H_x = 1,  λ(1) = S_x[x(1), 1] = 0.    (39)

Because this equation does not involve x and u, we can easily solve it as

    λ(t) = t - 1.    (40)

It follows that λ(t) = t - 1 ≤ 0 for all t ∈ [0, 1], with strict inequality for t < 1. So u*(t) = -1, t ∈ [0, 1). Since λ(1) = 0, we can also set u*(1) = -1, which defines u* at the single point t = 1. We thus have the optimal control u*(t) = -1 for all t ∈ [0, 1].

SOLUTION OF EXAMPLE 2.1 CONT.

Substituting this into the state equation (34), we have

    ẋ = -1,  x(0) = 1,    (41)

whose solution is

    x*(t) = 1 - t for t ∈ [0, 1].    (42)

The graphs of the optimal state and adjoint trajectories appear in Figure 2.2. Note that the optimal value of the objective function is J* = -1/2.

FIGURE 2.2  OPTIMAL STATE AND ADJOINT TRAJECTORIES FOR EXAMPLE 2.1

EXAMPLE 2.2

Let us solve the same problem as in Example 2.1 over the interval [0, 2], so that the objective is to

    maximize { J = ∫_0^2 -x dt }.    (43)

As before, the dynamics and the control constraint are (34) and (35), respectively. Here we want to minimize the signed area between the horizontal axis and the trajectory of x(t) for 0 ≤ t ≤ 2.

SOLUTION OF EXAMPLE 2.2

As before, the Hamiltonian is defined by (36) and the optimal control is as in (38). The adjoint equation

    λ̇ = 1,  λ(2) = 0    (44)

is the same as (39), except that now T = 2 instead of T = 1. The solution of (44) is easily found to be

    λ(t) = t - 2,  t ∈ [0, 2].    (45)

Hence the state equation (41) and its solution (42), now on [0, 2], are exactly the same. The graph of λ(t) is shown in Figure 2.3. The optimal value of the objective function is J* = 0.

FIGURE 2.3  OPTIMAL STATE AND ADJOINT TRAJECTORIES FOR EXAMPLE 2.2

EXAMPLE 2.3

The next example is:

    max { J = ∫_0^1 -(1/2)x^2 dt }    (46)

subject to the same constraints as in Example 2.1, namely,

    ẋ = u,  x(0) = 1,  u ∈ Ω = [-1, 1].    (47)

Here F = -(1/2)x^2, so the interpretation of the objective function (46) is that we are trying to find the trajectory x(t) for which the area under the curve (1/2)x^2 is minimized.

SOLUTION OF EXAMPLE 2.3

The Hamiltonian is

    H = -(1/2)x^2 + λu,    (48)

which is linear in u, so the optimal policy is

    u*(t) = bang[-1, 1; λ(t)].    (49)

The adjoint equation is

    λ̇ = -H_x = x,  λ(1) = 0.    (50)

Here the adjoint equation involves x, so we cannot solve it directly. Because the state equation (47) involves u, which depends on λ, we also cannot integrate it independently without knowing λ.

SOLUTION OF EXAMPLE 2.3 CONT.

A way out of this dilemma is to use some intuition. Since we want to minimize the area under (1/2)x^2 and since x(0) = 1, it is clear that we want x to decrease as quickly as possible. Let us therefore temporarily assume that λ is nonpositive in the interval [0, 1], so that from (49) we have u* = -1 throughout the interval. (In Exercise 2.5, you will be asked to show that this assumption is correct.) With this assumption, we can solve (47) as

    x*(t) = 1 - t.    (51)

SOLUTION OF EXAMPLE 2.3 CONT.

Substituting this into (50) gives

    λ̇ = 1 - t.

Integrating both sides of this equation from t to 1 gives

    ∫_t^1 λ̇(τ) dτ = ∫_t^1 (1 - τ) dτ,

or

    λ(1) - λ(t) = (τ - (1/2)τ^2) |_t^1,

which, using λ(1) = 0, yields

    λ(t) = -(1/2)t^2 + t - 1/2.    (52)

SOLUTION OF EXAMPLE 2.3 CONT.

The reader may now verify that λ(t) = -(1/2)(t - 1)^2 is nonpositive in the interval [0, 1], justifying our original assumption. Hence, (51) and (52) satisfy the necessary conditions. In Exercise 2.6, you will be asked to show that they also satisfy the sufficient conditions derived in Section 2.4, so that they are indeed optimal. Figure 2.4 shows the graphs of the optimal trajectories.
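As a quick check, not on the slides, the claims about (52) can be confirmed symbolically:

```python
# Verify that (52) solves the adjoint equation (50) under u* = -1 and is
# nonpositive on [0, 1].
import sympy as sp

t = sp.symbols('t')
x = 1 - t                                              # state (51) under u* = -1
lam = -sp.Rational(1, 2)*t**2 + t - sp.Rational(1, 2)  # candidate λ(t), eq. (52)

assert sp.simplify(sp.diff(lam, t) - x) == 0           # λ' = x, eq. (50)
assert lam.subs(t, 1) == 0                             # terminal condition λ(1) = 0
assert sp.simplify(lam + (t - 1)**2/2) == 0            # λ(t) = -(t-1)^2/2 <= 0 on [0,1]
```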

FIGURE 2.4  OPTIMAL TRAJECTORIES FOR EXAMPLE 2.3 AND EXAMPLE 2.4

EXAMPLE 2.4

Let us rework Example 2.3 with T = 2, i.e., solve the problem:

    max { J = ∫_0^2 -(1/2)x^2 dt }    (53)

subject to the constraints

    ẋ = u,  x(0) = 1,  u ∈ Ω = [-1, 1].

It would clearly be optimal if we could keep x*(t) = 0 for t ∈ [1, 2]. This is possible by setting u*(t) = 0 for t ∈ [1, 2]. Note that u*(t) = 0, t ∈ [1, 2], is a singular control, since λ(t) = 0 on [1, 2] and the Hamiltonian maximizing condition then gives no information about u*. See Figure 2.4.

EXAMPLE 2.5

The problem is:

    max { J = ∫_0^2 (2x - 3u - u^2) dt }    (54)

subject to

    ẋ = x + u,  x(0) = 5    (55)

and the control constraint

    u ∈ Ω = [0, 2].    (56)

SOLUTION OF EXAMPLE 2.5

The Hamiltonian is

    H = (2x - 3u - u^2) + λ(x + u) = (2 + λ)x - (u^2 + 3u - λu).    (57)

Maximizing H with respect to u, ignoring the control constraint for the moment, gives

    u(t) = (λ(t) - 3)/2.    (58)

The adjoint equation is

    λ̇ = -H_x = -(2 + λ),  λ(2) = 0.    (59)

SOLUTION OF EXAMPLE 2.5 CONT.

Using the integrating factor e^t as shown in Appendix A.6, the solution of (59) is

    λ(t) = 2(e^{2-t} - 1),

so that (λ(t) - 3)/2 = e^{2-t} - 2.5. The optimal control is

    u*(t) =  2               if e^{2-t} - 2.5 > 2,
             e^{2-t} - 2.5   if 0 ≤ e^{2-t} - 2.5 ≤ 2,    (60)
             0               if e^{2-t} - 2.5 < 0.

It can be written as u*(t) = sat[0, 2; e^{2-t} - 2.5].
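A short numerical companion, added here for illustration, evaluates (60) and locates the two times at which u* switches between its saturated and interior segments:

```python
# Evaluate u*(t) = sat[0, 2; e^{2-t} - 2.5] from (60) and find its switch times.
import numpy as np

def u_star(t):
    return np.clip(np.exp(2.0 - t) - 2.5, 0.0, 2.0)  # sat[0, 2; .]

t1 = 2.0 - np.log(4.5)  # e^{2-t} - 2.5 = 2  =>  t1 ≈ 0.496: u* leaves the bound 2
t2 = 2.0 - np.log(2.5)  # e^{2-t} - 2.5 = 0  =>  t2 ≈ 1.084: u* reaches the bound 0
print(t1, t2, u_star(np.array([0.0, 1.0, 2.0])))      # u*(0) = 2, u*(2) = 0
```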

FIGURE 2.5  OPTIMAL CONTROL FOR EXAMPLE 2.5
(The figure plots the curve e^{2-t} - 2.5 and the saturated control u*(t) against t.)

2.4 SUFFICIENT CONDITIONS

Define the maximized Hamiltonian

    H^0(x, λ, t) = max_{u ∈ Ω(t)} H(x, u, λ, t),    (61)

and let u^0 be a maximizing control, so that

    H^0(x, λ, t) = H(x, u^0, λ, t).    (62)

The Envelope Theorem gives

    H^0_x(x, λ, t) = H_x(x, u^0, λ, t).    (63)

Here is why: differentiating (62) with respect to x,

    H^0_x(x, λ, t) = H_x(x, u^0, λ, t) + H_u(x, u^0, λ, t) u^0_x.    (64)

But

    H_u(x, u^0, λ, t) u^0_x = 0,    (65)

since, for each i and j, either H_{u_i} = 0 or ∂u^0_i/∂x_j = 0, or both.

THEOREM 2.1 (Sufficiency Conditions)

Let u*(t), with the corresponding x*(t) and λ(t), satisfy the maximum principle necessary conditions (32) for all t ∈ [0, T]. Then u* is an optimal control if H^0(x, λ(t), t) is concave in x for each t and S(x, T) is concave in x.

PROOF OF THEOREM 2.1

By definition of H^0,

    H[x(t), u(t), λ(t), t] ≤ H^0[x(t), λ(t), t].    (66)

Since H^0 is differentiable and concave in x,

    H^0[x(t), λ(t), t] ≤ H^0[x*(t), λ(t), t] + H^0_x[x*(t), λ(t), t][x(t) - x*(t)].    (67)

Using the Envelope Theorem (63), we can write

    H[x(t), u(t), λ(t), t] ≤ H[x*(t), u*(t), λ(t), t] + H_x[x*(t), u*(t), λ(t), t][x(t) - x*(t)].    (68)

PROOF OF THEOREM 2.1 CONT.

By the definition of H and the adjoint equation (28),

    F[x(t), u(t), t] + λ(t)f[x(t), u(t), t]
        ≤ F[x*(t), u*(t), t] + λ(t)f[x*(t), u*(t), t] - λ̇(t)[x(t) - x*(t)].    (69)

Using the state equation, this rearranges to

    F[x*(t), u*(t), t] - F[x(t), u(t), t] ≥ λ̇(t)[x(t) - x*(t)] + λ(t)[ẋ(t) - ẋ*(t)].    (70)

PROOF OF THEOREM 2.1 CONT.

Since S is differentiable and concave in x,

    S[x(T), T] ≤ S[x*(T), T] + S_x[x*(T), T][x(T) - x*(T)],    (71)

or

    S[x*(T), T] - S[x(T), T] + S_x[x*(T), T][x(T) - x*(T)] ≥ 0.    (72)

Integrating (70) from 0 to T (the right-hand side integrates to λ(T)[x(T) - x*(T)] - λ(0)[x(0) - x*(0)]) and adding (72), we obtain

    J(u*) ≥ J(u) - S_x[x*(T), T][x(T) - x*(T)] + λ(T)[x(T) - x*(T)] - λ(0)[x(0) - x*(0)].    (73)

Since λ(T) = S_x[x*(T), T] and x(0) = x*(0) = x_0, this reduces to

    J(u*) ≥ J(u).    (74)

EXAMPLE 2.6

Let us show that the problems in Examples 2.1 and 2.2 satisfy the sufficient conditions. We have from (36) and (61),

    H^0 = -x + λu^0,

where u^0 is given by (37). Since u^0 is a function of λ only, H^0(x, λ, t) is linear, and hence certainly concave, in x for any t and λ. Since S(x, T) ≡ 0, the sufficient conditions hold.

FIXED-END-POINT PROBLEM

Here the terminal values of the state variables are completely specified, i.e., x(T) = k ∈ E^n, where k is a vector of constants. Recall that the proof of the sufficiency conditions requires (73) to hold, i.e.,

    J(u*) ≥ J(u) - S_x[x*(T), T][x(T) - x*(T)] + λ(T)[x(T) - x*(T)] - λ(0)[x(0) - x*(0)].

Since x(T) - x*(T) = 0 (and x(0) = x*(0)), the extra terms on the RHS vanish regardless of the value of λ(T). This means that the sufficiency result goes through for any value of λ(T). Not surprisingly, therefore, the transversality condition in this case is merely

    λ(T) = β,    (75)

a constant vector to be determined.

SOLVING A TPBVP BY USING SPREADSHEET SOFTWARE

Example 2.7. Consider the problem:

    max { J = ∫_0^1 -(1/2)(x^2 + u^2) dt }

subject to

    ẋ = -x^3 + u,  x(0) = 5.    (76)

SOLVING A TPBVP BY USING SPREADSHEET SOFTWARE CONT.

Solution. The Hamiltonian is H = -(1/2)(x^2 + u^2) + λ(-x^3 + u), so maximizing H gives u = λ, and the two-point boundary value problem (TPBVP) is

    λ̇ = x + 3x^2 λ,  λ(1) = 0,    (77)
    ẋ = -x^3 + λ,  x(0) = 5.    (78)

Discretizing by Euler's method with step Δt gives

    x(t + Δt) = x(t) + [-x(t)^3 + λ(t)] Δt,  x(0) = 5,    (79)
    λ(t + Δt) = λ(t) + [x(t) + 3x(t)^2 λ(t)] Δt,  λ(1) = 0.    (80)

SOLVING A TPBVP BY USING SPREADSHEET SOFTWARE CONT.

Set Δt = 0.01, x(0) = 5, and an initial guess λ(0) = -0.2. Iterating (79)-(80) forward from t = 0 and adjusting the guess for λ(0) until the terminal condition λ(1) = 0 is met solves the TPBVP.

(The remaining slides display the spreadsheet implementation of the iteration (79)-(80) and its results.)
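For readers without spreadsheet software, the same shooting computation can be written in a few lines of Python. This is a sketch with our own bracket and iteration count, and it assumes that λ(1) is monotone in the guess λ(0) over the bracket:

```python
# Forward Euler on (79)-(80) from a guessed λ(0), with bisection on the guess
# so that λ(1) = 0, playing the role of the spreadsheet's solver.

def lam_at_T(lam0, x0=5.0, dt=0.01, T=1.0):
    x, lam = x0, lam0
    for _ in range(round(T / dt)):
        x, lam = (x + (-x**3 + lam) * dt,           # eq. (79)
                  lam + (x + 3 * x**2 * lam) * dt)  # eq. (80), using the old x
    return lam

lo, hi = -1.0, 0.0          # bracket (assumed): λ(1) < 0 at lo, λ(1) > 0 at hi
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if lam_at_T(mid) > 0.0:
        hi = mid
    else:
        lo = mid
print("lambda(0) =", 0.5 * (lo + hi))   # compare with the slide's guess λ(0) = -0.2
```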
