Direct optimization methods for solving a complex state constrained optimal control problem in microeconomics
Helmut Maurer^a, Hans Josef Pesch^b

^a Universität Münster, Institut für Numerische und Angewandte Mathematik, Einsteinstr. 62, Münster, Germany, maurer@math.uni-muenster.de
^b Universität Bayreuth, Fakultät für Mathematik und Physik, Universitätsstr. 30, 95440 Bayreuth, Germany, hans-josef.pesch@uni-bayreuth.de

Abstract

We analyze and solve a complex optimal control problem in microeconomics which has been investigated earlier in the literature. The complexity of the control problem originates from four control variables appearing linearly in the dynamics and several state inequality constraints. Thus the control problem offers a considerable challenge to the numerical analyst. We implement a hybrid optimization approach which combines two direct optimization methods. The first step consists in solving the discretized control problem by nonlinear programming methods. The second step is a refinement step where, in addition to the discretized control and state variables, the junction times between bang-bang, singular and boundary subarcs are optimized. The computed solutions are shown to satisfy precisely the necessary optimality conditions of the Maximum Principle, where the state constraints are directly adjoined to the Hamiltonian. Despite the complexity of the control structure, we are able to verify sufficient optimality conditions which are based on the concavity of the maximized Hamiltonian.

Key words: microeconomic control model, control of stock, labor and capital, state inequality constraints, direct optimization methods, bang-bang and singular control, verification of necessary and sufficient conditions

PACS: 49J15, 49K15, 58E17, 65K05

1 Introduction

The well-known microeconomic concern model of Lesourne, Leban [10] involves only capital flows as control and state variables.
Preprint submitted to Elsevier, 13 September 2007

Koslik, Breitner [8] and Winderl, Naumer [16] have developed an extended concern model that includes the production and employment sector. Besides its economic interest, the optimal control problem constitutes a considerable numerical challenge, since it comprises four control variables appearing linearly in the dynamics and several pure state inequality constraints.

In [8,16], a hybrid numerical approach has been developed to determine the complicated control switching structure. First, a discretized version of the control problem is solved by nonlinear programming methods. This method yields reliable estimates for the control and state variables on a fixed grid. The second step is a refinement step, where the control and state estimates are used in the so-called indirect method, which requires the solution of a boundary value problem (BVP) for the state and adjoint variables. For the concern model, it is extremely difficult to set up the BVP due to the presence of pure state constraints. For this reason, the authors of [8,16] have substituted the active state constraints by suitable mixed control-state constraints that are more tractable in the BVP formulation.

The purpose of this paper is twofold. First, we discuss direct optimization methods that provide solutions which satisfy precisely the Maximum Principle for state constrained optimal control problems. The second goal is to show that the computed solution satisfies a suitable type of sufficient optimality conditions.

The organisation of the paper is as follows. The concern model is presented in Section 2. In Section 3, necessary optimality conditions are discussed which are based on a Maximum Principle, where the state constraints are directly adjoined to the Hamiltonian. In Section 4, we present a hybrid optimization approach to solve the state constrained control problem. The first step is similar to that in [8,16] and differs only in that we apply the large-scale optimization methods developed by Büskens [2,3] and Wächter [15]. The second step is different from the one in [8,16].
Instead of trying to solve the BVP of the Maximum Principle, we optimize simultaneously the switching and junction times between bang-bang, singular and boundary arcs and the discretized control variables; cf. [5,13]. The computed control and state variables satisfy the Maximum Principle with high accuracy. Finally, in Section 5 we show that the computed solutions satisfy a suitable type of sufficient conditions.

2 Optimal control model for a concern with four control variables appearing linearly and state constraints

The microeconomic control model discussed in Koslik, Breitner [8] and Winderl, Naumer [16] has six state variables and four control variables,

    x = (S, L, Y, X, X_m, X_r) ∈ IR^6,   u = (S_c, L_c, Y_c, I) ∈ IR^4,
which have the following meaning. The stock S(t) is controlled by S_c(t); the number L(t) of employees is controlled by the employment rate L_c(t); the capital consists of loan capital Y(t) and equity capital X(t); the control Y_c(t) describes the borrowing of loan capital, while the owners of the equity capital choose by means of the investment control I(t) between an investment within the concern and an alternative investment X_m(t); the risk premium X_r(t) serves as a reserve fund for the safety of the capital owners; the risk premium rate is denoted by ρ_r(t). All parameters and functions appearing in the following quantities and differential equations are summarized in Table 1.

The production function (output) is assumed to be of Cobb-Douglas type:

    F(x) = F(L, Y, X) = α (X + Y)^{α_K} L^{α_L}.    (1)

Then the profit (gain) of the concern is given by

    G(x, u, t) = [ p(t)(F(x) − S_c) − σ S − ω(t) L ] / d(t) − ρ_K(t) Y − δ(X + Y).    (2)

The discount rate d(t) is defined as the solution of the differential equation

    ḋ(t) = d(t) ln(1 + i(t)),    (3)

where i(t) is the periodic inflation rate specified in Table 1. Note that in contrast to the presentation in [8,16], we do not treat d(t) as a state variable. In Section 5, this viewpoint will allow us to apply sufficient optimality conditions. The dynamics is governed by differential equations with fixed initial values,

    ẋ(t) = f(x(t), u(t), t),   x(0) = x_0,    (4)

which are given explicitly by

    Ṡ(t)   = S_c(t),                                              S(0)   = 10,
    L̇(t)   = L_c(t),                                              L(0)   = 3,
    Ẏ(t)   = Y_c(t),                                              Y(0)   = 5,
    Ẋ(t)   = I(t) + (1 − τ) [ G(x(t), u(t), t) − ρ_r(t) X(t) ],   X(0)   = 10,
    Ẋ_m(t) = −I(t) + (1 − τ) ρ_m(t) X_m(t),                       X_m(0) = 45,
    Ẋ_r(t) = (1 − τ) ρ_r(t) X(t),                                 X_r(0) = 0.    (5)
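For numerical experimentation, the right-hand side of (5) can be coded directly. The sketch below is ours, not the authors' implementation; the functions gain (the profit G from (2)), rho_r and rho_m are supplied by the caller, and the signs follow the reconstruction of (5) above.

```python
def concern_rhs(x, u, t, gain, rho_r, rho_m, tau=0.5):
    """Right-hand side f(x, u, t) of the state dynamics (5).

    x = (S, L, Y, X, X_m, X_r), u = (S_c, L_c, Y_c, I);
    gain(x, u, t) is the profit G from (2), tau is the tax rate.
    """
    S, L, Y, X, Xm, Xr = x
    Sc, Lc, Yc, I = u
    return [
        Sc,                                              # stock
        Lc,                                              # employment
        Yc,                                              # loan capital
        I + (1 - tau) * (gain(x, u, t) - rho_r(t) * X),  # equity capital
        -I + (1 - tau) * rho_m(t) * Xm,                  # alternative assets
        (1 - tau) * rho_r(t) * X,                        # risk reserve fund
    ]
```

A standard ODE integrator applied to this right-hand side reproduces the state trajectories once the controls are fixed.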
Table 1. Parameter and function values for the microeconomic control model.

    Notation     Formula / Value                Meaning
    t_f          10                             time horizon in years
    F(x)         α (X + Y)^{α_K} L^{α_L}        production function (output)
    α            1                              parameter in production function
    α_K          0.35                           elasticity of total capital K = X + Y
    α_L          0.5                            elasticity of labor
    k_l          8                              duration of economic cycle
    k_p(t)       π/2 + (2π/k_l) t               position in economic cycle
    ρ_K(t)       sin k_p(t)                     loan interest rate
    ρ_m(t)       sin k_p(t)                     current yield
    ρ_r(t)       ρ_K(t) · 0.5                   risk premium rate
    ρ_r^low(t)   (2/3)(ρ_K(t) − 0.8) + 0.2      risk premium rate for daring investors
    i(t)         sin k_p(t)                     inflation rate
    p            0.5                            constant selling price
    p(t)         0.5 + 0.1 sin(2π t / k_l)      variable selling price
    κ            0.8                            rate of maximal borrowing
    ω            2.0                            constant labor cost
    ω(t)         2.0 exp(0.2 t)                 increasing labor cost
    δ            0.44                           depreciation rate
    τ            0.5                            tax rate
    σ            0.1                            storage charges

The economic process is considered on a time interval t ∈ [0, t_f] with fixed time horizon t_f > 0. The control constraints are given as box constraints,

    S_c,min ≤ S_c(t) ≤ S_c,max,   L_c,min ≤ L_c(t) ≤ L_c,max,
    Y_c,min ≤ Y_c(t) ≤ Y_c,max,   I_min ≤ I(t) ≤ I_max,    (6)

for all t ∈ [0, t_f], where we choose the following data:

    S_c,min = −10, S_c,max = 10,  L_c,min = −1, L_c,max = 1,
    Y_c,min = −1, Y_c,max = 1,  I_min = −1, I_max = 1.
The control constraints are written as u(t) ∈ U, where the cube U ⊂ IR^4 is defined in an obvious way. The state inequality constraints are

    S_min = 5 ≤ S(t),   Y(t) ≤ κ X(t),   for all t ∈ [0, t_f].    (7)

The second state constraint imposes a maximal borrowing of loan capital. The further obvious state constraints x_i(t) ≥ 0 (i = 2, ..., 6) do not become active and will therefore be omitted in the analysis of necessary conditions.

Then the optimal control problem consists in determining a piecewise continuous (measurable) control u : [0, t_f] → IR^4 and an absolutely continuous state trajectory x : [0, t_f] → IR^6 that maximize the cost functional in Mayer form representing the joint capital of the capital owners,

    Φ(x(t_f), t_f) := X(t_f) + X_m(t_f) + (1 − τ) (p(t_f)/d(t_f)) S(t_f),    (8)

subject to the constraints (5)-(7).

3 Necessary optimality conditions

A survey on necessary and sufficient conditions for state constrained optimal control problems may be found in Hartl, Sethi, Vickson [7]. The optimal control problem (5)-(8) has the form of the control problem in Section 2 of [7], where the mixed control-state constraint is given by the simple box constraint (6). The state constraints (7) are written as

    h_1(x) := S − S_min,   h_2(x) := κ X − Y.    (9)

We choose the direct adjoining approach described in Section 4 of [7], in which the state constraints (9) are directly adjoined to the Hamiltonian. Necessary conditions require regularity conditions for the state constraints which are associated with the order of a state constraint; cf. Section 2 in [7]. Both state constraints in (9) have order one, since

    h_1^1(x, u, t) = ḣ_1 = Ṡ = S_c,
    h_2^1(x, u, t) = ḣ_2 = κ Ẋ − Ẏ = κ [ I + (1 − τ)(G(x, u, t) − ρ_r(t) X) ] − Y_c.    (10)

In view of the gain function (2) we obtain

    ∂h_1^1/∂u (x, u, t) = (1, 0, 0, 0),
    ∂h_2^1/∂u (x, u, t) = (−κ(1 − τ) p(t)/d(t), 0, −1, κ),    (11)
which implies that the regularity condition (2.11) in [7] holds:

    rank ( ∂h_1^1/∂u, ∂h_2^1/∂u ) (x, u, t) = 2.

In particular, this holds along any boundary arc with h_1(x(t)) = 0 or h_2(x(t)) = 0 for t ∈ [t_en, t_ex], where t_en, resp. t_ex, denotes the entry-time, resp. the exit-time, of the boundary arc. On a boundary arc with h_1(x(t)) = 0, the boundary control is given by

    h_1^1(x(t), u(t), t) = S_c(t) = 0   for t ∈ [t_en, t_ex].    (12)

However, the boundary control on a boundary arc with h_2(x(t)) = 0 is not uniquely defined by the relation h_2^1(x, u, t) = h_2^1(x, I, Y_c, t) = 0. On such a boundary arc, further relations defining the controls Y_c and I can be obtained from the following necessary conditions of the Maximum Principle, which is stated as Informal Theorem 4.1 in [7]. Since all control variables appear linearly in the control system, the Hamiltonian can be written as

    H(x, u, λ, t) = λ f(x, u, t)
                  = σ_Sc S_c + σ_Lc L_c + σ_Yc Y_c + σ_I I + R(x, λ, t),    (13)

    R(x, λ, t) = λ_X (1 − τ) ( p(t) α (X + Y)^{α_K} L^{α_L} − σ S − ω(t) L ) / d(t)
               − λ_X (1 − τ) ( ρ_K(t) Y + δ(X + Y) + ρ_r(t) X )
               + λ_Xm (1 − τ) ρ_m(t) X_m + λ_Xr (1 − τ) ρ_r(t) X.    (14)

The adjoint variable λ = (λ_S, λ_L, λ_Y, λ_X, λ_Xm, λ_Xr) ∈ IR^6 is a row vector. The factors of the control components in the Hamiltonian are the switching functions

    σ_Sc = λ_S − λ_X (1 − τ) p(t)/d(t),   σ_Lc = λ_L,   σ_Yc = λ_Y,   σ_I = λ_X − λ_Xm.    (15)

Note that the switching vector σ = (σ_Sc, σ_Lc, σ_Yc, σ_I) does not depend on the state variable x but only on the adjoint variable λ. This property will be important for the verification of sufficient conditions in Section 5. The Lagrangian is defined by adjoining the state constraints (9) directly to the Hamiltonian by a multiplier ν = (ν_1, ν_2) ∈ IR^2:

    L(x, u, λ, ν, t) = H(x, u, λ, t) + ν_1 (S − 5) + ν_2 (κ X − Y).    (16)
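Because the switching vector (15) involves only the adjoint λ and the scalar functions p(t), d(t), it can be evaluated independently of the state. A small sketch (function and argument names are ours):

```python
def switching_vector(lam, p_t, d_t, tau=0.5):
    """Switching functions (15); lam = (lam_S, lam_L, lam_Y, lam_X, lam_Xm, lam_Xr)."""
    lam_S, lam_L, lam_Y, lam_X, lam_Xm, _lam_Xr = lam
    return (
        lam_S - lam_X * (1 - tau) * p_t / d_t,  # sigma_Sc
        lam_L,                                  # sigma_Lc
        lam_Y,                                  # sigma_Yc
        lam_X - lam_Xm,                         # sigma_I
    )
```

That σ is independent of x is visible here: the state never enters the computation.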
For the economic control problem under investigation, the Maximum Principle can be rigorously justified using the techniques in Maurer [12]. Then the Informal Theorem 4.1 in Hartl et al. [7] gives the following necessary conditions. Let (x(t), u(t)) be an optimal pair for the control problem (5)-(8). For convenience, we drop asterisks or hats to denote an optimal solution. The notation [t] will be used to abbreviate arguments (x(t), u(t), λ(t), t). Then there exist a constant multiplier λ_0 ≥ 0, a piecewise absolutely continuous adjoint function λ : [0, t_f] → IR^6, a piecewise continuous multiplier function ν : [0, t_f] → IR^2, a vector η(τ) = (η_1(τ), η_2(τ)) ∈ IR^2 for each junction time τ with a boundary arc, and a multiplier γ = (γ_1, γ_2) ∈ IR^2 with (λ_0, λ(t), ν(t), η(τ), γ) ≠ 0, such that the following conditions hold for a.e. t ∈ [0, t_f]:

the maximum condition

    u(t) = arg max_{u ∈ U} H(x(t), u, λ(t), t),    (17)

the adjoint equation

    λ̇(t) = −L_x[t],    (18)

the jump condition at a junction time τ,

    λ(τ−) = λ(τ+) + η(τ) h_x[τ],    (19)

the transversality condition at the terminal time,

    λ(t_f) = λ_0 Φ_x[t_f] + γ_1 (h_1)_x[t_f] + γ_2 (h_2)_x[t_f],    (20)

and the complementarity conditions

    ν(t) ≥ 0,   ν(t) h[t] = 0,   γ_i ≥ 0,   γ_i h_i[t_f] = 0   (i = 1, 2).    (21)

The evaluation of the adjoint equations on the basis of equations (13), (14) and (16) is left to the reader. The transversality condition (20) yields, in view of the cost functional (8) and the state constraints (9):

    λ_S(t_f) = λ_0 (1 − τ) p(t_f)/d(t_f) + γ_1,   λ_L(t_f) = 0,   λ_Y(t_f) = −γ_2,
    λ_X(t_f) = λ_0 + γ_2 κ,   λ_Xm(t_f) = λ_0,   λ_Xr(t_f) = 0.    (22)

The numerical results in the next section show that the normality condition λ_0 = 1 holds and the multipliers γ_1, γ_2 are zero, though both state constraints are active at the terminal time. The maximum condition (17) for the control
yields the following switching law for the control vector u = (u_1, u_2, u_3, u_4), where the switching functions σ_i(t), i = 1, 2, 3, 4, are defined in (15):

    u_i(t) = { u_i,max,   if σ_i(t) > 0,
               u_i,min,   if σ_i(t) < 0,
               singular,  if σ_i(t) = 0 on [t_en, t_ex] ⊂ [0, t_f].    (23)

Here, t_en, resp. t_ex, means the entry-time, resp. exit-time, of a singular arc. On interior arcs with h(x(t)) > 0, one obtains further information on a singular control u_i by differentiating the switching relation σ_i(t) = 0, t ∈ [t_en, t_ex]. We refrain from discussing this procedure in detail. On a boundary arc, the following property is noteworthy. If the component u_i(t) of a boundary control lies in the interior of its control region, i.e., satisfies u_i,min < u_i(t) < u_i,max for t_en < t < t_ex, then the maximum condition (17) implies

    σ_i(t) = 0   for t_en < t < t_ex.    (24)

Hence, the boundary control u_i(t) formally behaves as a singular control. This property has been exploited in Maurer [11] to derive junction conditions for junctions between interior arcs and boundary arcs. Though the proof techniques in [11] have been developed only for the case of a scalar control, an inspection of the economic problem reveals that some junction results in [11] can be extended to the vector-valued control case considered here. In particular, it follows that the adjoint variables are continuous on [0, t_f], since the state constraints are of order one and the relevant control components are discontinuous at the entry times of the boundary arcs; cf. Corollary 5.2 (ii) and Theorem 5.4 in [11]. Thus the multipliers η(τ) in the jump condition (19) vanish.

4 Numerical solution and verification of necessary conditions

The optimal control and state trajectory will be determined in two steps by a hybrid numerical approach in which two direct optimization methods are combined. In the first step, the optimal control problem is discretized on a fixed grid.
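The first step can be illustrated on a toy problem of our own (the concern model itself needs a serious NLP solver): maximize x(T) − (c/2) ∫ u(t)² dt subject to ẋ = u, x(0) = 0, |u| ≤ 1, whose optimal control is the constant u* = 1/c. Euler discretization turns this into a box-constrained program in the grid values of u, solved here by projected gradient ascent standing in for the SQP and interior-point methods used in the paper:

```python
def solve_toy_ocp(T=1.0, N=50, c=2.0, iters=500, step=0.5):
    """Direct transcription of: maximize x(T) - (c/2) * integral(u^2),
    with x' = u, x(0) = 0 and -1 <= u <= 1, after Euler discretization."""
    h = T / N
    u = [0.0] * N                     # control values on the fixed grid
    for _ in range(iters):
        # gradient of the discretized objective w.r.t. each u_k
        grad = [h - c * h * uk for uk in u]
        # gradient ascent step, projected onto the box [-1, 1]
        u = [min(1.0, max(-1.0, uk + step * g)) for uk, g in zip(u, grad)]
    x_T = sum(h * uk for uk in u)     # Euler integration of x' = u from x(0) = 0
    return u, x_T
```

With c = 2 the iterates converge to u ≡ 0.5 and x(T) = 0.5, matching the analytic solution.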
Discretization leads to a large-scale optimization problem that may be solved by various nonlinear programming techniques. We have used two nonlinear programming implementations. Method 1: the control package NUDOCCCS by Büskens [3,4] with up to N = 1 grid points and a 4th order Runge-Kutta
integration scheme; Method 2: the programming language AMPL [6] and the interior-point code IPOPT developed by Wächter, Biegler [15], using up to N = 5 grid points with an Euler or Heun integration scheme.

Fig. 1. Left: interest rates ρ_K(t) (above) and ρ_m(t) + ρ_r(t); right: inflation rate i(t).

Both methods are capable of detecting the correct control switching structure. In addition, Method 2 provides rather accurate estimates for the switching and junction times between bang-bang and singular or boundary arcs. Moreover, Lagrange multipliers of the nonlinear programming problems can be identified with the values at grid points of the adjoint variables and multipliers for the state constraints.

The second step is a refinement step, where the switching and junction times are determined with higher accuracy. Rather than optimizing the junction times directly, the arclengths of bang-bang or singular arcs are treated as additional optimization variables. The implementation relies on a time-scaling and multiprocess control technique described in Büskens, Pesch, Winderl [5]. A simplified approach avoiding the multiprocess formulation may be found in Maurer, Büskens, Kim, Kaya [13]. Both methods and the refinement step have been tested in the diploma theses of Balzer [1] and Lang [9].

Now we present optimal control solutions for two data sets. The interest rates ρ_K(t), ρ_m(t) + ρ_r(t) and the inflation rate i(t) given in Table 1 are depicted in Figure 1.

4.1 Solution for constant price p = 0.5, δ = 0.44 and constant wage ω = 2

Let us denote this data set by Data-1. The computed control u = (S_c, L_c, Y_c, I) has a rather complicated switching structure with 8 bang-bang and singular subarcs; cf. Table 2 with obvious notations. We obtain the following switching and junction times: t_1 = t_2 = 0.5, t_3 = 0.5666, t_4 = , t_5 = , t_6 = 0.671, t_7 = 8.94.
Table 2. Data-1: structure of the optimal control with bang-bang, singular and boundary arcs.

    t            S_c       L_c       Y_c       I         5 ≤ S        Y ≤ 0.8X
    [0, t_1]     min       max       min       min       non-active   non-active
    [t_1, t_2]   min       max       max       min       non-active   non-active
    [t_2, t_3]   0         max       max       min       active       non-active
    [t_3, t_4]   0         min       max       min       active       non-active
    [t_4, t_5]   0         min       singular  min       active       non-active
    [t_5, t_6]   0         min       singular  singular  active       active
    [t_6, t_7]   0         singular  singular  singular  active       active
    [t_7, t_f]   0         max       singular  singular  active       active

Fig. 2. Stock S(t), employment L(t) and loan capital Y(t).

The optimal functional value is Φ(x(t_f)) = . The optimal state trajectories x(t) = (S(t), L(t), Y(t), X(t), X_m(t), X_r(t)) are shown in Figs. 2 and 3. The control u(t) and the switching functions σ(t) are displayed in Figs. 4-7. The adjoint variables λ_L and λ_Y are shown in Figures 5, 6, while the adjoints λ_S, λ_X, λ_Xm and the multiplier ν_2 for the state constraint Y ≤ κX are displayed in Figs. 8, 9. The adjoint variable λ_Xr vanishes identically. The computed initial values of the adjoints are λ_S(0) = , λ_L(0) = , λ_Y(0) = , λ_X(0) = , λ_Xm(0) = , λ_Xr(0) = 0. The adjoint variables are continuous in [0, t_f]. Hence, the multipliers η(τ) in the jump condition (19) vanish. The terminal value is λ(t_f) = ( , 0., 0., 1., 1., 0.) ∈ IR^6, which shows that the transversality condition (20) is satisfied with the multiplier λ_0 = 1 and multipliers γ_1 = γ_2 = 0. The behavior of the switching functions (15) is in perfect agreement with the
Fig. 3. Equity capital X(t), alternative assets X_m(t) and risk premium X_r(t).
Fig. 4. Stock control S_c(t) and switching function σ_Sc(t).
Fig. 5. Employment control L_c(t) and switching function σ_Lc(t) = λ_L(t).
Fig. 6. Loan capital control Y_c(t) and switching function σ_Yc(t) = λ_Y(t).

control law (23) and the control structure in Table 2, since we have

    σ_Sc(t) < 0 for t < t_2,   σ_Sc(t) = 0 for t_2 ≤ t ≤ t_f,
    σ_I(t)  < 0 for t < t_5,   σ_I(t)  = 0 for t_5 ≤ t ≤ t_f,
    σ_Lc(t) > 0 for t < t_3,   σ_Lc(t) < 0 for t_3 < t < t_6,
    σ_Lc(t) = 0 for t_6 ≤ t ≤ t_7,   σ_Lc(t) > 0 for t_7 ≤ t ≤ t_f,
    σ_Yc(t) < 0 for t < t_1,   σ_Yc(t) > 0 for t_1 < t < t_4,   σ_Yc(t) = 0 for t_4 ≤ t ≤ t_f.
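The comparison between the sign pattern of σ and Table 2 mechanizes the switching law (23). A hedged sketch of such a selector (the tolerance handling is ours; in practice a switching function that stays below the tolerance over a whole interval signals a singular or boundary arc):

```python
def control_from_switching(sigma, u_min, u_max, tol=1e-8):
    """Switching law (23) for one component of a control-affine problem."""
    if sigma > tol:
        return u_max      # maximizing sigma * u over the box -> upper bound
    if sigma < -tol:
        return u_min      # -> lower bound
    return None           # |sigma| <= tol: candidate singular or boundary arc
```

Applying it pointwise to computed switching functions reproduces the bang-bang entries of Table 2.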
Fig. 7. Investment control I(t) and switching function σ_I(t).
Fig. 8. Adjoint variables λ_S(t), λ_X(t).
Fig. 9. Adjoint λ_Xm(t) and multiplier ν_2(t) for the constraint Y(t) ≤ 0.8 X(t).
Fig. 10. Zoom into the switching functions σ_Lc(t) and σ_Yc(t).

In addition, the switching functions satisfy the so-called strict bang-bang property: σ_k(t_j) ≠ 0 holds at any switching time t_j between bang-bang arcs of the control component u_k. Figure 10 zooms into the switching functions σ_Lc(t) and σ_Yc(t) to demonstrate in greater detail that (a) the control L_c(t) has a junction of a singular arc with a bang-bang arc at t_7, and (b) the control Y_c(t) switches between bang-bang arcs at t_1 and has a junction with a singular arc at t_4.
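The junction times reported in this section come out of the refinement step, in which arc durations replace the junction times as optimization variables. The core of the time-scaling transformation of [5,13] is that each arc is mapped onto a unit interval, so the discretization grid stays fixed while the durations vary. A sketch with our own names, not the actual implementation:

```python
def integrate_multi_arc(f, x0, arc_controls, durations, steps_per_arc=100):
    """Euler-integrate x' = f(x, u) across consecutive arcs, each arc j
    rescaled to s in [0, 1] so that dx/ds = durations[j] * f(x, u_j).
    The durations can then be optimized while the grid stays fixed."""
    x = list(x0)
    h = 1.0 / steps_per_arc
    for u_j, d_j in zip(arc_controls, durations):
        for _ in range(steps_per_arc):
            x = [xi + h * d_j * fi for xi, fi in zip(x, f(x, u_j))]
    return x
```

For ẋ = u with a bang control +1 on an arc of duration 2 followed by −1 on an arc of duration 1, the endpoint is x = 1; the endpoint depends smoothly on the two durations, which is what the refinement step exploits.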
Table 3. Data-2: structure of the optimal control with bang-bang, singular and boundary arcs.

    t            S_c   L_c   Y_c       I         5 ≤ S        Y ≤ 0.8X
    [0, t_1]     min   max   min       max       non-active   non-active
    [t_1, t_2]   min   max   max       min       non-active   non-active
    [t_2, t_3]   min   max   singular  singular  non-active   active
    [t_3, t_4]   0     max   singular  singular  active       active
    [t_4, t_5]   0     min   singular  singular  active       active
    [t_5, t_f]   0     max   singular  singular  active       active

4.2 Solution for variable price p(t) = 0.5 + 0.1 sin(2πt/8), depreciation rate δ = 0.322 and increasing wage ω(t) = 2 exp(0.2 t)

We choose a data set, denoted by Data-2, which differs significantly from that in Section 4.1 by considering the variable price p(t) = 0.5 + 0.1 sin(2πt/8), the depreciation rate δ = 0.322 and the increasing wage ω(t) = 2 exp(0.2 t); cf. [1]. Then the computed control u = (S_c, L_c, Y_c, I) is a combination of 6 bang-bang and singular subarcs that are described in Table 3. We obtain the switching and junction times t_1 = 0.1661, t_2 = 0.3842, t_3 = 0.5, t_4 = 2.85, t_5 = 6.2, and the optimal functional value Φ(x(t_f)) = . The optimal state trajectories x(t) = (S(t), L(t), Y(t), X(t), X_m(t), X_r(t)) are shown in Figs. 11 and 12, while Figs. 13-16 depict the optimal control components jointly with the associated switching functions. The adjoint variables λ ∈ IR^6 and the multiplier ν_2 for the state constraint Y ≤ κX are displayed in Figs. 14, 15, 17, 18. Again, we have λ_Xr(t) ≡ 0 in [0, t_f].

Fig. 11. Data-2: Stock S(t), employment L(t) and loan capital Y(t).
Fig. 12. Equity capital X(t), alternative assets X_m(t) and risk premium X_r(t).
Fig. 13. Stock control S_c(t) and switching function σ_Sc(t).
Fig. 14. Employment control L_c(t) and switching function σ_Lc(t) = λ_L(t).
Fig. 15. Loan capital control Y_c(t) and switching function σ_Yc(t) = λ_Y(t).

The computed initial values of the adjoints are λ_S(0) = , λ_L(0) = , λ_Y(0) = , λ_X(0) = , λ_Xm(0) = , λ_Xr(0) = 0, while the terminal value is λ(t_f) = ( , 0., 0., 1., 1., 0.) ∈ IR^6. The switching functions (15) obey the control law (23), resp. the control
Fig. 16. Investment control I(t) and switching function σ_I(t).
Fig. 17. Adjoint variables λ_S(t), λ_X(t).
Fig. 18. Adjoint variable λ_Xm(t) and multiplier ν_2(t) for the constraint Y(t) ≤ 0.8 X(t).

structure in Table 3, and also satisfy the strict bang-bang property at switching times between bang-bang arcs.

5 Verification of sufficient optimality conditions

We are going to show that the sufficient optimality conditions of Arrow type in Hartl, Sethi, Vickson [7], Theorem 8.2, hold for both controls presented in Sections 4.1 and 4.2. The first assumption in Theorem 8.2 of [7] requires that the necessary conditions be satisfied with the multiplier λ_0 = 1 in the transversality condition (22). This property holds as stated earlier in Section 4; cf. Figs. 5, 6, 8, 9 and Figs. 14, 15, 17, 18. The function

    Φ(x, t_f) = X + X_m + S (1 − τ) p(t_f)/d(t_f)
defining the cost functional (8) is linear in x and, hence, concave in x. Note that we did not treat the discount rate d(t) defined by equation (3) as an auxiliary state variable x_7, as was done in [8,16]. This additional state variable would destroy the concavity of the function Φ(x, t_f). The crucial condition then is the property that the maximized Hamiltonian

    H*(x, λ, t) = max_{u ∈ U} H(x, u, λ, t)

is concave in x for all (λ(t), t), t ∈ [0, t_f]. It can readily be seen from (13) and (14) that the maximized Hamiltonian is given by

    H*(x, λ(t), t) = λ_X(t) (1 − τ) p(t) F(x)/d(t) + R*(t),    (25)

where F(x) = F(L, Y, X) = α (X + Y)^{α_K} L^{α_L} is the Cobb-Douglas production function (1) and R*(t) does not depend on the state variable x. The production function F(x) = F(L, Y, X) is concave for L > 0, Y > 0, X > 0 in view of the assumption 0 < α, α_K, α_L and α_K + α_L < 1. This follows from the fact that the Hessian D²_xx F(x) is negative semi-definite, since four eigenvalues are zero and two eigenvalues are negative. Moreover, Figs. 8 and 17 show that λ_X(t) > 0 holds for t ∈ [0, t_f]. Since (1 − τ) p(t)/d(t) > 0, we finally conclude that the maximized Hamiltonian H*(x, λ(t), t) in (25) is concave in x, which confirms that the computed controls are optimal.

6 Comparison and Conclusion

The complicated control structure for Data-1 shows that many switchings between bang-bang arcs occur at the very beginning of the planning period, more precisely for t ≤ t_6 = 0.671. The reason for such multiple switchings may be that the initial values of loan and equity capital are not chosen properly. For the largest part of the planning period, namely for t_6 = 0.671 ≤ t ≤ t_7 = 8.94, all controls are singular and take values in the interior of the control region. Towards the end of the planning period, the employment rate L_c(t) is increasing and takes its maximum value on the final arc [t_7, t_f].
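The concavity of F claimed in Section 5 can be spot-checked numerically via the midpoint inequality f((p + q)/2) ≥ (f(p) + f(q))/2, which every concave function satisfies. The check below is ours and uses the Table 1 values α = 1, α_K = 0.35, α_L = 0.5:

```python
def F(L, Y, X, alpha=1.0, a_K=0.35, a_L=0.5):
    """Cobb-Douglas production function (1)."""
    return alpha * (X + Y) ** a_K * L ** a_L

def midpoint_concave(f, p, q, tol=1e-12):
    """Midpoint inequality f((p+q)/2) >= (f(p) + f(q))/2, a necessary
    condition for concavity, tested for one pair of points."""
    m = tuple((a + b) / 2.0 for a, b in zip(p, q))
    return f(*m) + tol >= (f(*p) + f(*q)) / 2.0
```

With α_K + α_L < 1 the inequality holds on the positive orthant; with α_K + α_L > 1 it already fails along rays through the origin, in line with the eigenvalue argument in the text.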
The optimal solution for Data-2 is significantly different from the Data-1 solution, as can be clearly seen in the behavior of the employment control L_c(t). This is due to the variable price function p(t), which is increasing in the periods [0, 2] and [6, 10], but decreasing in the period [2, 6]. The employment control L_c(t) is bang-bang and takes its minimum negative value L_c,min = −1, i.e. adopts a maximal dismissal rate, in the period [t_4, t_5] = [2.85, 6.2], before it switches to maximal hiring in the remaining planning period. When the price p(t) is decreasing, the loan capital control Y_c(t) and the investment control I(t) have significantly smaller values than those for Data-1.
Due to the complexity of the control model, the complicated control structure cannot be determined from a detailed discussion of the Maximum Principle alone. We have presented a hybrid numerical approach consisting of two consecutive direct optimization methods which yield control and state variables as well as junction times between bang-bang and singular arcs. Moreover, adjoint variables and multipliers associated with state constraints can be identified with Lagrange multipliers of the optimization problems. This allows us to verify necessary optimality conditions a posteriori. Using this approach, optimal solutions can be computed for various other data scenarios in the microeconomic control problem, e.g., for the risk premium rate ρ_r^low(t) for daring investors. In all cases, the computed solution satisfies sufficient optimality conditions, since the maximized Hamiltonian turns out to be a concave function of the state variable.

Acknowledgement: We are grateful to Nadja Balzer [1] and Matthias Lang [9] for numerical assistance with solving the control problems in Section 4.

References

[1] Balzer, N., Optimale Steuerung ökonomischer Prozesse am Beispiel eines komplexen Unternehmensmodells mit bang-bang, singulären Steuerungen und Zustandsbeschränkungen, Diploma Thesis, Institut für Numerische und Angewandte Mathematik, Universität Münster, 2006.

[2] Büskens, C., Direkte Optimierungsmethoden und Sensitivitätsanalyse für optimale Steuerprozesse mit Steuer- und Zustandsbeschränkungen, Dissertation, Institut für Numerische und Angewandte Mathematik, Universität Münster.

[3] Büskens, C., and Maurer, H., SQP methods for solving optimal control problems with control and state constraints: adjoint variables, sensitivity analysis and real-time control, J. of Computational and Applied Mathematics 120 (2000).
[4] Büskens, C., and Maurer, H., Sensitivity analysis and real-time control of parametric optimal control problems using nonlinear programming methods, in: Online Optimization of Large Scale Systems (Grötschel, M., Krumke, S.O., and Rambau, J., eds.), Springer-Verlag, Berlin, 2001.

[5] Büskens, C., Pesch, H.J., and Winderl, S., Real-time solutions of bang-bang and singular optimal control problems, in: Online Optimization of Large Scale Systems (Grötschel, M., Krumke, S.O., and Rambau, J., eds.), Springer-Verlag, Berlin, 2001.

[6] Fourer, R., Gay, D.M., and Kernighan, B.W., AMPL: A Modeling Language for Mathematical Programming, Duxbury Press, Brooks/Cole Publishing Company.
[7] Hartl, R.F., Sethi, S.P., and Vickson, R.G., A survey of the maximum principles for optimal control problems with state constraints, SIAM Review 37 (1995).

[8] Koslik, B., and Breitner, M.H., An optimal control problem in economics with four linear controls, J. of Optimization Theory and Applications 94 (1997).

[9] Lang, M., Schaltstrukturerkennung und Schaltpunkt-Optimierung bei Optimalsteuerungsproblemen mit linear eingehenden Steuerungen am Beispiel eines Steuerungsproblems aus der Mikroökonomie, Diploma Thesis, Fakultät für Mathematik und Physik, Universität Bayreuth, 2006.

[10] Lesourne, J., and Leban, R., La substitution capital-travail au cours de la croissance de l'entreprise, Revue d'Economie Politique 4 (1978).

[11] Maurer, H., On optimal control problems with bounded state variables and control appearing linearly, SIAM J. Control and Optimization 15 (1977).

[12] Maurer, H., On the minimum principle for optimal control problems with state constraints, Schriftenreihe des Rechenzentrums, Nr. 41, Universität Münster.

[13] Maurer, H., Büskens, C., Kim, J.-H.R., and Kaya, Y.C., Optimization methods for the verification of second-order sufficient conditions for bang-bang controls, Optimal Control Applications and Methods 26 (2005).

[14] Maurer, H., Kim, J.-H.R., and Vossen, G., On a state constrained control problem in optimal production and maintenance, in: Optimal Control and Dynamic Games, Applications in Finance, Management Science and Economics (Deissenberg, C., and Hartl, R.F., eds.), Springer-Verlag, 2005.

[15] Wächter, A., and Biegler, L.T., On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming, Mathematical Programming 106 (2006).

[16] Winderl, S., and Naumer, B., On a state constrained control problem in economics with four linear controls, Schwerpunktprogramm der Deutschen Forschungsgemeinschaft "Echtzeitoptimierung Großer Systeme", Report 9, 2000.
CHAPTER 7 APPLICATIONS TO MARKETING Chapter 7 p. 1/53 APPLICATIONS TO MARKETING State Equation: Rate of sales expressed in terms of advertising, which is a control variable Objective: Profit maximization
More informationOptimal Control. Macroeconomics II SMU. Ömer Özak (SMU) Economic Growth Macroeconomics II 1 / 112
Optimal Control Ömer Özak SMU Macroeconomics II Ömer Özak (SMU) Economic Growth Macroeconomics II 1 / 112 Review of the Theory of Optimal Control Section 1 Review of the Theory of Optimal Control Ömer
More informationOptimization techniques for autonomous underwater vehicles: a practical point of view
Optimization techniques for autonomous underwater vehicles: a practical point of view M. Chyba, T. Haberkorn Department of Mathematics University of Hawaii, Honolulu, HI 96822 Email: mchyba@math.hawaii.edu,
More informationCHAPTER 2 THE MAXIMUM PRINCIPLE: CONTINUOUS TIME. Chapter2 p. 1/67
CHAPTER 2 THE MAXIMUM PRINCIPLE: CONTINUOUS TIME Chapter2 p. 1/67 THE MAXIMUM PRINCIPLE: CONTINUOUS TIME Main Purpose: Introduce the maximum principle as a necessary condition to be satisfied by any optimal
More informationSecond Order Sufficient Optimality Conditions for a Control Problem with Continuous and Bang-Bang Control Components: Riccati Approach
Second Order Sufficient Optimality Conditions for a Control Problem with Continuous and Bang-Bang Control Components: Riccati Approach Nikolai P. Osmolovskii and Helmut Maurer Abstract Second order sufficient
More informationHigh-Speed Switch-On of a Semiconductor Gas Discharge Image Converter Using Optimal Control Methods
Journal of Computational Physics 170, 395 414 (2001) doi:10.1006/jcph.2001.6741, available online at http://www.idealibrary.com on High-Speed Switch-On of a Semiconductor Gas Discharge Image Converter
More informationhave demonstrated that an augmented Lagrangian techniques combined with a SQP approach lead to first order conditions and provide an efficient numeric
Optimization Techniques for Solving Elliptic Control Problems with Control and State Constraints. Part : Boundary Control HELMUT MAURER Westfalische Wilhelms-Universitat Munster, Institut fur Numerische
More informationRamsey Cass Koopmans Model (1): Setup of the Model and Competitive Equilibrium Path
Ramsey Cass Koopmans Model (1): Setup of the Model and Competitive Equilibrium Path Ryoji Ohdoi Dept. of Industrial Engineering and Economics, Tokyo Tech This lecture note is mainly based on Ch. 8 of Acemoglu
More informationproblem. max Both k (0) and h (0) are given at time 0. (a) Write down the Hamilton-Jacobi-Bellman (HJB) Equation in the dynamic programming
1. Endogenous Growth with Human Capital Consider the following endogenous growth model with both physical capital (k (t)) and human capital (h (t)) in continuous time. The representative household solves
More informationApplications of Bang-Bang and Singular Control Problems in B. Problems in Biology and Biomedicine
Applications of Bang-Bang and Singular Control Problems in Biology and Biomedicine University of Münster Institute of Computational and Applied Mathematics South Pacific Continuous Optimization Meeting
More informationminimize x subject to (x 2)(x 4) u,
Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for
More informationContinuation from a flat to a round Earth model in the coplanar orbit transfer problem
Continuation from a flat to a round Earth model in the coplanar orbit transfer problem M. Cerf 1, T. Haberkorn, Emmanuel Trélat 1 1 EADS Astrium, les Mureaux 2 MAPMO, Université d Orléans First Industrial
More informationLecture 6: Discrete-Time Dynamic Optimization
Lecture 6: Discrete-Time Dynamic Optimization Yulei Luo Economics, HKU November 13, 2017 Luo, Y. (Economics, HKU) ECON0703: ME November 13, 2017 1 / 43 The Nature of Optimal Control In static optimization,
More informationJoint work with Nguyen Hoang (Univ. Concepción, Chile) Padova, Italy, May 2018
EXTENDED EULER-LAGRANGE AND HAMILTONIAN CONDITIONS IN OPTIMAL CONTROL OF SWEEPING PROCESSES WITH CONTROLLED MOVING SETS BORIS MORDUKHOVICH Wayne State University Talk given at the conference Optimization,
More informationPriority Program 1253
Deutsche Forschungsgemeinschaft Priority Program 1253 Optimization with Partial Differential Equations Klaus Deckelnick and Michael Hinze A note on the approximation of elliptic control problems with bang-bang
More informationZentrum für Technomathematik
Zentrum für Technomathematik Fachbereich 3 Mathematik und Informatik Computational Parametric Sensitivity Analysis of Perturbed PDE Optimal Control Problems with State and Control Constraints Christof
More informationPrinciples of Optimal Control Spring 2008
MIT OpenCourseWare http://ocw.mit.edu 16.323 Principles of Optimal Control Spring 2008 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 16.323 Lecture
More informationA Gauss Lobatto quadrature method for solving optimal control problems
ANZIAM J. 47 (EMAC2005) pp.c101 C115, 2006 C101 A Gauss Lobatto quadrature method for solving optimal control problems P. Williams (Received 29 August 2005; revised 13 July 2006) Abstract This paper proposes
More informationDirect Methods. Moritz Diehl. Optimization in Engineering Center (OPTEC) and Electrical Engineering Department (ESAT) K.U.
Direct Methods Moritz Diehl Optimization in Engineering Center (OPTEC) and Electrical Engineering Department (ESAT) K.U. Leuven Belgium Overview Direct Single Shooting Direct Collocation Direct Multiple
More informationB Abstract. A very fast numerical method is developed for the computation of neighboring optimum feedback controls. This method is applicable to a gen
A A New General Guidance Method in Constrained Optimal Control Part 1: Numerical Method 1 B. KUGELMANN AND H. J. PESCH 3 Communicated by D. G. Hull 1 This research was supported in part by the Deutsche
More informationStrong and Weak Augmentability in Calculus of Variations
Strong and Weak Augmentability in Calculus of Variations JAVIER F ROSENBLUETH National Autonomous University of Mexico Applied Mathematics and Systems Research Institute Apartado Postal 20-126, Mexico
More informationNOTES ON CALCULUS OF VARIATIONS. September 13, 2012
NOTES ON CALCULUS OF VARIATIONS JON JOHNSEN September 13, 212 1. The basic problem In Calculus of Variations one is given a fixed C 2 -function F (t, x, u), where F is defined for t [, t 1 ] and x, u R,
More informationUNIVERSITY OF VIENNA
WORKING PAPERS Cycles and chaos in the one-sector growth model with elastic labor supply Gerhard Sorger May 2015 Working Paper No: 1505 DEPARTMENT OF ECONOMICS UNIVERSITY OF VIENNA All our working papers
More informationTheory and Applications of Optimal Control Problems with Tim
Theory and Applications of Optimal Control Problems with Time Delays University of Münster Applied Mathematics: Institute of Analysis and Numerics Université Pierre et Marie Curie, Paris, March 1, 217
More informationECON 582: Dynamic Programming (Chapter 6, Acemoglu) Instructor: Dmytro Hryshko
ECON 582: Dynamic Programming (Chapter 6, Acemoglu) Instructor: Dmytro Hryshko Indirect Utility Recall: static consumer theory; J goods, p j is the price of good j (j = 1; : : : ; J), c j is consumption
More informationMarkov Perfect Equilibria in the Ramsey Model
Markov Perfect Equilibria in the Ramsey Model Paul Pichler and Gerhard Sorger This Version: February 2006 Abstract We study the Ramsey (1928) model under the assumption that households act strategically.
More informationEconomic Growth: Lecture 9, Neoclassical Endogenous Growth
14.452 Economic Growth: Lecture 9, Neoclassical Endogenous Growth Daron Acemoglu MIT November 28, 2017. Daron Acemoglu (MIT) Economic Growth Lecture 9 November 28, 2017. 1 / 41 First-Generation Models
More informationIntroduction to Continuous-Time Dynamic Optimization: Optimal Control Theory
Econ 85/Chatterjee Introduction to Continuous-ime Dynamic Optimization: Optimal Control heory 1 States and Controls he concept of a state in mathematical modeling typically refers to a specification of
More informationProduction and Relative Consumption
Mean Field Growth Modeling with Cobb-Douglas Production and Relative Consumption School of Mathematics and Statistics Carleton University Ottawa, Canada Mean Field Games and Related Topics III Henri Poincaré
More informationSuggested Solutions to Homework #3 Econ 511b (Part I), Spring 2004
Suggested Solutions to Homework #3 Econ 5b (Part I), Spring 2004. Consider an exchange economy with two (types of) consumers. Type-A consumers comprise fraction λ of the economy s population and type-b
More informationNew Notes on the Solow Growth Model
New Notes on the Solow Growth Model Roberto Chang September 2009 1 The Model The firstingredientofadynamicmodelisthedescriptionofthetimehorizon. In the original Solow model, time is continuous and the
More information(a) Write down the Hamilton-Jacobi-Bellman (HJB) Equation in the dynamic programming
1. Government Purchases and Endogenous Growth Consider the following endogenous growth model with government purchases (G) in continuous time. Government purchases enhance production, and the production
More informationVolume 30, Issue 3. Ramsey Fiscal Policy and Endogenous Growth: A Comment. Jenn-hong Tang Department of Economics, National Tsing-Hua University
Volume 30, Issue 3 Ramsey Fiscal Policy and Endogenous Growth: A Comment Jenn-hong Tang Department of Economics, National Tsing-Hua University Abstract Recently, Park (2009, Economic Theory 39, 377--398)
More informationThe Growth Model in Continuous Time (Ramsey Model)
The Growth Model in Continuous Time (Ramsey Model) Prof. Lutz Hendricks Econ720 September 27, 2017 1 / 32 The Growth Model in Continuous Time We add optimizing households to the Solow model. We first study
More informationSECOND ORDER SUFFICIENT CONDITIONS FOR TIME-OPTIMAL BANG-BANG CONTROL
SIAM J. CONTROL OPTIM. Vol. 42, No. 6, pp. 2239 2263 c 24 Society for Industrial and Applied Mathematics SECOND ORDER SUFFICIENT CONDITIONS FOR TIME-OPTIMAL BANG-BANG CONTROL HELMUT MAURER AND NIKOLAI
More informationMonetary Economics: Solutions Problem Set 1
Monetary Economics: Solutions Problem Set 1 December 14, 2006 Exercise 1 A Households Households maximise their intertemporal utility function by optimally choosing consumption, savings, and the mix of
More informationEconomic Growth: Lectures 5-7, Neoclassical Growth
14.452 Economic Growth: Lectures 5-7, Neoclassical Growth Daron Acemoglu MIT November 7, 9 and 14, 2017. Daron Acemoglu (MIT) Economic Growth Lectures 5-7 November 7, 9 and 14, 2017. 1 / 83 Introduction
More informationNumerical Methods for Embedded Optimization and Optimal Control. Exercises
Summer Course Numerical Methods for Embedded Optimization and Optimal Control Exercises Moritz Diehl, Daniel Axehill and Lars Eriksson June 2011 Introduction This collection of exercises is intended to
More informationEndogenous Growth Theory
Endogenous Growth Theory Lecture Notes for the winter term 2010/2011 Ingrid Ott Tim Deeken October 21st, 2010 CHAIR IN ECONOMIC POLICY KIT University of the State of Baden-Wuerttemberg and National Laboratory
More informationOptimal Control and Applications
V. M. Becerra - Session 1-2nd AstroNet-II Summer School Optimal Control and Applications Session 1 2nd AstroNet-II Summer School Victor M. Becerra School of Systems Engineering University of Reading 5-6
More informationAn Inexact Newton Method for Optimization
New York University Brown Applied Mathematics Seminar, February 10, 2009 Brief biography New York State College of William and Mary (B.S.) Northwestern University (M.S. & Ph.D.) Courant Institute (Postdoc)
More informationOPTIMAL CONTROL THEORY: APPLICATIONS TO MANAGEMENT SCIENCE AND ECONOMICS
OPTIMAL CONTROL THEORY: APPLICATIONS TO MANAGEMENT SCIENCE AND ECONOMICS (SECOND EDITION, 2000) Suresh P. Sethi Gerald. L. Thompson Springer Chapter 1 p. 1/37 CHAPTER 1 WHAT IS OPTIMAL CONTROL THEORY?
More informationMinimizing Tumor Volume for a Mathematical Model of Anti-Angiogenesis with Linear Pharmacokinetics
Minimizing Tumor Volume for a Mathematical Model of Anti-Angiogenesis with Linear Pharmacokinetics Urszula Ledzewicz 1, Helmut Maurer, and Heinz Schättler 3 1 Dept. of Mathematics and Statistics, Southern
More informationLecture 10: Singular Perturbations and Averaging 1
Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.243j (Fall 2003): DYNAMICS OF NONLINEAR SYSTEMS by A. Megretski Lecture 10: Singular Perturbations and
More informationLeveraging Dynamical System Theory to Incorporate Design Constraints in Multidisciplinary Design
Leveraging Dynamical System Theory to Incorporate Design Constraints in Multidisciplinary Design Bradley A. Steinfeldt and Robert D. Braun Georgia Institute of Technology, Atlanta, GA 3332-15 This work
More informationWhere is matrix multiplication locally open?
Linear Algebra and its Applications 517 (2017) 167 176 Contents lists available at ScienceDirect Linear Algebra and its Applications www.elsevier.com/locate/laa Where is matrix multiplication locally open?
More informationDifferential Games, Distributed Systems, and Impulse Control
Chapter 12 Differential Games, Distributed Systems, and Impulse Control In previous chapters, we were mainly concerned with the optimal control problems formulated in Chapters 3 and 4 and their applications
More informationCHAPTER 9 MAINTENANCE AND REPLACEMENT. Chapter9 p. 1/66
CHAPTER 9 MAINTENANCE AND REPLACEMENT Chapter9 p. 1/66 MAINTENANCE AND REPLACEMENT The problem of determining the lifetime of an asset or an activity simultaneously with its management during that lifetime
More informationRegularity and approximations of generalized equations; applications in optimal control
SWM ORCOS Operations Research and Control Systems Regularity and approximations of generalized equations; applications in optimal control Vladimir M. Veliov (Based on joint works with A. Dontchev, M. Krastanov,
More informationPontryagin s Minimum Principle 1
ECE 680 Fall 2013 Pontryagin s Minimum Principle 1 In this handout, we provide a derivation of the minimum principle of Pontryagin, which is a generalization of the Euler-Lagrange equations that also includes
More informationLegendre Pseudospectral Approximations of Optimal Control Problems
Legendre Pseudospectral Approximations of Optimal Control Problems I. Michael Ross 1 and Fariba Fahroo 1 Department of Aeronautics and Astronautics, Code AA/Ro, Naval Postgraduate School, Monterey, CA
More information1 Introduction. Erik J. Balder, Mathematical Institute, University of Utrecht, Netherlands
Comments on Chapters 20 and 21 (Calculus of Variations and Optimal Control Theory) of Introduction to Mathematical Economics (third edition, 2008) by E.T. Dowling Erik J. Balder, Mathematical Institute,
More informationOptimal control theory with applications to resource and environmental economics
Optimal control theory with applications to resource and environmental economics Michael Hoel, August 10, 2015 (Preliminary and incomplete) 1 Introduction This note gives a brief, non-rigorous sketch of
More informationFINITE-DIFFERENCE APPROXIMATIONS AND OPTIMAL CONTROL OF THE SWEEPING PROCESS. BORIS MORDUKHOVICH Wayne State University, USA
FINITE-DIFFERENCE APPROXIMATIONS AND OPTIMAL CONTROL OF THE SWEEPING PROCESS BORIS MORDUKHOVICH Wayne State University, USA International Workshop Optimization without Borders Tribute to Yurii Nesterov
More informationarxiv: v1 [math.pr] 24 Sep 2018
A short note on Anticipative portfolio optimization B. D Auria a,b,1,, J.-A. Salmerón a,1 a Dpto. Estadística, Universidad Carlos III de Madrid. Avda. de la Universidad 3, 8911, Leganés (Madrid Spain b
More informationMacroeconomics IV Problem Set I
14.454 - Macroeconomics IV Problem Set I 04/02/2011 Due: Monday 4/11/2011 1 Question 1 - Kocherlakota (2000) Take an economy with a representative, in nitely-lived consumer. The consumer owns a technology
More informationSeptember Math Course: First Order Derivative
September Math Course: First Order Derivative Arina Nikandrova Functions Function y = f (x), where x is either be a scalar or a vector of several variables (x,..., x n ), can be thought of as a rule which
More information14.461: Technological Change, Lecture 1
14.461: Technological Change, Lecture 1 Daron Acemoglu MIT September 8, 2016. Daron Acemoglu (MIT) Technological Change, Lecture 1 September 8, 2016. 1 / 60 Endogenous Technological Change Expanding Variety
More informationHamiltonian Systems of Negative Curvature are Hyperbolic
Hamiltonian Systems of Negative Curvature are Hyperbolic A. A. Agrachev N. N. Chtcherbakova Abstract The curvature and the reduced curvature are basic differential invariants of the pair: Hamiltonian system,
More informationModern Portfolio Theory with Homogeneous Risk Measures
Modern Portfolio Theory with Homogeneous Risk Measures Dirk Tasche Zentrum Mathematik Technische Universität München http://www.ma.tum.de/stat/ Rotterdam February 8, 2001 Abstract The Modern Portfolio
More informationOptimal Control of Differential Equations with Pure State Constraints
University of Tennessee, Knoxville Trace: Tennessee Research and Creative Exchange Masters Theses Graduate School 8-2013 Optimal Control of Differential Equations with Pure State Constraints Steven Lee
More informationInexact Newton Methods and Nonlinear Constrained Optimization
Inexact Newton Methods and Nonlinear Constrained Optimization Frank E. Curtis EPSRC Symposium Capstone Conference Warwick Mathematics Institute July 2, 2009 Outline PDE-Constrained Optimization Newton
More informationDYNAMIC LECTURE 5: DISCRETE TIME INTERTEMPORAL OPTIMIZATION
DYNAMIC LECTURE 5: DISCRETE TIME INTERTEMPORAL OPTIMIZATION UNIVERSITY OF MARYLAND: ECON 600. Alternative Methods of Discrete Time Intertemporal Optimization We will start by solving a discrete time intertemporal
More informationarxiv: v1 [math.oc] 1 Jan 2019
Optimal Control of Double Integrator with Minimum Total Variation C. Yalçın Kaya January 4, 209 arxiv:90.0049v [math.oc] Jan 209 Abstract We study the well-known minimum-energy control of double integrator,
More informationSolution by the Maximum Principle
292 11. Economic Applications per capita variables so that it is formally similar to the previous model. The introduction of the per capita variables makes it possible to treat the infinite horizon version
More informationMS&E 318 (CME 338) Large-Scale Numerical Optimization
Stanford University, Management Science & Engineering (and ICME) MS&E 318 (CME 338) Large-Scale Numerical Optimization 1 Origins Instructor: Michael Saunders Spring 2015 Notes 9: Augmented Lagrangian Methods
More informationDynamic Programming with Hermite Interpolation
Dynamic Programming with Hermite Interpolation Yongyang Cai Hoover Institution, 424 Galvez Mall, Stanford University, Stanford, CA, 94305 Kenneth L. Judd Hoover Institution, 424 Galvez Mall, Stanford University,
More informationChapter 15. Several of the applications of constrained optimization presented in Chapter 11 are. Dynamic Optimization
Chapter 15 Dynamic Optimization Several of the applications of constrained optimization presented in Chapter 11 are two-period discrete-time optimization problems. he objective function in these intertemporal
More information1 The Basic RBC Model
IHS 2016, Macroeconomics III Michael Reiter Ch. 1: Notes on RBC Model 1 1 The Basic RBC Model 1.1 Description of Model Variables y z k L c I w r output level of technology (exogenous) capital at end of
More informationHJB equations. Seminar in Stochastic Modelling in Economics and Finance January 10, 2011
Department of Probability and Mathematical Statistics Faculty of Mathematics and Physics, Charles University in Prague petrasek@karlin.mff.cuni.cz Seminar in Stochastic Modelling in Economics and Finance
More informationProblem 1 (30 points)
Problem (30 points) Prof. Robert King Consider an economy in which there is one period and there are many, identical households. Each household derives utility from consumption (c), leisure (l) and a public
More informationOptimality Conditions
Chapter 2 Optimality Conditions 2.1 Global and Local Minima for Unconstrained Problems When a minimization problem does not have any constraints, the problem is to find the minimum of the objective function.
More informationConstrained maxima and Lagrangean saddlepoints
Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory Winter 2018 Topic 10: Constrained maxima and Lagrangean saddlepoints 10.1 An alternative As an application
More informationHuman Capital and Economic Growth
Human Capital and Economic Growth Ömer Özak SMU Macroeconomics II Ömer Özak (SMU) Economic Growth Macroeconomics II 1 / 81 Human Capital and Economic Growth Human capital: all the attributes of workers
More informationOn Stopping Times and Impulse Control with Constraint
On Stopping Times and Impulse Control with Constraint Jose Luis Menaldi Based on joint papers with M. Robin (216, 217) Department of Mathematics Wayne State University Detroit, Michigan 4822, USA (e-mail:
More informationChapter 12 Ramsey Cass Koopmans model
Chapter 12 Ramsey Cass Koopmans model O. Afonso, P. B. Vasconcelos Computational Economics: a concise introduction O. Afonso, P. B. Vasconcelos Computational Economics 1 / 33 Overview 1 Introduction 2
More informationFormula Sheet for Optimal Control
Formula Sheet for Optimal Control Division of Optimization and Systems Theory Royal Institute of Technology 144 Stockholm, Sweden 23 December 1, 29 1 Dynamic Programming 11 Discrete Dynamic Programming
More informationA MODEL FOR THE LONG-TERM OPTIMAL CAPACITY LEVEL OF AN INVESTMENT PROJECT
A MODEL FOR HE LONG-ERM OPIMAL CAPACIY LEVEL OF AN INVESMEN PROJEC ARNE LØKKA AND MIHAIL ZERVOS Abstract. We consider an investment project that produces a single commodity. he project s operation yields
More informationFIRST- AND SECOND-ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH VANISHING CONSTRAINTS 1. Tim Hoheisel and Christian Kanzow
FIRST- AND SECOND-ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH VANISHING CONSTRAINTS 1 Tim Hoheisel and Christian Kanzow Dedicated to Jiří Outrata on the occasion of his 60th birthday Preprint
More informationNotes on Control Theory
Notes on Control Theory max t 1 f t, x t, u t dt # ẋ g t, x t, u t # t 0, t 1, x t 0 x 0 fixed, t 1 can be. x t 1 maybefreeorfixed The choice variable is a function u t which is piecewise continuous, that
More informationRobustness in Stochastic Programs with Risk Constraints
Robustness in Stochastic Programs with Risk Constraints Dept. of Probability and Mathematical Statistics, Faculty of Mathematics and Physics Charles University, Prague, Czech Republic www.karlin.mff.cuni.cz/~kopa
More informationAssumption 5. The technology is represented by a production function, F : R 3 + R +, F (K t, N t, A t )
6. Economic growth Let us recall the main facts on growth examined in the first chapter and add some additional ones. (1) Real output (per-worker) roughly grows at a constant rate (i.e. labor productivity
More informationThe Euler Method for Linear Control Systems Revisited
The Euler Method for Linear Control Systems Revisited Josef L. Haunschmied, Alain Pietrus, and Vladimir M. Veliov Research Report 2013-02 March 2013 Operations Research and Control Systems Institute of
More informationEconomic Growth (Continued) The Ramsey-Cass-Koopmans Model. 1 Literature. Ramsey (1928) Cass (1965) and Koopmans (1965) 2 Households (Preferences)
III C Economic Growth (Continued) The Ramsey-Cass-Koopmans Model 1 Literature Ramsey (1928) Cass (1965) and Koopmans (1965) 2 Households (Preferences) Population growth: L(0) = 1, L(t) = e nt (n > 0 is
More informationOn Some Optimal Control Problems for Electric Circuits
On Some Optimal Control Problems for Electric Circuits Kristof Altmann, Simon Stingelin, and Fredi Tröltzsch October 12, 21 Abstract Some optimal control problems for linear and nonlinear ordinary differential
More informationON THE SUSTAINABLE PROGRAM IN SOLOW S MODEL. 1. Introduction
ON THE SUSTAINABLE PROGRAM IN SOLOW S MODEL CEES WITHAGEN, GEIR B. ASHEIM, AND WOLFGANG BUCHHOLZ Abstract. We show that our general result (Withagen and Asheim [8]) on the converse of Hartwick s rule also
More informationAdvanced Economic Growth: Lecture 3, Review of Endogenous Growth: Schumpeterian Models
Advanced Economic Growth: Lecture 3, Review of Endogenous Growth: Schumpeterian Models Daron Acemoglu MIT September 12, 2007 Daron Acemoglu (MIT) Advanced Growth Lecture 3 September 12, 2007 1 / 40 Introduction
More information