Project Optimal Control: Optimal Control with State Variable Inequality Constraints (SVIC)

Project Optimal Control: Optimal Control with State Variable Inequality Constraints (SVIC) Mahdi Ghazaei, Meike Stemmann Automatic Control LTH Lund University

Problem Formulation
Find the maximum of the following cost functional subject to the constraints on the states and control signals:
J(u) = ∫_0^T F(x(t), u(t), t) dt + S(x(T), T)
ẋ(t) = f(x(t), u(t), t),  x(0) = x_0
g(x(t), u(t), t) ≥ 0
h(x(t), t) ≥ 0
a(x(T), T) ≥ 0
b(x(T), T) = 0

Outline:
State Constraints
Direct Adjoining Approach
Indirect Adjoining Approach
Example: Double Integrator
Numerical Solution by Interior-Point Method
Direct Approach
Conclusion

Constraint Qualification
For terminal constraints:
rank [ ∂a/∂x  diag(a) ; ∂b/∂x  0 ] = l_a + l_b,  with a: E^n × E → E^{l_a}, b: E^n × E → E^{l_b}
For mixed constraints:
rank [ ∂g/∂u  diag(g) ] = s,  with g: E^n × E^m × E → E^s
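As a concrete illustration (not part of the original slides), the mixed-constraint qualification can be checked numerically for the control constraints g = (1 − u, 1 + u) used in the double-integrator example later on; the rank of [∂g/∂u  diag(g)] equals s = 2 for every admissible u. A minimal Python sketch:

```python
# Minimal sketch: rank check of the mixed-constraint qualification
# rank[dg/du  diag(g)] = s for g(u) = (1 - u, 1 + u), s = 2.
import numpy as np

def cq_rank(u):
    g = np.array([1.0 - u, 1.0 + u])          # mixed (here pure control) constraints
    dg_du = np.array([[-1.0], [1.0]])         # dg/du as a column
    M = np.hstack([dg_du, np.diag(g)])        # [dg/du  diag(g)]
    return np.linalg.matrix_rank(M)

print([cq_rank(u) for u in (-1.0, 0.0, 1.0)])  # -> [2, 2, 2]
```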

Order of Pure State Constraints
h^0(x, u, t) = h(x, t)
h^1(x, u, t) = ḣ = h_x(x, t) f(x, u, t) + h_t(x, t)
h^2(x, u, t) = ḣ^1 = h^1_x(x, u, t) f(x, u, t) + h^1_t(x, u, t)
⋮
h^p(x, u, t) = ḣ^{p−1} = h^{p−1}_x(x, u, t) f(x, u, t) + h^{p−1}_t(x, u, t)
The state constraint is of order p if h^i_u(x, u, t) = 0 for 0 ≤ i ≤ p − 1 and h^p_u(x, u, t) ≠ 0.
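For intuition (my own illustration, not from the slides), the order of the pure state constraint h = c + x_2 in the double-integrator example below can be found by differentiating h along the dynamics ẋ_1 = x_2, ẋ_2 = u until u appears; it is of order 1 since h^1 = ḣ = u. A small sympy sketch:

```python
# Minimal sketch: compute the order of the pure state constraint h = c + x2
# for the double integrator x1' = x2, x2' = u (no explicit t-dependence).
import sympy as sp

x1, x2, u, c = sp.symbols('x1 x2 u c')
f = sp.Matrix([x2, u])           # dynamics f(x, u)
hk = c + x2                      # h^0 = h(x)

order = 0
while sp.diff(hk, u) == 0:       # h^i_u = 0: keep differentiating along trajectories
    hk = (sp.Matrix([hk]).jacobian([x1, x2]) * f)[0]   # h^{i+1} = h^i_x f
    hk = sp.simplify(hk)
    order += 1

print(order, hk)                 # -> 1 u  (first-order constraint)
```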

Direct Adjoining approach, Hamiltonian
Hamiltonian and Lagrangian (P-form):
H(x, u, λ_0, λ, t) = λ_0 F(x, u, t) + λ f(x, u, t)
L(x, u, λ_0, λ, µ, ν, t) = H(x, u, λ_0, λ, t) + µ g(x, u, t) + ν h(x, t)
Control region:
Ω(x, t) = { u ∈ E^m : g(x, u, t) ≥ 0 }

Direct Adjoining approach, Theorem Part 1
Let {x*(·), u*(·)} be an optimal pair for the problem on slide 2 over [0, T] such that u*(·) is right-continuous with left-hand limits, the constraint qualification holds for every triple {t, x*(t), u}, t ∈ [0, T], u ∈ Ω(x*(t), t), and assume that x*(t) has only finitely many junction times. Then there exist
a constant λ_0 ≥ 0,
a piecewise absolutely continuous costate trajectory λ(·) mapping [0, T] into E^n,
piecewise continuous multiplier functions µ(·) and ν(·) mapping [0, T] into E^s and E^q, respectively,
a vector η(τ_i) ∈ E^q for each point τ_i of discontinuity of λ(·), and
α ∈ E^{l_a}, β ∈ E^{l_b}, γ ∈ E^q, not all zero,
such that the following conditions hold almost everywhere:

Direct Adjoining approach, Theorem Part 2
Hamiltonian maximization:
u*(t) = arg max_{u ∈ Ω(x*(t), t)} H(x*(t), u, λ_0, λ(t), t)
L_u[t] = H_u[t] + µ(t) g_u[t] = 0
λ̇(t) = −L_x[t]
µ(t) ≥ 0, µ(t) g*[t] = 0
ν(t) ≥ 0, ν(t) h*[t] = 0
dH*[t]/dt = dL*[t]/dt = L_t[t] (= ∂L[t]/∂t)

Direct Adjoining approach, Theorem Part 3
At the terminal time T, the transversality conditions hold:
λ(T⁻) = λ_0 S_x[T] + α a_x[T] + β b_x[T] + γ h_x[T]
α ≥ 0, γ ≥ 0, α a*[T] = γ h*[T] = 0.
For any time τ in a boundary interval and for any contact time τ, the costate trajectory may have a discontinuity given by the following conditions:
λ(τ⁻) = λ(τ⁺) + η(τ) h_x[τ]
H[τ⁻] = H[τ⁺] − η(τ) h_t[τ]
η(τ) ≥ 0, η(τ) h*[τ] = 0.

Indirect Adjoining approach, Hamiltonian
Hamiltonian and Lagrangian (P-form):
H(x, u, λ_0, λ, t) = λ_0 F(x, u, t) + λ f(x, u, t)
L(x, u, λ_0, λ, µ, ν, t) = H(x, u, λ_0, λ, t) + µ g(x, u, t) + ν h^1(x, u, t)
Control region:
Ω(x, t) = { u ∈ E^m : g(x, u, t) ≥ 0, and h^1(x, u, t) ≥ 0 if h(x, t) = 0 }

Indirect Adjoining approach, Theorem Part 1
Let {x*(·), u*(·)} be an optimal pair for the problem on slide 2 such that x*(·) has only finitely many junction times and the strong constraint qualification holds. Then there exist
a constant λ_0 ≥ 0,
a piecewise absolutely continuous costate trajectory λ(·) mapping [0, T] into E^n,
piecewise continuous multiplier functions µ(·) and ν(·) mapping [0, T] into E^s and E^q, respectively,
a vector η(τ_i) ∈ E^q for each point τ_i of discontinuity of λ(·), and
α ∈ E^{l_a} and β ∈ E^{l_b}, not all zero,
such that the following conditions hold almost everywhere:

Indirect Adjoining approach, Theorem Part 2
Hamiltonian maximization:
u*(t) = arg max_{u ∈ Ω(x*(t), t)} H(x*(t), u, λ_0, λ(t), t)
L_u[t] = 0
λ̇(t) = −L_x[t]
µ(t) ≥ 0, µ(t) g*[t] = 0
ν_i(·) is nondecreasing on boundary intervals of h_i, i = 1, 2, ..., q, with ν(t) ≥ 0 and ν(t) h*[t] = 0,
and dH*[t]/dt = dL*[t]/dt = L_t[t], whenever these derivatives exist.

Indirect Adjoining approach, Theorem Part 3
At the terminal time T, the transversality conditions
λ(T⁻) = λ_0 S_x[T] + α a_x[T] + β b_x[T] + γ h_x[T]
α ≥ 0, γ ≥ 0, α a*[T] = γ h*[T] = 0
hold. At each entry or contact time τ, the costate trajectory λ may have a discontinuity of the form
λ(τ⁻) = λ(τ⁺) + η(τ) h_x[τ]
H[τ⁻] = H[τ⁺] − η(τ) h_t[τ]
η(τ) ≥ 0, η(τ) h*[τ] = 0,
and η(τ_1) ≤ ν(τ_1⁺) for entry times τ_1.

Example: Double Integrator
minimize ∫_0^T ( x^T R x + u^T Q u ) dt
subject to ẋ(t) = [ 0 1 ; 0 0 ] x(t) + [ 0 1 ]^T u(t)
−1 ≤ u(t) ≤ 1
|x_2(t)| ≤ c
x(0) = (x_i, 0)^T,  x(1) = (0, 0)^T

Numerical Solution
Number of discretization points: 100
Q = 0.25,  R = [ 1 0 ; 0 2 ],  c = 0.24,  x_I = (0.17, 0)^T
[Figure: numerical solution; states x and control u versus t (sec).]
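A minimal sketch of how such a discretized solution could be obtained (my own illustration, not the authors' code): after forward-Euler discretization with N = 100 steps the problem is a convex quadratic program, which can be handed to an interior-point solver through CVXPY. The parameter values below are the ones taken from the slide above.

```python
# Minimal sketch: direct discretization of the double-integrator SVIC problem,
# solved with an interior-point solver via CVXPY (assumes cvxpy and ECOS installed).
import numpy as np
import cvxpy as cp

N, T = 100, 1.0                       # discretization steps and horizon
dt = T / N
R = np.diag([1.0, 2.0])               # state weights r1, r2
q = 0.25                              # control weight
c = 0.24                              # state bound |x2| <= c
x0 = np.array([0.17, 0.0])            # initial state x_I

x = cp.Variable((2, N + 1))
u = cp.Variable(N)

cost = 0
constr = [x[:, 0] == x0, x[:, N] == 0]
for k in range(N):
    cost += (cp.quad_form(x[:, k], R) + q * cp.square(u[k])) * dt
    constr += [x[0, k + 1] == x[0, k] + dt * x[1, k],   # forward-Euler dynamics
               x[1, k + 1] == x[1, k] + dt * u[k],
               cp.abs(u[k]) <= 1,
               cp.abs(x[1, k + 1]) <= c]

prob = cp.Problem(cp.Minimize(cost), constr)
prob.solve(solver=cp.ECOS)            # ECOS is an interior-point solver
print(prob.status, prob.value)
```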

Direct Approach: Parameters, Hamiltonian and Lagrangian
Since the general problem was posed as a maximization, the minimization cost of the example is adjoined with a negative sign:
F(x(t), u(t), t) = −x^T R x − u^T Q u
S(T) = 0
f_1(x(t), u(t), t) = x_2(t)
f_2(x(t), u(t), t) = u(t)
H = −λ_0 ( r_1 x_1(t)² + r_2 x_2(t)² + q u(t)² ) + λ_1(t) x_2(t) + λ_2(t) u(t)
L = H + µ_1(t)(1 − u(t)) + µ_2(t)(1 + u(t)) + ν_1(t)(c − x_2(t)) + ν_2(t)(c + x_2(t))

Direct Approach: Constraints
h(x(t)) = ( c − x_2(t), c + x_2(t) )^T
g(u(t)) = ( 1 − u(t), 1 + u(t) )^T
b(x(T)) = ( x_1(T), x_2(T) )^T

Direct Approach: Hamiltonian Maximization
Without constraints, the Hamiltonian is maximized by
û(t) = (1/(2q)) λ_2(t)/λ_0.
Considering the constraints, the optimal control is
u*(t) = −1 if λ_2(t) < −2q,
u*(t) = (1/(2q)) λ_2(t)/λ_0 if −2q < λ_2(t) < 2q,
u*(t) = 1 if λ_2(t) > 2q.
Looking at the numerical solution, u* as a function of time is
u*(t) = −1 for t < τ_1,
u*(t) = (1/(2q)) λ_2(t)/λ_0 for τ_1 < t < τ_2,
u*(t) = 0 for τ_2 < t < τ_3,
u*(t) = (1/(2q)) λ_2(t)/λ_0 for τ_3 < t < τ_4,
u*(t) = 1 for τ_4 < t.
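Written out as code (my own sketch, taking λ_0 = 1 as is done implicitly in the rest of the example), the saturated control law above is simply a clipped linear function of λ_2:

```python
# Minimal sketch of the saturated Hamiltonian-maximizing control law.
import numpy as np

def u_star(lam2, q=0.25):
    # interior solution lam2/(2q), saturated at the control bounds -1 and +1
    return float(np.clip(lam2 / (2.0 * q), -1.0, 1.0))

print(u_star(-1.0), u_star(0.1), u_star(1.0))   # -> -1.0 0.2 1.0
```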

Direct Approach: The Conditions
L_u*[t] = H_u*[t] + µ g_u*[t] = 0:
−2q u(t) + λ_2(t) − µ_1(t) + µ_2(t) = 0   (1)
λ̇(t) = −L_x*[t]:
λ̇_1(t) = 2λ_0 r_1 x_1(t)
λ̇_2(t) = 2λ_0 r_2 x_2(t) − λ_1(t) + ν_1(t) − ν_2(t)   (2)
µ(t) = (µ_1(t), µ_2(t))^T ≥ 0, µ(t) g*[t] = 0:
(µ_1(t) + µ_2(t)) − (µ_1(t) − µ_2(t)) u*(t) = 0   (3)
ν(t) = (ν_1(t), ν_2(t))^T ≥ 0, ν(t) h*[t] = 0:
(ν_1(t) + ν_2(t)) c − (ν_1(t) − ν_2(t)) x_2(t) = 0   (4)

Direct Approach: The Conditions
dH*[t]/dt = dL*[t]/dt = L_t[t]:
−2r_1 x_1(t) x_2(t) − 2r_2 x_2(t) u(t) − 2q u(t) u̇(t) + λ̇_1(t) x_2(t) + λ_1(t) u(t) + λ̇_2(t) u(t) + λ_2(t) u̇(t) = 0   (5)
u̇(t)(µ_1(t) − µ_2(t)) + (µ̇_1(t) − µ̇_2(t)) u(t) − (µ̇_1(t) + µ̇_2(t)) + (ν_1(t) − ν_2(t)) u(t) + (ν̇_1(t) − ν̇_2(t)) x_2(t) − c (ν̇_1(t) + ν̇_2(t)) = 0   (6)

Direct Approach: The Conditions
Transversality conditions:
λ(T⁻) = β b_x*[T] + γ h_x*[T]:
( λ_1[1⁻], λ_2[1⁻] )^T = I_{2×2} (β_1, β_2)^T + [ 0 0 ; −1 1 ] (γ_1, γ_2)^T
γ ≥ 0, γ h*[T] = 0:
(γ_1 + γ_2) c − (γ_1 − γ_2) x_2(T) = 0   (7)
Since x_2(T) = 0, we conclude γ_1 = γ_2 = 0.   (8)

Direct Approach: u*(t) = −1, t < τ_1
Assuming that x_i = x_1(0) > 0:
x_1(t) = −(1/2)t² + x_i,  x_2(t) = −t   (9)
From (1) follows:
2q + λ_2(t) − µ_1(t) + µ_2(t) = 0   (10)
From (3) follows:
µ_1(t) = 0, µ_2(t) free   (11)
From (4) follows:
ν_1(t) = ((t − c)/(t + c)) ν_2(t)   (12)
From (6) follows:
2µ̇_1(t) + (t + c) ν̇_1(t) − (t − c) ν̇_2(t) + (ν_1(t) − ν_2(t)) = 0   (13)

Direct Approach: u*(t) = −1, t < τ_1
From (2) follows:
λ̇_1(t) = −r_1 t² + 2r_1 x_i
λ̇_2(t) = −2r_2 t − λ_1(t) + ν_1(t) − ν_2(t)   (14)
Combined with (5), this gives
ν_1(t) = ν_2(t)   (15)
From (12) and (15) follows ν_1(t) = ν_2(t) = 0.
From (14) follows:
λ_1(t) = −(1/3) r_1 t³ + 2r_1 x_i t − K_1
λ_2(t) = (1/12) r_1 t⁴ − (r_1 x_i + r_2) t² + K_1 t + K_2   (16)

Direct Approach: u*(t) = 0, τ_2 < t < τ_3
x_1(t) = −c t + K_6,  x_2(t) = −c   (17)
From (1) follows:
λ_2(t) − µ_1(t) + µ_2(t) = 0   (18)
From (3) follows:
µ_1(t) = µ_2(t) = 0   (19)
From (4) follows:
ν_1(t) = 0, ν_2(t) free   (20)
(6) provides no new information.

Direct Approach: u*(t) = 0, τ_2 < t < τ_3
From (2) follows:
λ̇_1(t) = −2r_1 c t + 2r_1 K_6
(5) gives the same equation for λ̇_1(t).
λ̇_2(t) = −λ_1(t) − 2r_2 c − ν_2(t)   (21)
From (18) and (19) we conclude λ_2(t) = 0. Considering this, (21) simplifies to:
ν_2(t) = −λ_1(t) − 2r_2 c   (22)
λ_1(t) = −r_1 c t² + K_6 t + K_7   (23)

Direct Approach: u*(t) = 1, t > τ_4
x_1(t) = (1/2)t² + K_9 t + K_8,  x_2(t) = t + K_9   (24)
From (1) follows:
−2q + λ_2(t) − µ_1(t) + µ_2(t) = 0   (25)
From (3) follows:
µ_1(t) free, µ_2(t) = 0   (26)
From (4) follows:
(c − t) ν_1(t) + (c + t) ν_2(t) − (ν_1(t) − ν_2(t)) K_9 = 0   (27)

Direct Approach: u*(t) = 1, t > τ_4
From (6) follows:
ν_1(t) − ν_2(t) + (ν̇_1(t) − ν̇_2(t))(t − c + K_9) = 0   (28)
From (2) follows:
λ̇_1(t) = 2r_1 ( (1/2)t² + K_9 t + K_8 )
λ̇_2(t) = 2r_2 (t + K_9) − λ_1(t) + ν_1(t) − ν_2(t)   (29)
Combined with (5), this gives
ν_1(t) = ν_2(t)   (30)
From (27) and (30) follows ν_1(t) = ν_2(t) = 0.
From (29) follows:
λ_1(t) = (1/3) r_1 t³ + r_1 K_9 t² + 2r_1 K_8 t + K_10
λ_2(t) = −(1/12) r_1 t⁴ − (1/3) r_1 K_9 t³ + (r_2 − r_1 K_8) t² + (2r_2 K_9 − K_10) t + K_11   (31)

Direct Approach: u*(t) = (1/(2q)) λ_2(t), τ_1 < t < τ_2 and τ_3 < t < τ_4
From (1) follows:
u(t) = ( λ_2(t) − µ_1(t) + µ_2(t) ) / (2q)   (32)
From (3) follows:
µ_1(t) = µ_2(t)   (33)
µ_1(t) + µ_2(t) = 0   (34)
Therefore, µ_1(t) = µ_2(t) = 0.
From (4) follows:
c (ν_1(t) + ν_2(t)) − (ν_1(t) − ν_2(t)) x_2(t) = 0   (35)
From (6) follows:
(ν_1(t) − ν_2(t)) λ_2(t)/(2q) + (ν̇_1(t) − ν̇_2(t)) x_2(t) − c (ν̇_1(t) + ν̇_2(t)) = 0   (36)

Direct Approach: u*(t) = (1/(2q)) λ_2(t), τ_1 < t < τ_2 and τ_3 < t < τ_4
From (5) follows:
( −2r_2 x_2(t) + λ_1(t) + λ̇_2(t) ) λ_2(t) = 0   (37)
With λ_2(t) ≠ 0,
−2r_2 x_2(t) + λ_1(t) + λ̇_2(t) = 0.   (38)
Comparing with (2) and using (4), we conclude that ν_1(t) = ν_2(t) = 0.
Using (38), the system dynamics and u*(t) = (1/(2q)) λ_2(t), we conclude:
λ_2^{(4)}(t) = (1/q) ( r_2 λ̈_2(t) − r_1 λ_2(t) )   (39)

Direct Approach: u*(t) = (1/(2q)) λ_2(t), τ_1 < t < τ_2 and τ_3 < t < τ_4
From this follows:
λ_2(t) = C_1 e^{σ_1 t} + C_2 e^{−σ_1 t} + C_3 e^{σ_2 t} + C_4 e^{−σ_2 t}   (40)
with
σ_1 = √( (r_2 + √(r_2² − 4 q r_1)) / (2q) ),  σ_2 = √( (r_2 − √(r_2² − 4 q r_1)) / (2q) ).
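As a quick numerical cross-check (my addition), σ_1 and σ_2 are the positive roots of the characteristic polynomial q s⁴ − r_2 s² + r_1 = 0 of (39); with the example values r_1 = 1, r_2 = 2, q = 0.25 they come out to roughly 2.73 and 0.73:

```python
# Minimal sketch: verify sigma_1, sigma_2 against the characteristic polynomial of (39).
import numpy as np

r1, r2, q = 1.0, 2.0, 0.25
roots = np.roots([q, 0.0, -r2, 0.0, r1])                   # q s^4 - r2 s^2 + r1 = 0
sigma1 = np.sqrt((r2 + np.sqrt(r2**2 - 4*q*r1)) / (2*q))
sigma2 = np.sqrt((r2 - np.sqrt(r2**2 - 4*q*r1)) / (2*q))
print(sorted(roots.real))        # approx [-2.732, -0.732, 0.732, 2.732]
print(sigma1, sigma2)            # matches the two positive roots
```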

Direct Approach: u*(t) = (1/(2q)) λ_2(t), τ_1 < t < τ_2 and τ_3 < t < τ_4
From the system equations:
ẋ_2(t) = u(t):
x_2(t) = (1/(2q)) ( (C_1/σ_1) e^{σ_1 t} − (C_2/σ_1) e^{−σ_1 t} + (C_3/σ_2) e^{σ_2 t} − (C_4/σ_2) e^{−σ_2 t} ) + κ_1   (41)
ẋ_1(t) = x_2(t):
x_1(t) = (1/(2q)) ( (C_1/σ_1²) e^{σ_1 t} + (C_2/σ_1²) e^{−σ_1 t} + (C_3/σ_2²) e^{σ_2 t} + (C_4/σ_2²) e^{−σ_2 t} ) + κ_1 t + κ_2   (42)
(On the arc (τ_1, τ_2) the constants C_1, ..., C_4, κ_1, κ_2 are written C_1, ..., C_4, K_12, K_13 below; on (τ_3, τ_4) they are written D_1, ..., D_4, K_14, K_15.)

Direct Approach: u*(t) = (1/(2q)) λ_2(t), τ_1 < t < τ_2 and τ_3 < t < τ_4
With (38) it follows:
λ_1(t) = −(σ_1 − r_2/(σ_1 q)) (C_1 e^{σ_1 t} − C_2 e^{−σ_1 t}) − (σ_2 − r_2/(σ_2 q)) (C_3 e^{σ_2 t} − C_4 e^{−σ_2 t}) + 2 r_2 K_12   (43)

Direct Approach: Initial and Final Conditions
The initial condition on x was used earlier. Final conditions for x:
x_1(1) = 0, x_2(1) = 0   (44)
From the system equations when u = 1 it follows that
K_8 = 1/2, K_9 = −1.   (45)
Continuity of u:
u(τ_1⁺) = −1 ⇒ λ_2(τ_1⁺) = −2q:
C_1 e^{σ_1 τ_1} + C_2 e^{−σ_1 τ_1} + C_3 e^{σ_2 τ_1} + C_4 e^{−σ_2 τ_1} = −2q   (46)
u(τ_2⁻) = 0 ⇒ λ_2(τ_2⁻) = 0:
C_1 e^{σ_1 τ_2} + C_2 e^{−σ_1 τ_2} + C_3 e^{σ_2 τ_2} + C_4 e^{−σ_2 τ_2} = 0   (47)

Direct Approach: Initial and Final Conditions
u(τ_3⁺) = 0 ⇒ λ_2(τ_3⁺) = 0:
D_1 e^{σ_1 τ_3} + D_2 e^{−σ_1 τ_3} + D_3 e^{σ_2 τ_3} + D_4 e^{−σ_2 τ_3} = 0
u(τ_4⁻) = 1 ⇒ λ_2(τ_4⁻) = 2q:
D_1 e^{σ_1 τ_4} + D_2 e^{−σ_1 τ_4} + D_3 e^{σ_2 τ_4} + D_4 e^{−σ_2 τ_4} = 2q
Continuity of x:
x_1(τ_1⁻) = x_1(τ_1⁺):
−(1/2)τ_1² + x_i = (1/(2q)) ( (C_1/σ_1²) e^{σ_1 τ_1} + (C_2/σ_1²) e^{−σ_1 τ_1} + (C_3/σ_2²) e^{σ_2 τ_1} + (C_4/σ_2²) e^{−σ_2 τ_1} ) + K_12 τ_1 + K_13

Direct Approach: Initial and Final Conditions
x_2(τ_1⁻) = x_2(τ_1⁺):
−τ_1 = (1/(2q)) ( (C_1/σ_1) e^{σ_1 τ_1} − (C_2/σ_1) e^{−σ_1 τ_1} + (C_3/σ_2) e^{σ_2 τ_1} − (C_4/σ_2) e^{−σ_2 τ_1} ) + K_12
x_1(τ_2⁻) = x_1(τ_2⁺):
(1/(2q)) ( (C_1/σ_1²) e^{σ_1 τ_2} + (C_2/σ_1²) e^{−σ_1 τ_2} + (C_3/σ_2²) e^{σ_2 τ_2} + (C_4/σ_2²) e^{−σ_2 τ_2} ) + K_12 τ_2 + K_13 = −c τ_2 + K_6
x_2(τ_2⁻) = x_2(τ_2⁺):
(1/(2q)) ( (C_1/σ_1) e^{σ_1 τ_2} − (C_2/σ_1) e^{−σ_1 τ_2} + (C_3/σ_2) e^{σ_2 τ_2} − (C_4/σ_2) e^{−σ_2 τ_2} ) + K_12 = −c

Direct Approach: Initial and Final Conditions
x_1(τ_3⁻) = x_1(τ_3⁺):
−c τ_3 + K_6 = (1/(2q)) ( (D_1/σ_1²) e^{σ_1 τ_3} + (D_2/σ_1²) e^{−σ_1 τ_3} + (D_3/σ_2²) e^{σ_2 τ_3} + (D_4/σ_2²) e^{−σ_2 τ_3} ) + K_14 τ_3 + K_15
x_2(τ_3⁻) = x_2(τ_3⁺):
−c = (1/(2q)) ( (D_1/σ_1) e^{σ_1 τ_3} − (D_2/σ_1) e^{−σ_1 τ_3} + (D_3/σ_2) e^{σ_2 τ_3} − (D_4/σ_2) e^{−σ_2 τ_3} ) + K_14

Direct Approach: Initial and Final Conditions
x_1(τ_4⁻) = x_1(τ_4⁺):
(1/(2q)) ( (D_1/σ_1²) e^{σ_1 τ_4} + (D_2/σ_1²) e^{−σ_1 τ_4} + (D_3/σ_2²) e^{σ_2 τ_4} + (D_4/σ_2²) e^{−σ_2 τ_4} ) + K_14 τ_4 + K_15 = (1/2)τ_4² − τ_4 + 1/2
x_2(τ_4⁻) = x_2(τ_4⁺):
(1/(2q)) ( (D_1/σ_1) e^{σ_1 τ_4} − (D_2/σ_1) e^{−σ_1 τ_4} + (D_3/σ_2) e^{σ_2 τ_4} − (D_4/σ_2) e^{−σ_2 τ_4} ) + K_14 = τ_4 − 1

Direct Approach: Continuity of λ
λ_1(τ_1⁻) = λ_1(τ_1⁺):
−(1/3) r_1 τ_1³ + 2r_1 x_i τ_1 − K_1 = −(σ_1 − r_2/(σ_1 q)) (C_1 e^{σ_1 τ_1} − C_2 e^{−σ_1 τ_1}) − (σ_2 − r_2/(σ_2 q)) (C_3 e^{σ_2 τ_1} − C_4 e^{−σ_2 τ_1}) + 2 r_2 K_12
λ_2(τ_1⁻) = λ_2(τ_1⁺):
(1/12) r_1 τ_1⁴ − (r_1 x_i + r_2) τ_1² + K_1 τ_1 + K_2 = C_1 e^{σ_1 τ_1} + C_2 e^{−σ_1 τ_1} + C_3 e^{σ_2 τ_1} + C_4 e^{−σ_2 τ_1}

Direct Approach: Continuity of λ
λ_1(τ_2⁻) = λ_1(τ_2⁺):
−(σ_1 − r_2/(σ_1 q)) (C_1 e^{σ_1 τ_2} − C_2 e^{−σ_1 τ_2}) − (σ_2 − r_2/(σ_2 q)) (C_3 e^{σ_2 τ_2} − C_4 e^{−σ_2 τ_2}) + 2 r_2 K_12 = −r_1 c τ_2² + K_6 τ_2 + K_7
λ_2(τ_2⁻) = λ_2(τ_2⁺):
C_1 e^{σ_1 τ_2} + C_2 e^{−σ_1 τ_2} + C_3 e^{σ_2 τ_2} + C_4 e^{−σ_2 τ_2} = 0

Direct Approach: Continuity of λ
λ_1(τ_3⁻) = λ_1(τ_3⁺):
−r_1 c τ_3² + K_6 τ_3 + K_7 = −(σ_1 − r_2/(σ_1 q)) (D_1 e^{σ_1 τ_3} − D_2 e^{−σ_1 τ_3}) − (σ_2 − r_2/(σ_2 q)) (D_3 e^{σ_2 τ_3} − D_4 e^{−σ_2 τ_3}) + 2 r_2 K_14
λ_2(τ_3⁻) = λ_2(τ_3⁺):
0 = D_1 e^{σ_1 τ_3} + D_2 e^{−σ_1 τ_3} + D_3 e^{σ_2 τ_3} + D_4 e^{−σ_2 τ_3}

Direct Approach: Continuity of λ
λ_1(τ_4⁻) = λ_1(τ_4⁺):
−(σ_1 − r_2/(σ_1 q)) (D_1 e^{σ_1 τ_4} − D_2 e^{−σ_1 τ_4}) − (σ_2 − r_2/(σ_2 q)) (D_3 e^{σ_2 τ_4} − D_4 e^{−σ_2 τ_4}) + 2 r_2 K_14 = (1/3) r_1 τ_4³ − r_1 τ_4² + r_1 τ_4 + K_10
λ_2(τ_4⁻) = λ_2(τ_4⁺):
D_1 e^{σ_1 τ_4} + D_2 e^{−σ_1 τ_4} + D_3 e^{σ_2 τ_4} + D_4 e^{−σ_2 τ_4} = −(1/12) r_1 τ_4⁴ + (1/3) r_1 τ_4³ + (r_2 − (1/2) r_1) τ_4² + (−2r_2 − K_10) τ_4 + K_11

Direct Approach: Results
These conditions give 24 unknowns but only 20 equations, most of them nonlinear. To determine a solution we need four more equations, so the solution is computed by reading off values of the numerical solution at the following four points:
u(0.2) = −0.5840, x_2(0.2) = −0.1724
u(0.8) = 0.5841, x_2(0.8) = −0.1692

Direct Approach: Results
[Figure: resulting trajectories x_1(t) and u(t) from the direct approach, t ∈ [0, 1] (sec).]

Conclusion
Since H is concave, the solution to the example found by the direct approach is optimal according to the Mangasarian-type sufficient conditions. The necessary conditions by themselves do not provide enough information to compute the solution for this example.
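A small check of the concavity claim (my addition, taking λ_0 = 1): the Hessian of H with respect to (x_1, x_2, u) in the example is the constant matrix −2·diag(r_1, r_2, q), whose eigenvalues are all negative.

```python
# Minimal sketch: H is jointly concave in (x1, x2, u) since its Hessian
# is -2*diag(r1, r2, q), which is negative definite for r1, r2, q > 0.
import numpy as np

r1, r2, q = 1.0, 2.0, 0.25
hessian = -2.0 * np.diag([r1, r2, q])
print(np.linalg.eigvalsh(hessian))   # all negative -> H strictly concave
```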

References This work is based on the following references: Richard F. Hartl, Suresh P. Sethi and Raymond G. Vickson, "A Survey of the Maximum Principles for Optimal Control Problems with State Constraints," SIAM Review, Vol. 37, No. 2 (Jun., 1995), pp. 181-218 Seierstad, Atle, and Knut Sydsæter. Optimal Control Theory with Economic Applications. Amsterdam: North-Holland, 1987.