Tutorial on Control and State Constrained Optimal Control Problems and Applications
Part 3: Pure State Constraints

University of Münster, Institute of Computational and Applied Mathematics
SADCO Summer School, Imperial College London, September 5, 2011

Outline

1. Theory of Optimal Control Problems with Pure State Constraints
2. Academic Example: order q = 1 of the state constraint
3. Van der Pol Oscillator: order q = 1 of the state constraint
4. Example: Immune Response
5. Example: Optimal Control of a Model of Climate Change

Optimal Control Problem with Pure State Constraints

State x(t) ∈ R^n, control u(t) ∈ R (for simplicity). All functions are assumed to be sufficiently smooth.

Dynamics and boundary conditions:
  ẋ(t) = f(x(t), u(t)),  t ∈ [0, t_f],
  x(0) = x_0 ∈ R^n,  ψ(x(t_f)) = 0 ∈ R^k
  (or mixed boundary conditions 0 = ϕ(x(0), x(t_f))).

Pure state constraint and control bounds:
  s(x(t)) ≤ 0,  t ∈ [0, t_f]   (s : R^n → R),
  α ≤ u(t) ≤ β,  t ∈ [0, t_f].

Minimize
  J(u, x) = g(x(t_f)) + ∫_0^{t_f} f_0(x(t), u(t)) dt

Hamiltonian and Minimum Principle

Hamiltonian:
  H(x, λ, u) = λ_0 f_0(x, u) + λ f(x, u),   λ ∈ R^n (row vector).

Let (u, x) ∈ L^∞([0, t_f], R) × W^{1,∞}([0, t_f], R^n) be a locally optimal pair of functions. Then there exist an adjoint (costate) function λ ∈ BV([0, t_f], R^n), a scalar λ_0 ≥ 0, a multiplier function of bounded variation µ ∈ BV([0, t_f], R), and a multiplier ρ ∈ R^k associated with the boundary condition ψ(x(t_f)) = 0, that satisfy the following conditions for a.a. t ∈ [0, t_f], where the argument (t) denotes evaluation along the trajectory (x(t), u(t), λ(t)):

Minimum Principle of Pontryagin et al. (Hestenes)

(i) Adjoint integral equation and transversality condition:
  λ(t) = ∫_t^{t_f} H_x(s) ds + ∫_t^{t_f} s_x(x(s)) dµ(s) + (λ_0 g + ρψ)_x(x(t_f))
  (the last term in this form if s(x(t_f)) < 0).

(iia) Minimum condition for the Hamiltonian:
  H(x(t), λ(t), u(t)) = min { H(x(t), λ(t), u) | α ≤ u ≤ β }.

(iii) Positive measure dµ and complementarity condition:
  dµ(t) ≥ 0   and   ∫_0^{t_f} s(x(t)) dµ(t) = 0.

Order of a state constraint s(x(t)) ≤ 0

Define recursively the functions s^(k)(x, u) by
  s^(0)(x, u) = s(x),   s^(k+1)(x, u) = s^(k)_x(x, u) f(x, u)   (k = 0, 1, ...).

Suppose there exists q ∈ N with
  s^(k)_u(x, u) ≡ 0, i.e., s^(k) = s^(k)(x)   (k = 0, 1, ..., q − 1),
  s^(q)_u(x, u) ≠ 0.

Then along a solution of ẋ(t) = f(x(t), u(t)) we have
  s^(k)(x(t)) = d^k/dt^k s(x(t))   (k = 0, 1, ..., q − 1),
  s^(q)(x(t), u(t)) = d^q/dt^q s(x(t)).
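
The recursion above is easy to mechanize. The following small sympy sketch (our own illustration, using a hypothetical double integrator with a position bound, not an example from these slides) differentiates s along the dynamics until the control appears explicitly and thus determines the order q:

```python
# Hypothetical example: double integrator x1' = x2, x2' = u with the bound x1 <= xmax.
import sympy as sp

x1, x2, u, xmax = sp.symbols("x1 x2 u xmax")
states = [x1, x2]
f = sp.Matrix([x2, u])            # dynamics
s = x1 - xmax                     # state constraint s(x) <= 0

sk = s
for k in range(1, 5):
    # s^(k) = s^(k-1)_x * f  (total time derivative along the dynamics)
    sk = sum(sp.diff(sk, xi) * fi for xi, fi in zip(states, f))
    if sp.diff(sk, u) != 0:
        print("order q =", k, "  s^(q) =", sk)   # here: order q = 2, s^(q) = u
        break
```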

Regularity conditions to ensure dµ(t) = η(t) dt

Regularity assumption on a boundary arc s(x(t)) = 0, t_1 ≤ t ≤ t_2:
  s^(q)_u(x(t), u(t)) ≠ 0   for t_1 ≤ t ≤ t_2.

Assumption on the boundary control: there exists a sufficiently smooth boundary control u = u_b(x) with
  s^(q)(x, u_b(x)) ≡ 0.

Assumption:  α < u(t) = u_b(x(t)) < β   for t_1 < t < t_2.

Regularity of the multiplier (measure) µ: the regularity and boundary-control assumptions imply that there exists a smooth multiplier η(t) with
  dµ(t) = η(t) dt   for t_1 < t < t_2.

Minimum Principle under regularity

Augmented Hamiltonian:
  H(x, λ, η, u) = H(x, λ, u) + η s(x).

Adjoint equation and jump conditions:
  λ̇(t) = −H_x(x(t), λ(t), η(t), u(t)) = −H_x(x(t), λ(t), u(t)) − η(t) s_x(x(t)),
  λ(t_k+) = λ(t_k−) − ν_k s_x(x(t_k)),  ν_k ≥ 0,
at each contact or junction time t_k, where ν_k = µ(t_k+) − µ(t_k−).

Minimum condition:
  H(x(t), λ(t), u(t)) = min { H(x(t), λ(t), u) | α ≤ u ≤ β },
  H_u(t) = 0 on boundary arcs t_1 < t < t_2.

Academic Example

Minimize
  J(x, u) = ∫_0^2 (u² + x²) dt
subject to
  ẋ = −x² − u,  x(0) = 1,  x(2) = 1,
and the state constraint
  x(t) ≥ a,  0 ≤ t ≤ 2.

[Figure: state trajectories x(t) for the unconstrained solution and for a = 0.6, 0.7, 0.85.]

Boundary arc x(t) ≡ a = 0.7 for t_1 = 0.614 ≤ t ≤ t_2 = 1.386.
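
Before turning to the multiplier analysis, the example can also be attacked numerically. The sketch below is a minimal direct transcription (explicit Euler in time, SLSQP from scipy); the grid size, solver and tolerances are our own choices and are not taken from the slides:

```python
import numpy as np
from scipy.optimize import minimize

N, T = 100, 2.0
h = T / N
a = 0.7                              # state bound x(t) >= a

def simulate(u):
    """Integrate xdot = -x^2 - u with explicit Euler from x(0) = 1."""
    x = np.empty(N + 1)
    x[0] = 1.0
    for k in range(N):
        x[k + 1] = x[k] + h * (-x[k]**2 - u[k])
    return x

def objective(u):
    x = simulate(u)
    return h * np.sum(u**2 + x[:-1]**2)          # rectangle rule for u^2 + x^2

def terminal_constraint(u):
    return simulate(u)[-1] - 1.0                 # x(2) = 1

def state_constraint(u):
    return simulate(u) - a                       # x(t) - a >= 0 at every grid point

res = minimize(objective, np.zeros(N), method="SLSQP",
               constraints=[{"type": "eq",   "fun": terminal_constraint},
                            {"type": "ineq", "fun": state_constraint}],
               options={"maxiter": 500, "ftol": 1e-9})
x_opt = simulate(res.x)
print(res.fun, x_opt.min())          # cost and minimal state value (close to a)
```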

Academic Example: state and control trajectories

[Figure: state trajectories x(t) and controls u(t) for the unconstrained solution and for a = 0.6, 0.7, 0.85.]

Academic Example: Computation of the Multiplier η

State constraint: s(x) = a − x ≤ 0.

The order of the state constraint is q = 1, since
  s^(1)(x, u) = −ẋ = x² + u,   (s^(1))_u = 1.

The boundary control u = u_b(x) with s^(1)(x, u_b(x)) ≡ 0 is given by
  u_b(x) = −x² = −a².

Augmented Hamiltonian and adjoint equation:
  H(x, λ, η, u) = u² + x² + λ(−x² − u) + η(a − x),
  λ̇ = −H_x = −2x + 2λx + η.

The minimum condition 0 = H_u = 2u − λ gives u = λ/2. Since u ≡ −a² holds on a boundary arc, we get λ̇ = 0 there, and hence the multiplier on the boundary is
  η(t) ≡ 2a(1 + 2a²) > 0.
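
The boundary-arc computation above can be checked symbolically; the following sympy sketch (a verification aid, not part of the original slides) reproduces u_b = −x², λ = −2a² and η = 2a(1 + 2a²):

```python
import sympy as sp

x, u, lam, eta, a = sp.symbols("x u lambda eta a")
f = -x**2 - u                                   # dynamics
H = u**2 + x**2 + lam*f + eta*(a - x)           # augmented Hamiltonian

u_b = sp.solve(sp.Eq(sp.diff(a - x, x)*f, 0), u)[0]      # s^(1) = 0  ->  u_b = -x**2
lam_arc = sp.solve(sp.Eq(sp.diff(H, u), 0), lam)[0]      # H_u = 0    ->  lambda = 2*u
lam_on_arc = lam_arc.subs(u, u_b).subs(x, a)             # lambda = -2*a**2 (constant on the arc)
# lambda is constant on the arc, so lambdadot = -H_x must vanish there; solve for eta:
eta_b = sp.solve(sp.Eq(-sp.diff(H, x).subs({x: a, lam: lam_on_arc}), 0), eta)[0]
print(sp.factor(eta_b))                                  # 2*a*(2*a**2 + 1)
```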

Academic Example: state trajectories and multiplier

[Figure: state trajectories x(t) for a = 0.6, 0.7, 0.85 and multiplier η(t).]

Van der Pol Oscillator

Minimize
  J(x, u) = ∫_0^{t_f} (u² + x_1² + x_2²) dt   (t_f = 4),
subject to
  ẋ_1 = x_2,
  ẋ_2 = −x_1 + x_2(1 − x_1²) + u,
  x_1(0) = x_2(0) = 1,  x_1(t_f) = x_2(t_f) = 0,
and the state constraint
  x_2(t) ≥ a,  0 ≤ t ≤ t_f.

[Figure: state trajectories x_2(t) for the unconstrained solution and for a = −0.5, −0.4, −0.3.]

Boundary arc x_2(t) ≡ a = −0.4 for t_1 = 0.887 ≤ t ≤ t_2 = 2.62.

Van der Pol: Computation of the Multiplier η

The order of the state constraint is q = 1, since s(x) = a − x_2 and
  s^(1)(x, u) = −ẋ_2 = x_1 − x_2(1 − x_1²) − u,   (s^(1))_u = −1 ≠ 0.

The boundary control u = u_b(x) with s^(1)(x, u_b(x)) ≡ 0 is given by
  u_b(x) = x_1 − x_2(1 − x_1²) = x_1 − a(1 − x_1²).

Augmented Hamiltonian and adjoint equations:
  H(x, λ, η, u) = u² + x_1² + x_2² + λ_1 x_2 + λ_2(−x_1 + x_2(1 − x_1²) + u) + η(a − x_2),
  λ̇_1 = −H_{x_1} = −2x_1 + λ_2(1 + 2x_1 x_2),
  λ̇_2 = −H_{x_2} = −2x_2 − λ_1 − λ_2(1 − x_1²) + η.

The minimum condition 0 = H_u = 2u + λ_2 gives u = −λ_2/2. Hence
  x_1 − a + a x_1² = −λ_2/2
holds on the boundary. Differentiation yields
  η = η(x, λ) = −4a² x_1 + λ_1 + λ_2(1 − x_1²).

Van der Pol Oscillator: optimal control

[Figure: state trajectories x_2(t) and controls u(t) for the unconstrained solution and for a = −0.5, −0.4, −0.3.]

Boundary arc x_2(t) ≡ a = −0.4 for t_1 = 0.887 ≤ t ≤ t_2 = 2.62.

Van der Pol Oscillator: multiplier η

[Figure: state trajectories x_2(t) and multiplier η(t).]

Boundary arc x_2(t) ≡ a = −0.4 for t_1 = 0.887 ≤ t ≤ t_2 = 2.62.

CASE I: Hamiltonian is regular

CASE I: the control u appears nonlinearly and U = R. Assume that the Hamiltonian is regular, i.e., H(x, λ, u) admits a unique minimum with respect to u, and the strict Legendre condition H_uu(t) > 0 holds. Let q be the order of the state constraint s(x) ≤ 0 and consider a boundary arc with s(x(t)) = 0 for t_1 ≤ t ≤ t_2.

Junction conditions:
q = 1: the control u(t) and the adjoint variable λ(t) are continuous at t_k, k = 1, 2.
q = 2: the control u(t) is continuous, but the adjoint variable may have jumps according to λ(t_k+) = λ(t_k−) − ν_k s_x(x(t_k)).
q ≥ 3 and q odd: if the control u(t) is piecewise analytic, then there are no boundary arcs but only contact points.

Case II: control u appears linearly

Dynamics and boundary conditions:
  ẋ(t) = f_1(x(t)) + f_2(x(t)) u(t),  a.e. t ∈ [0, t_f],
  x(0) = x_0 ∈ R^n,  ψ(x(t_f)) = 0 ∈ R^k.

Control and state constraints:
  α ≤ u(t) ≤ β,   s(x(t)) ≤ 0,   t ∈ [0, t_f].

Minimize
  J(u, x) = g(x(t_f)) + ∫_0^{t_f} ( f_{01}(x(t)) + f_{02}(x(t)) u(t) ) dt

Case II: Hamiltonian and switching function

Normal Hamiltonian:
  H(x, λ, u) = f_{01}(x) + λ f_1(x) + [ f_{02}(x) + λ f_2(x) ] u.

Augmented Hamiltonian:
  H(x, λ, µ, u) = H(x, λ, u) + µ s(x).

Switching function:
  σ(x, λ) = H_u(x, λ, u) = f_{02}(x) + λ f_2(x),   σ(t) = σ(x(t), λ(t)).

On a boundary arc we have
  s(x(t)) = 0,  t_1 ≤ t ≤ t_2,   α < u(t) < β,  t_1 < t < t_2.

The minimum condition for the control implies
  0 = H_u(x(t), λ(t), u(t)) = σ(t),   t_1 ≤ t ≤ t_2.

Formally, the boundary control behaves like a singular control.

Case II: Boundary Control and Junction Theorem

Let q be the order of the state constraint:
  s^(q)(x, u) = d^q/dt^q s(x) = s_1(x) + s_2(x) u.

The boundary control u = u_b(x) is determined from s^(q)(x, u) = 0 as
  u = u_b(x) = −s_1(x)/s_2(x).

Junction Theorem for q = 1: let q = 1 and let a bang-bang arc be joined with a boundary arc at t_1 ∈ (0, T).
Claim: if the control is discontinuous at t_1, then the adjoint variable is continuous at t_1.

Model of the immune response

Dynamic model of the immune response:
Asachenkov A, Marchuk G, Mohler R, Zuev S, Disease Dynamics, Birkhäuser, Boston, 1994.

Optimal control:
Stengel RF, Ghigliazza R, Kulkarni N, Laplace O, Optimal control of innate immune response, Optimal Control Applications and Methods 23, 91–104 (2002).

Lisa Poppe, Julia Meskauskas: Diploma theses, Universität Münster (2006, 2008).

Innate Immune Response: state and control variables

State variables:
x_1(t): concentration of pathogen (= concentration of associated antigen)
x_2(t): concentration of plasma cells, which are carriers and producers of antibodies
x_3(t): concentration of antibodies, which kill the pathogen (= concentration of immunoglobulins)
x_4(t): relative characteristic of a damaged organ (0 = healthy, 1 = dead)

Control variables:
u_1(t): pathogen killer
u_2(t): plasma cell enhancer
u_3(t): antibody enhancer
u_4(t): organ healing factor

Generic dynamical model of the immune response

  ẋ_1(t) = (1 − x_3(t)) x_1(t) − u_1(t),
  ẋ_2(t) = 3 A(x_4(t)) x_1(t − d) x_3(t − d) − (x_2(t) − 2) + u_2(t),
  ẋ_3(t) = x_2(t) − (1.5 + 0.5 x_1(t)) x_3(t) + u_3(t),
  ẋ_4(t) = x_1(t) − x_4(t) − u_4(t).

Immune deficiency function triggered by target organ damage:
  A(x_4) = cos(π x_4) for 0 ≤ x_4 ≤ 0.5,   A(x_4) = 0 for 0.5 ≤ x_4 ≤ 1.
For 0.5 ≤ x_4(t) ≤ 1 the production of plasma cells stops.

State delay d ≥ 0 in the variables x_1 and x_3.

Initial conditions (d = 0): x_2(0) = 2, x_3(0) = 4/3, x_4(0) = 0.
Case 1: x_1(0) = 1.5, decay, requires no therapy (control).
Case 2: x_1(0) = 2.0, slower decay, requires no therapy.
Case 3: x_1(0) = 3.0, diverges without control (lethal case).
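
For orientation, the undelayed (d = 0) model is easy to simulate without therapy. The sketch below (integrator and step size are our own choices) contrasts Case 1, which decays on its own, with the lethal Case 3:

```python
import numpy as np
from scipy.integrate import solve_ivp

def A(x4):
    """Immune deficiency function: cos(pi*x4) for x4 <= 0.5, zero afterwards."""
    return np.cos(np.pi * x4) if x4 <= 0.5 else 0.0

def rhs(t, x, u=(0.0, 0.0, 0.0, 0.0)):
    x1, x2, x3, x4 = x
    u1, u2, u3, u4 = u
    return [(1.0 - x3) * x1 - u1,
            3.0 * A(x4) * x1 * x3 - (x2 - 2.0) + u2,
            x2 - (1.5 + 0.5 * x1) * x3 + u3,
            x1 - x4 - u4]

# Case 1 (x1(0) = 1.5) decays without therapy; Case 3 (x1(0) = 3) diverges.
for x10 in (1.5, 3.0):
    sol = solve_ivp(rhs, (0.0, 10.0), [x10, 2.0, 4.0/3.0, 0.0], max_step=0.01)
    print(x10, sol.y[0, -1], sol.y[3, -1])      # pathogen and organ damage at t = 10
```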

Optimal control model: cost functional

State x = (x_1, x_2, x_3, x_4) ∈ R^4, control u = (u_1, u_2, u_3, u_4) ∈ R^4.

L²-functional, quadratic in the control (Stengel et al.): Minimize
  J_2(x, u) = x_1(t_f)² + x_4(t_f)² + ∫_0^{t_f} ( x_1² + x_4² + u_1² + u_2² + u_3² + u_4² ) dt.

L¹-functional, linear in the control: Minimize
  J_1(x, u) = x_1(t_f)² + x_4(t_f)² + ∫_0^{t_f} ( x_1² + x_4² + u_1 + u_2 + u_3 + u_4 ) dt.

Control constraints: 0 ≤ u_i(t) ≤ u_max, i = 1, ..., 4.
Final time: t_f = 10.

L² functional, d = 0: optimal state and control variables

[Figure: state variables x_1, x_2, x_3, x_4 and optimal controls u_1, u_2, u_3, u_4.]

Second-order sufficient conditions are verified via a matrix Riccati equation.

L² functional, d = 0: state constraint x_4(t) ≤ 0.2

[Figure: state and control variables for the state constraint x_4(t) ≤ 0.2.]

Boundary arc x_4(t) ≡ 0.2 for t_1 = 0.398 ≤ t ≤ t_2 = 1.35.

L² functional, multiplier η(t) for the constraint x_4(t) ≤ 0.2

Compute the multiplier η as a function of (x, λ):
  η(x, λ) = 3π λ_2 sin(π x_4) x_1 x_3 + λ_1 + λ_4 − 2(1 − x_3) x_1 − 2 x_4.

[Figure: scaled multiplier 0.1 η(t) and boundary arc x_4(t) ≡ 0.2.]
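
The formula can be re-derived mechanically. The sympy sketch below assumes that u_1 and u_4 lie in the interior of [0, u_max] on the boundary arc (so H_{u_1} = H_{u_4} = 0); this interiority is our assumption, not stated on the slide. Under it, ẋ_4 = 0 gives u_4 = x_1 − x_4, hence λ_4 = 2(x_1 − x_4) and λ̇_4 = 2ẋ_1, which is equated with the adjoint equation λ̇_4 = −H_{x_4} and solved for η:

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols("x1 x2 x3 x4")
u1, u2, u3, u4 = sp.symbols("u1 u2 u3 u4")
l1, l2, l3, l4, eta = sp.symbols("lambda1 lambda2 lambda3 lambda4 eta")

f = [(1 - x3)*x1 - u1,
     3*sp.cos(sp.pi*x4)*x1*x3 - (x2 - 2) + u2,
     x2 - (sp.Rational(3, 2) + x1/2)*x3 + u3,
     x1 - x4 - u4]
L = x1**2 + x4**2 + u1**2 + u2**2 + u3**2 + u4**2
H = L + l1*f[0] + l2*f[1] + l3*f[2] + l4*f[3] + eta*(x4 - sp.Rational(1, 5))

u1_int = sp.solve(sp.diff(H, u1), u1)[0]        # H_u1 = 0  ->  u1 = lambda1/2
lam4dot_arc = 2*f[0].subs(u1, u1_int)           # lambdadot4 = 2*xdot1 on the arc
lam4dot_adjoint = -sp.diff(H, x4)               # adjoint equation for lambda4
print(sp.solve(sp.Eq(lam4dot_arc, lam4dot_adjoint), eta)[0])   # eta(x, lambda)
```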

L² functional, delay d > 0, constraint x_4(t) ≤ α

Dynamics with state delay d > 0:
  ẋ_1(t) = (1 − x_3(t)) x_1(t) − u_1(t),
  ẋ_2(t) = 3 cos(π x_4(t)) x_1(t − d) x_3(t − d) − (x_2(t) − 2) + u_2(t),
  ẋ_3(t) = x_2(t) − (1.5 + 0.5 x_1(t)) x_3(t) + u_3(t),
  ẋ_4(t) = x_1(t) − x_4(t) − u_4(t),
  x_4(t) ≤ α ≤ 0.5.

Initial conditions:
  x_1(t) = 0 for −d ≤ t < 0,  x_1(0) = 3,
  x_3(t) = 4/3 for −d ≤ t ≤ 0,
  x_2(0) = 2,  x_4(0) = 0.
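
A state delay of this simple form can be handled by the method of steps: integrate over intervals of length d and evaluate x_1(t − d), x_3(t − d) from the previously computed segment. The sketch below does this for the uncontrolled system; the integrator, step size and the left-limit convention at t = d are our own choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

d, tf = 1.0, 10.0
hist = np.array([0.0, 2.0, 4.0/3.0, 0.0])   # only x1 = 0 and x3 = 4/3 matter for t < 0
segments = []                                # dense solutions, one per interval of length d

def delayed(t):
    """x(t - d) from the pre-history or from an already computed segment."""
    s = t - d
    if s <= 0.0:
        return hist                          # left-limit convention at s = 0
    for seg in segments:
        if seg.t[0] - 1e-9 <= s <= seg.t[-1] + 1e-9:
            return seg.sol(s)
    raise ValueError("no segment covers t - d")

def rhs(t, x):
    x1, x2, x3, x4 = x
    y1, _, y3, _ = delayed(t)
    A = np.cos(np.pi * x4) if x4 <= 0.5 else 0.0
    return [(1.0 - x3) * x1,
            3.0 * A * y1 * y3 - (x2 - 2.0),
            x2 - (1.5 + 0.5 * x1) * x3,
            x1 - x4]

x0, t0 = np.array([3.0, 2.0, 4.0/3.0, 0.0]), 0.0
while t0 < tf - 1e-9:
    seg = solve_ivp(rhs, (t0, min(t0 + d, tf)), x0, dense_output=True, max_step=0.05)
    segments.append(seg)
    x0, t0 = seg.y[:, -1], seg.t[-1]
print(x0)                                    # uncontrolled state at t_f = 10
```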

L² functional: delay d = 1 and x_4(t) ≤ 0.2

[Figure: state variables for d = 0 and d = 1.]

L² functional: delay d = 1 and x_4(t) ≤ 0.2

[Figure: optimal controls for d = 0 and d = 1.]

L² functional, d = 1: multiplier η(t) for x_4(t) ≤ 0.2

Compute the multiplier η as a function of (x, y, λ), where y(t) = x(t − d) denotes the delayed state:
  η(x, y, λ) = 3π λ_2 sin(π x_4) y_1 y_3 + λ_1 + λ_4 − 2(1 − x_3) x_1 − 2 x_4.

[Figure: scaled multiplier 0.1 η(t) and boundary arc x_4(t) ≡ 0.2; η(t) is discontinuous at t = d = 1.]

L¹ functional: no delays

Minimize
  J_1(x, u) = x_1(t_f)² + x_4(t_f)² + ∫_0^{t_f} ( x_1² + x_4² + u_1 + u_2 + u_3 + u_4 ) dt.

Dynamics (d = 0) and control constraints:
  ẋ_1(t) = (1 − x_3(t)) x_1(t) − u_1(t),
  ẋ_2(t) = 3 A(x_4(t)) x_1(t) x_3(t) − (x_2(t) − 2) + u_2(t),
  ẋ_3(t) = x_2(t) − (1.5 + 0.5 x_1(t)) x_3(t) + u_3(t),
  ẋ_4(t) = x_1(t) − x_4(t) − u_4(t),
  0 ≤ u_i(t) ≤ u_max,  0 ≤ t ≤ t_f  (i = 1, ..., 4).

L¹ functional: u_max = 2

L¹ functional: non-delayed, time-optimal control for x_1(t_f) = x_4(t_f) = 0, x_3(t_f) = 4/3

u_max = 1: minimal time t_f = 2.25; singular arc for u_4(t).

Dynamical Model of Climate Change

A. Greiner, L. Grüne, and W. Semmler, Growth and Climate Change: Threshold and Multiple Equilibria, Working Paper, Schwartz Center for Economic Policy Analysis, The New School, 2009, to appear.

State variables:
K(t): capital (per capita)
M(t): CO_2 concentration in the atmosphere
T(t): temperature (Kelvin)

Control variables:
C(t): consumption
A(t): abatement per capita

Dynamical Model of Climate Change

Production:  Y = K^0.18 D(T − T_o),   T_o = 288 (K).
Damage:  D(T − T_o) = ( 0.025 (T − T_o)² + 1 )^(−0.025).

Dynamics of per-capita capital K:
  K̇ = Y − C − A − (δ + n) K,  K(0) = K_0   (δ = 0.075, n = 0.03).

Emission:  E = 3.5 · 10^(−4) K / A.

Dynamics of the CO_2 concentration M:
  Ṁ = 0.49 E − 0.1 M,  M(0) = M_0.

Dynamical Model of Climate Change (continued)

Albedo (non-reflected energy) at temperature T (Kelvin):
  α_1(T) = 2 k_1 ( arctan(π (T − 293)) + π/2 ) + k_2,   k_1 = 5.6 · 10^(−3),  k_2 = 0.795.

Radiative forcing:  5.35 ln(M).
Outgoing radiative flux (Stefan–Boltzmann law):  ε σ_T T⁴.
Parameters:  ε = 0.95,  σ_T = 5.67 · 10^(−8),  c_th = 0.49707,  Q = 1367.

Dynamics of the temperature T with delay d ≥ 0:
  Ṫ(t) = c_th [ α_1(T(t)) Q/4 − ε σ_T T(t)⁴ + 5.35 ln(M(t − d)) ],   T(0) = T_0.

Optimal Control Model of Climate Change

Control constraints for 0 ≤ t ≤ t_f = 200:
  0 < C(t) ≤ C_max = 1,
  7 · 10^(−4) ≤ A(t) ≤ 3 · 10^(−3).

State constraints of order 2 and 3:
  M(t) ≤ M_max,  0 ≤ t ≤ t_f = 200,
  T(t) ≤ T_max,  t_e = 20 ≤ t ≤ t_f = 200.

Maximize consumption:
  J(K, M, T, C, A) = ∫_0^{t_f} e^{(n − ρ) t} ln(C(t)) dt.

Stationary points of the canonical system

Constant abatement A = 1.2 · 10^(−3): 3 stationary points
  T_s = 291.607,  M_s = 2.0596,   K_s = 1.44720,
  T_s = 294.258,  M_s = 1.90954,  K_s = 1.34726,
  T_s = 294.969,  M_s = 2.07792,  K_s = 1.46606.

Control variable abatement A(t): social optimum
  T_s = 288.286,  M_s = 1.28500,  K_s = 1.79647.
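
The capital and CO_2 parts of the model are easy to transcribe and to check against the reported stationary values. The sketch below follows the reconstruction given above (in particular the exponent 0.18 and the damage exponent −0.025 are taken from that reconstruction) and evaluates Ṁ and the implied stationary consumption at the first stationary point:

```python
delta, n, T_o = 0.075, 0.03, 288.0

def damage(T):                       # D(T - T_o), as reconstructed above
    return (0.025 * (T - T_o)**2 + 1.0) ** (-0.025)

def production(K, T):                # Y = K^0.18 * D(T - T_o)
    return K**0.18 * damage(T)

def emission(K, A):                  # E = 3.5e-4 * K / A
    return 3.5e-4 * K / A

def M_dot(M, K, A):                  # CO2 dynamics
    return 0.49 * emission(K, A) - 0.1 * M

K_s, M_s, T_s, A_const = 1.44720, 2.0596, 291.607, 1.2e-3
print(M_dot(M_s, K_s, A_const))      # close to zero at the first stationary point
print(production(K_s, T_s) - A_const - (delta + n) * K_s)   # implied stationary consumption
```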

T(0) = 291, M(0) = 2.0, K(0) = 1.4

[Figure: concentration M and emission E, temperature T (Kelvin), consumption C, and capital K over 0 ≤ t ≤ 200.]

T(0) = 291, M(0) = 2.0, K(0) = 1.4;  T(t_f) = 291, M(t_f) = 1.8, K(t_f) = 1.4

[Figure: concentration M and emission E, temperature T (Kelvin), consumption C, and capital K over 0 ≤ t ≤ 200.]

T(0) = 291, M(0) = 2.0, K(0) = 1.4;  T(t_f) = 291, M(t_f) = 1.8, K(t) ≥ 1.2, 0.85 ≤ C(t)

[Figure: concentration M and emission E, temperature T (Kelvin), consumption C, and capital K over 0 ≤ t ≤ 200.]

d = 0:  T(0) = 291, M(0) = 1.8, K(0) = 1.4;  T(t_f) = 291, M(t_f) = 1.8, K(t_f) = 1.4

[Figure: concentration M and emission E, temperature T (Kelvin), consumption C, and capital K over 0 ≤ t ≤ 200.]

T(0) = 294.5, M(0) = 2.2, K(0) = 1.4;  T(t_f) = 291, M(t_f) = 1.8, K(t_f) = 1.4, 0.85 ≤ C(t)

[Figure: concentration M and emission E, temperature T (Kelvin), consumption C, and capital K over 0 ≤ t ≤ 200.]

d = 0:  T(0) = 294.5, M(0) = 2.2, K(0) = 1.4;  T(t_f) = 291, K(t_f) = 1.4, 0.85 ≤ C(t)

State constraint of order two:  M(t) ≤ 1.9 for 30 ≤ t ≤ t_f = 200.

[Figure: concentration M and emission E, temperature T (Kelvin), consumption C, and capital K over 0 ≤ t ≤ 200.]

d = 0:  T(0) = 294.5, M(0) = 2.2, K(0) = 1.4;  T(t_f) = 291, K(t_f) = 1.4, 0.85 ≤ C(t)

State constraint of order two:  M(t) ≤ 1.9 for 30 ≤ t ≤ t_f = 200.

[Figure: concentration M and emission E, and adjoint variables λ_K, λ_M, λ_T over 0 ≤ t ≤ 200.]