Optimal Control. Historical Overview and Theoretical Development. Nicolai Christov
Optimal Control: Historical Overview and Theoretical Development
Nicolai Christov
Laboratoire Franco-Chinoise d'Automatique et Signaux, Université Lille 1 Sciences et Technologies, and Nanjing University of Science and Technology
École d'été CA NTI, Université "Politehnica" de Bucarest, May 2014
Short contents
1. The Optimal Control Problem
2. Calculus of Variations
3. Optimal Control without Inequality Constraints
4. Pontryagin Maximum Principle
1. The Optimal Control Problem
1.1 The brachistochrone problem

"I, Johann Bernoulli, address the most brilliant mathematicians in the world. Nothing is more attractive to intelligent people than an honest, challenging problem, whose possible solution will bestow fame and remain as a lasting monument. Following the example set by Pascal, Fermat, etc., I hope to gain the gratitude of the whole scientific community by placing before the finest mathematicians of our time a problem which will test their methods and the strength of their intellect. If someone communicates to me the solution of the proposed problem, I shall publicly declare him worthy of praise. Given two points A and B in a vertical plane, what is the curve traced out by a point acted on only by gravity, which starts at A and reaches B in the shortest time?" (Acta Eruditorum, June 1696)
J. Bernoulli (1667–1748); the Acta Eruditorum journal

Johann Bernoulli made a number of contributions to infinitesimal calculus and educated Leonhard Euler. In the 17th century there were only four scientific journals: Journal des savants (1665), Philosophical Transactions of the Royal Society (1665), Giornale de letterati (1668) and Acta Eruditorum (1682).
The solution of the brachistochrone problem, which is a segment of a cycloid, was found by Newton, Leibnitz, L'Hôpital, and the two Bernoulli brothers, Johann and Jacob. The Royal Society published Newton's solution anonymously in the Philosophical Transactions of the Royal Society in January 1697. The May 1697 issue of Acta Eruditorum contained Leibnitz's solution to the brachistochrone problem, Johann Bernoulli's solution, Jacob Bernoulli's solution, and a Latin translation of Newton's solution. The solution by de L'Hôpital was not published until 1988.
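The optimality of the cycloid is easy to check numerically. The sketch below (a plain NumPy illustration, not part of the original lecture; the endpoints A = (0, 0) and B = (π, 2) and r = 1 are my own choices) compares the descent time along a cycloid segment with the descent time along the straight chord, using the speed v = √(2gy) of a particle falling from rest.

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def trapezoid(f, x):
    """Composite trapezoidal rule for samples f at points x."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

# Cycloid x = r(th - sin th), y = r(1 - cos th) with r = 1 joins
# A = (0, 0) to B = (pi, 2) as th runs over [0, pi].
r = 1.0
th = np.linspace(0.0, np.pi, 100_001)[1:]       # skip th = 0 where v = 0
y = r * (1.0 - np.cos(th))
ds_dth = 2.0 * r * np.sin(th / 2.0)             # arc element ds/dth
t_cycloid = trapezoid(ds_dth / np.sqrt(2.0 * g * y), th)

# Straight chord y = m x with m = 2/pi; substitute x = s^2 to remove
# the 1/sqrt(x) singularity of the integrand at the start.
m = 2.0 / np.pi
s = np.linspace(0.0, np.sqrt(np.pi), 100_001)[1:]
integrand = np.sqrt(1.0 + m**2) / np.sqrt(2.0 * g * m * s**2) * 2.0 * s
t_line = trapezoid(integrand, s)

print(f"cycloid: {t_cycloid:.4f} s, straight line: {t_line:.4f} s")
```

For this cycloid the descent time is exactly π√(r/g) ≈ 1.003 s, noticeably shorter than the roughly 1.19 s along the chord.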
I. Newton (1643–1727); G.W. Leibnitz (1646–1716)

Isaac Newton is considered by many to be the greatest and most influential scientist who ever lived. Gottfried Wilhelm Leibnitz was one of the great thinkers of the seventeenth and eighteenth centuries and is known as the last "universal genius".
Jacob Bernoulli (1655–1705); G. de l'Hôpital (1661–1704)

Jacob Bernoulli made important contributions to probability theory and differential equations. Guillaume de l'Hôpital was a student of Johann Bernoulli and is the author of the first textbook on the differential calculus, Analyse des infiniment petits pour l'intelligence des lignes courbes (1696).
1.2 General assumptions

Consider continuous-time dynamical systems

ẋ(t) = f(x(t), u(t), t)   (1)

The state and control vectors are subject to constraints:

x(t) ∈ X,  u(t) ∈ U   (2)

The function f is continuous (or at least piecewise continuous) and has partial derivatives with respect to all its arguments. The function f satisfies the Lipschitz condition

‖f(x, u, t) − f(x̄, u, t)‖ < K ‖x − x̄‖
1.3 Statement of the optimal control problem

Find the control u(t) which transfers the state x(t) from an initial value x(t₀) to a final value x(t_f), satisfies the constraint x(t) ∈ X, and minimizes a performance index (optimality criterion)

I = ∫_{t₀}^{t_f} F(x(t), u(t), t) dt + φ(x(t₀), x(t_f), t₀, t_f)   (3)

or

I = ∫_{t₀}^{t_f} F(x(t), u(t), t) dt + φ(x(t_f), t_f)   (4)
1.4 Types of optimal control problems

According to the optimality criterion:
- time optimal control problem
- fuel optimal control problem
- minimization of the state vector deviation from a reference value

According to the boundary conditions:
- fixed or free final state or time

According to the state equation:
- linear / nonlinear
- time-invariant or not
- with or without external perturbations
The solution of the optimal control problem depends on whether the bounds on u and x are attained or not:

First case (the boundary values of u and/or x are not attained): calculus of variations.

Second case (the boundary values of u and/or x are attained): Pontryagin maximum principle.
2. Calculus of Variations
2.1 Preliminaries: definitions

Function: variable → variable
Functional: function → variable

Standard problem: find x(t) minimizing the functional

I = ∫_{t₀}^{t_f} F(x(t), ẋ(t), t) dt,  x(t) ∈ ℝⁿ.   (5)

Thus one looks for x*(t) such that I(x) > I(x*) for all x(t) ≠ x*(t).
2.1 Preliminaries: variational approach

The standard problem can be reformulated in the following way: find x(t) such that

I(ε) = ∫_{t₀+ετ₀}^{t_f+ετ_f} F(x(t) + εη(t), ẋ(t) + εη̇(t), t) dt   (6)

is minimal for ε = 0, whatever the constants τ₀ and τ_f and regardless of the function η(t).

Necessary condition for a minimum:

( dI(ε)/dε )|_{ε=0} = 0
2.2 Euler equation

In the case when the limits t₀, t_f, x(t₀), x(t_f) are fixed we have

( dI(ε)/dε )|_{ε=0} = ∫_{t₀}^{t_f} ( ∂F/∂x · η + ∂F/∂ẋ · η̇ ) dt = ∫_{t₀}^{t_f} ( ∂F/∂x − (d/dt) ∂F/∂ẋ ) η dt

L. Euler showed that ( dI(ε)/dε )|_{ε=0} is zero for every η(t) if and only if the equation

∂F/∂x − (d/dt) ∂F/∂ẋ = 0   (7)

is satisfied. This equation is called the Euler equation.
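For the arc-length functional F(x, ẋ, t) = √(1 + ẋ²), the Euler equation gives (d/dt)(ẋ/√(1 + ẋ²)) = 0, i.e. ẋ constant: the extremal is a straight line. A minimal numerical sketch (my own illustration; the endpoints and the perturbation shape are assumptions) discretizes the functional and compares the straight line against a perturbed curve x + εη with η vanishing at both endpoints.

```python
import numpy as np

def arc_length(x, t):
    """Discretize I = ∫ sqrt(1 + ẋ²) dt with finite differences."""
    xdot = np.diff(x) / np.diff(t)
    return float(np.sum(np.sqrt(1.0 + xdot**2) * np.diff(t)))

t = np.linspace(0.0, 1.0, 1001)
x_line = 2.0 * t                 # straight line from (0, 0) to (1, 2)
eta = np.sin(np.pi * t)          # perturbation, zero at both endpoints

I_line = arc_length(x_line, t)           # close to sqrt(5) ≈ 2.236
I_pert = arc_length(x_line + 0.1 * eta, t)

print(I_line, I_pert)   # the straight line gives the smaller value
```

Any admissible perturbation increases the discretized functional, as the Euler equation predicts for this F.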
2.3 Transversality conditions

If the limits t₀, t_f, x(t₀), x(t_f) are not fixed, we have

( dI(ε)/dε )|_{ε=0} = ∫_{t₀}^{t_f} ( ∂F/∂x − (d/dt) ∂F/∂ẋ ) η dt + (∂F/∂ẋ)ᵀ η |_{t_f} − (∂F/∂ẋ)ᵀ η |_{t₀} + ( F − (∂F/∂ẋ)ᵀ ẋ )|_{t_f} τ_f − ( F − (∂F/∂ẋ)ᵀ ẋ )|_{t₀} τ₀

Thus the minimization of I consists of satisfying both the Euler equation (7) and the transversality conditions

∂F/∂ẋ |_{t_f} = 0,  ∂F/∂ẋ |_{t₀} = 0,  ( F − (∂F/∂ẋ)ᵀ ẋ )|_{t_f} = 0,  ( F − (∂F/∂ẋ)ᵀ ẋ )|_{t₀} = 0   (8)
2.4 Variational problem with equality constraint

In this case one seeks the function x(t) that minimizes

I = ∫_{t₀}^{t_f} F(x(t), ẋ(t), t) dt   (9)

under a constraint

g(x(t), ẋ(t), t) = 0.   (10)

J.L. Lagrange showed that the solution of this problem can be obtained by solving the standard problem

I = ∫_{t₀}^{t_f} ( F(x(t), ẋ(t), t) + λᵀ(t) g(x(t), ẋ(t), t) ) dt → min.   (11)

The function λ(t) is called the Lagrange multiplier.
L. Euler (1707–1783); J.L. Lagrange (1736–1813)

L. Euler made enormous contributions to a wide range of mathematics and physics. J.L. Lagrange made important contributions to all fields of analysis, number theory, and classical and celestial mechanics. L. Euler is considered the equivalent of a doctoral advisor to J.L. Lagrange.
3. Optimal Control without Inequality Constraints
3.1 General case

For the dynamical system

ẋ(t) = f(x(t), u(t), t),  x(t₀) = x₀   (12)

one seeks the control u(t) that minimizes the performance index

I = ∫_{t₀}^{t_f} F(x(t), u(t), t) dt.   (13)

Define the function

L(x, ẋ, u, λ, t) = F(x, u, t) + λᵀ(t) ( f(x, u, t) − ẋ )   (14)

called the Lagrange function.
Thus the problem is to minimize

J = ∫_{t₀}^{t_f} L(x, ẋ, u, λ, t) dt.   (15)

In this case the necessary conditions for a minimum are

∂L/∂x − (d/dt) ∂L/∂ẋ = 0  ⇒  ∂L/∂x + λ̇ = 0
∂L/∂λ − (d/dt) ∂L/∂λ̇ = 0  ⇒  ∂L/∂λ = 0   (16)
∂L/∂u − (d/dt) ∂L/∂u̇ = 0  ⇒  ∂L/∂u = 0

These equations are called the Euler-Lagrange equations.
If t_f and x(t_f) are free, it is necessary to take into account the transversality conditions:

x(t_f) free  ⇒  ∂L/∂ẋ |_{t_f} = 0
t_f free  ⇒  ( L − (∂L/∂ẋ)ᵀ ẋ )|_{t_f} = 0
3.2 Linear-Quadratic problem

For the linear system

ẋ(t) = Ax(t) + Bu(t),  x(0) = x₀   (17)

one seeks the control u(t) that minimizes the quadratic criterion

I = ½ ∫₀^{t_f} ( xᵀ(t)Qx(t) + uᵀ(t)Ru(t) ) dt + ½ xᵀ(t_f) S x(t_f)   (18)

where Q and S are symmetric positive semi-definite matrices and R is symmetric positive definite. The final time t_f is fixed and the final state x(t_f) is free.
In this case the Euler-Lagrange equations are

λ̇(t) = −Qx(t) − Aᵀλ(t)
ẋ(t) = Ax(t) + Bu(t)   (19)
Ru(t) + Bᵀλ(t) = 0  ⇒  u(t) = −R⁻¹Bᵀλ(t)

with the transversality condition

λ(t_f) = Sx(t_f)   (20)
In 1960 R.E. Kalman showed that

λ(t) = P(t)x(t),  P(t_f) = S   (21)

where P(t) satisfies the differential Riccati equation

−Ṗ = AᵀP + PA − PBR⁻¹BᵀP + Q,  P(t_f) = S.   (22)

It follows that

u(t) = −K(t)x(t),  K(t) = R⁻¹BᵀP(t)   (23)

i.e. we have a feedback optimal control.
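The differential Riccati equation (22) is integrated backward in time from the terminal condition P(t_f) = S. A minimal sketch (my own illustration, not from the lecture) does this with a fixed-step RK4 scheme; the scalar data A = 0, B = Q = R = 1, S = 0 are assumed values for which the exact solution is P(t) = tanh(t_f − t).

```python
import numpy as np

def riccati_backward(A, B, Q, R, S, tf, n_steps=20_000):
    """Integrate dP/dt = -(A'P + PA - P B R^{-1} B' P + Q)
    backward from P(tf) = S, returning P(0)."""
    Rinv = np.linalg.inv(R)
    def f(P):
        # right-hand side dP/dt of the Riccati equation (22)
        return -(A.T @ P + P @ A - P @ B @ Rinv @ B.T @ P + Q)
    h = tf / n_steps
    P = S.astype(float).copy()
    for _ in range(n_steps):           # step the time from tf down to 0
        k1 = f(P)
        k2 = f(P - 0.5 * h * k1)
        k3 = f(P - 0.5 * h * k2)
        k4 = f(P - h * k3)
        P = P - h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return P

A = np.array([[0.0]]); B = np.array([[1.0]])
Q = np.array([[1.0]]); R = np.array([[1.0]]); S = np.array([[0.0]])
P0 = riccati_backward(A, B, Q, R, S, tf=10.0)
K0 = np.linalg.inv(R) @ B.T @ P0       # feedback gain K(0) = R^{-1} B' P(0)
print(P0, K0)
```

For this scalar problem P(0) = tanh(10) ≈ 1, so the time-varying gain K(t) is essentially constant far from the terminal time, which motivates the infinite-horizon problem below.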
3.2 Linear-Quadratic problem: infinite horizon

In this case one seeks the control u(t) minimizing the criterion

I = ∫₀^∞ ( xᵀ(t)Qx(t) + uᵀ(t)Ru(t) ) dt   (24)

with x_f = 0.

R.E. Kalman showed that if the pair (A, B) is controllable and the pair (A, Q) is observable, the matrix P in the relation λ(t) = Px(t) is the unique positive definite solution of the algebraic Riccati equation

AᵀP + PA − PBR⁻¹BᵀP + Q = 0.   (25)

Thus the optimal control is

u(t) = −R⁻¹BᵀPx(t) = −Kx(t).   (26)
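The algebraic Riccati equation (25) can be solved directly with SciPy's `solve_continuous_are`. A short sketch (a double-integrator example of my own choosing, with Q and R set to identity weights) computes the gain K and verifies that the closed loop A − BK is stable:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator: states (position, velocity), force input.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)           # state weight
R = np.array([[1.0]])   # control weight

P = solve_continuous_are(A, B, Q, R)   # stabilizing solution of (25)
K = np.linalg.inv(R) @ B.T @ P         # optimal gain, u = -Kx

# The optimal feedback places the closed-loop eigenvalues in the
# open left half-plane:
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print(K, closed_loop_eigs.real)
```

For these weights the solution is P = [[√3, 1], [1, √3]] and K = [1, √3], a standard benchmark for checking LQR implementations.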
R.E. Kalman (b. 1930); L.S. Pontryagin (1908–1988)

In the 1960s, R.E. Kalman was the leader in the development of modern control theory. His main contributions are the "Kalman filter" and the Linear-Quadratic Regulator. L.S. Pontryagin made important contributions to topology, algebra, and dynamical systems. In 1961 he published The Mathematical Theory of Optimal Processes with his students V.G. Boltyanskii, R.V. Gamkrelidze and E.F. Mishchenko.
4. Pontryagin Maximum Principle
4.1 Maximum principle formulation

Consider now the optimal control problem with inequality constraints u(t) ∈ U. In this case the optimal control consists of parts where u(t) lies in the interior of U and parts where u(t) lies on the boundary of U.

u(t) in the interior of U: one can use the classical variational methods.

u(t) on the boundary of U: one cannot use the classical variational methods, because variations cannot be made beyond the boundary.
One seeks the control u(t) ∈ U that transfers the system

ẋ(t) = f(x(t), u(t))   (27)

from the initial state to the final state x_f and minimizes the criterion

I = ∫_{t₀}^{t_f} F(x(t), u(t)) dt.   (28)

To the n components x_i of the state vector x the component

x₀(t) = ∫_{t₀}^{t} F(x(τ), u(τ)) dτ

is added, so that if x = x_f, then x₀(t_f) = I. One has

ẋ₀(t) = F(x(t), u(t)) ≡ f₀(x(t), u(t))   (29)
Introduce the adjoint vector ψ such that

ψ̇ = −(∂f/∂x)ᵀ ψ,  i.e.  ψ̇_i = −Σ_{j=0}^{n} (∂f_j/∂x_i) ψ_j,  i = 0, …, n   (30)

and the Hamiltonian

H(ψ, x, u) = ψᵀ f = Σ_{i=0}^{n} ψ_i f_i.   (31)

Equations (27), (29) and (30) can be rewritten in the Hamilton canonical form

dx_i/dt = ∂H/∂ψ_i,  dψ_i/dt = −∂H/∂x_i,  i = 0, …, n   (32)
Denote M(ψ, x) = sup_{u∈U} H(ψ, x, u). In 1956 L.S. Pontryagin and his students V.G. Boltyanskii, R.V. Gamkrelidze and E.F. Mishchenko formulated the so-called Pontryagin Maximum Principle:

In order for a control u(t) and a trajectory x(t) to be optimal, it is necessary that there exist a nonzero continuous vector ψ(t) satisfying the system (32) such that:

1. The Hamiltonian H(ψ(t), x(t), u(t)) attains its maximum for u = u(t) for all t ∈ [t₀, t_f], i.e.

H(ψ(t), x(t), u(t)) = M(ψ(t), x(t))   (33)

2. At the terminal time t_f, the relations

ψ₀(t_f) ≤ 0,  M(ψ(t_f), x(t_f)) = 0   (34)

are satisfied.
4.2 Time optimal control

In this case one seeks the control u(t) transferring the system

ẋ(t) = f(x(t), u(t)) = a(x(t)) + B(x(t))u(t)   (35)

from the state x₀ to the state x_f and minimizing the criterion

I = ∫_{t₀}^{t_f} dt = t_f − t₀   (36)

under the constraint U_i^min ≤ u_i(t) ≤ U_i^max, i = 1, 2, …, m.

The Hamiltonian for this problem is

H(ψ(t), x(t), u(t)) = −1 + ψᵀ(t)a(x(t)) + ψᵀ(t)B(x(t))u(t).   (37)
It therefore remains to find the maximum of ψᵀ(t)B(x(t))u(t), the only term depending on u. Decompose B(x(t)) into columns b_i(x(t)):

ψᵀ(t)B(x(t))u(t) = Σ_{i=1}^{m} ψᵀ(t)b_i(x(t)) u_i(t).

If the u_i are independent, it suffices to find the maximum of each term ψᵀ(t)b_i(x(t)) u_i(t) separately.
For each t one thus finds the value of u_i(t):

ψᵀ(t)b_i(x(t)) > 0  ⇒  u_i(t) = U_i^max
ψᵀ(t)b_i(x(t)) < 0  ⇒  u_i(t) = U_i^min
ψᵀ(t)b_i(x(t)) = 0  ⇒  u_i(t) undetermined

If this term does not vanish on any time interval, the system is called normal (otherwise, singular). For singular systems, the maximum principle does not apply.
The optimal control is thus a succession of the values U_i^min and U_i^max (hence the name of this type of control: bang-bang).

Algorithm for solving the time optimal problem: integrate

u_i(t) = sign( ψᵀ(t)b_i(x(t)) )
ẋ(t) = a(x(t)) + B(x(t))u(t)   (38)
ψ̇ᵀ(t) = −ψᵀ(t) ∂a(x(t))/∂x − ψᵀ(t) Σ_{i=1}^{m} ∂b_i(x(t))/∂x · u_i(t)

One must therefore find ψ(t₀) such that x(t_f) is attained. The time required to attain x(t_f) is the minimal time.
4.2 Time optimal control: linear systems

Consider the system

ẋ(t) = Ax(t) + Bu(t),  −1 ≤ u_i(t) ≤ 1.   (39)

Equations (38) become

u_i(t) = sign( ψᵀ(t)b_i )
ẋ(t) = Ax(t) + Bu(t)   (40)
ψ̇(t) = −Aᵀψ(t)
The system is normal (i.e. there is no time interval [t₁, t₂] on which ψᵀb_i = 0) if the system is controllable with respect to every component u_i of the input vector. To check whether a system is normal, it is therefore sufficient to verify that

rank [ b_i  Ab_i  A²b_i  ⋯  Aⁿ⁻¹b_i ] = n

for each control component.

A normal linear time-invariant system of order n with real poles switches at most n − 1 times. Example: a system of order 2 ⇒ a single switching.
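The normality test is the ordinary Kalman controllability rank check applied column by column. A minimal sketch (the two example systems are my own choices, not from the lecture):

```python
import numpy as np

def is_normal(A, B):
    """Check rank [b_i, A b_i, ..., A^{n-1} b_i] = n for every column b_i."""
    n = A.shape[0]
    for i in range(B.shape[1]):
        cols = [B[:, [i]]]
        for _ in range(n - 1):
            cols.append(A @ cols[-1])      # next block A^k b_i
        if np.linalg.matrix_rank(np.hstack(cols)) < n:
            return False
    return True

# Double integrator: normal, hence at most n - 1 = 1 switching.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Two decoupled modes with the input reaching only the first: not normal.
A2 = np.diag([1.0, 2.0])
B2 = np.array([[1.0],
               [0.0]])

print(is_normal(A, B), is_normal(A2, B2))
```

For the double integrator the test matrix [b, Ab] has full rank 2, consistent with the single-switching property stated above.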
References

H.J. Sussmann, J.C. Willems. 300 Years of Optimal Control: From the Brachystochrone to the Maximum Principle. IEEE Control Systems Magazine, June 1997.

B.D.O. Anderson, J.B. Moore. Optimal Control: Linear Quadratic Methods. Prentice-Hall International, London, 1989.

B.D.O. Anderson, J.B. Moore. Optimal Filtering. Dover Publications, New York, 2005.
P. Varaiya. Lecture Notes on Optimization. University of California, Berkeley, 1998.

C. Jordan. Cours d'Analyse de l'École Polytechnique, Tome III: Équations Différentielles. Paris, 1915.

E. Trélat. Contrôle optimal : théorie & applications. Vuibert, Paris, 2005.
HISTORICAL SURVEY Wilhelm von Leibniz (1646-1716): Leibniz s approach to mechanics was based on the use of mathematical operations with the scalar quantities of energy, as opposed to the vector quantities
More informationDirect Methods. Moritz Diehl. Optimization in Engineering Center (OPTEC) and Electrical Engineering Department (ESAT) K.U.
Direct Methods Moritz Diehl Optimization in Engineering Center (OPTEC) and Electrical Engineering Department (ESAT) K.U. Leuven Belgium Overview Direct Single Shooting Direct Collocation Direct Multiple
More informationLeibniz and the Discovery of Calculus. The introduction of calculus to the world in the seventeenth century is often associated
Leibniz and the Discovery of Calculus The introduction of calculus to the world in the seventeenth century is often associated with Isaac Newton, however on the main continent of Europe calculus would
More informationAdvanced Classical Mechanics (NS-350B) - Lecture 3. Alessandro Grelli Institute for Subatomic Physics
Advanced Classical Mechanics (NS-350B) - Lecture 3 Alessandro Grelli Institute for Subatomic Physics Outline Part I Variational problems Euler-Lagrange equation Fermat principle Brachistochrone problem
More informationORDINARY DIFFERENTIAL EQUATIONS
ORDINARY DIFFERENTIAL EQUATIONS GABRIEL NAGY Mathematics Department, Michigan State University, East Lansing, MI, 4884 NOVEMBER 9, 7 Summary This is an introduction to ordinary differential equations We
More informationOn Some Optimal Control Problems for Electric Circuits
On Some Optimal Control Problems for Electric Circuits Kristof Altmann, Simon Stingelin, and Fredi Tröltzsch October 12, 21 Abstract Some optimal control problems for linear and nonlinear ordinary differential
More informationEE C128 / ME C134 Feedback Control Systems
EE C128 / ME C134 Feedback Control Systems Lecture Additional Material Introduction to Model Predictive Control Maximilian Balandat Department of Electrical Engineering & Computer Science University of
More informationCHAPTER 2 THE MAXIMUM PRINCIPLE: CONTINUOUS TIME. Chapter2 p. 1/67
CHAPTER 2 THE MAXIMUM PRINCIPLE: CONTINUOUS TIME Chapter2 p. 1/67 THE MAXIMUM PRINCIPLE: CONTINUOUS TIME Main Purpose: Introduce the maximum principle as a necessary condition to be satisfied by any optimal
More informationCalculus of Variations
ECE 68 Midterm Exam Solution April 1, 8 1 Calculus of Variations This exam is open book and open notes You may consult additional references You may even discuss the problems (with anyone), but you must
More informationMathematical Economics. Lecture Notes (in extracts)
Prof. Dr. Frank Werner Faculty of Mathematics Institute of Mathematical Optimization (IMO) http://math.uni-magdeburg.de/ werner/math-ec-new.html Mathematical Economics Lecture Notes (in extracts) Winter
More informationLecture 6, September 1, 2017
Engineering Mathematics Fall 07 Lecture 6, September, 07 Escape Velocity Suppose we have a planet (or any large near to spherical heavenly body) of radius R and acceleration of gravity at the surface of
More informationOptimal control of culling in epidemic models for wildlife
Optimal control of culling in epidemic models for wildlife Maria Groppi, Valentina Tessoni, Luca Bolzoni, Giulio De Leo Dipartimento di Matematica, Università degli Studi di Parma Dipartimento di Scienze
More informationExtremal Trajectories for Bounded Velocity Differential Drive Robots
Extremal Trajectories for Bounded Velocity Differential Drive Robots Devin J. Balkcom Matthew T. Mason Robotics Institute and Computer Science Department Carnegie Mellon University Pittsburgh PA 523 Abstract
More informationStability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games
Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Alberto Bressan ) and Khai T. Nguyen ) *) Department of Mathematics, Penn State University **) Department of Mathematics,
More informationDuality and dynamics in Hamilton-Jacobi theory for fully convex problems of control
Duality and dynamics in Hamilton-Jacobi theory for fully convex problems of control RTyrrell Rockafellar and Peter R Wolenski Abstract This paper describes some recent results in Hamilton- Jacobi theory
More informationStochastic and Adaptive Optimal Control
Stochastic and Adaptive Optimal Control Robert Stengel Optimal Control and Estimation, MAE 546 Princeton University, 2018! Nonlinear systems with random inputs and perfect measurements! Stochastic neighboring-optimal
More information14 F Time Optimal Control
14 F Time Optimal Control The main job of an industrial robot is to move an object on a pre-specified path, rest to rest, repeatedly. To increase productivity, the robot should do its job in minimum time.
More informationChap. 3. Controlled Systems, Controllability
Chap. 3. Controlled Systems, Controllability 1. Controllability of Linear Systems 1.1. Kalman s Criterion Consider the linear system ẋ = Ax + Bu where x R n : state vector and u R m : input vector. A :
More informationBACKGROUND IN SYMPLECTIC GEOMETRY
BACKGROUND IN SYMPLECTIC GEOMETRY NILAY KUMAR Today I want to introduce some of the symplectic structure underlying classical mechanics. The key idea is actually quite old and in its various formulations
More informationME 233, UC Berkeley, Spring Background Parseval s Theorem Frequency-shaped LQ cost function Transformation to a standard LQ
ME 233, UC Berkeley, Spring 214 Xu Chen Lecture 1: LQ with Frequency Shaped Cost Function FSLQ Background Parseval s Theorem Frequency-shaped LQ cost function Transformation to a standard LQ Big picture
More informationThe Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications
MAX PLANCK INSTITUTE Elgersburg Workshop Elgersburg February 11-14, 2013 The Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications Timo Reis 1 Matthias Voigt 2 1 Department
More informationBalancing of Lossless and Passive Systems
Balancing of Lossless and Passive Systems Arjan van der Schaft Abstract Different balancing techniques are applied to lossless nonlinear systems, with open-loop balancing applied to their scattering representation.
More informationGame Theory Extra Lecture 1 (BoB)
Game Theory 2014 Extra Lecture 1 (BoB) Differential games Tools from optimal control Dynamic programming Hamilton-Jacobi-Bellman-Isaacs equation Zerosum linear quadratic games and H control Baser/Olsder,
More informationProblem 1 Cost of an Infinite Horizon LQR
THE UNIVERSITY OF TEXAS AT SAN ANTONIO EE 5243 INTRODUCTION TO CYBER-PHYSICAL SYSTEMS H O M E W O R K # 5 Ahmad F. Taha October 12, 215 Homework Instructions: 1. Type your solutions in the LATEX homework
More informationState Variable Analysis of Linear Dynamical Systems
Chapter 6 State Variable Analysis of Linear Dynamical Systems 6 Preliminaries In state variable approach, a system is represented completely by a set of differential equations that govern the evolution
More informationLecture 10: Singular Perturbations and Averaging 1
Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.243j (Fall 2003): DYNAMICS OF NONLINEAR SYSTEMS by A. Megretski Lecture 10: Singular Perturbations and
More informationECE7850 Lecture 7. Discrete Time Optimal Control and Dynamic Programming
ECE7850 Lecture 7 Discrete Time Optimal Control and Dynamic Programming Discrete Time Optimal control Problems Short Introduction to Dynamic Programming Connection to Stabilization Problems 1 DT nonlinear
More informationOPTIMAL CONTROL CHAPTER INTRODUCTION
CHAPTER 3 OPTIMAL CONTROL What is now proved was once only imagined. William Blake. 3.1 INTRODUCTION After more than three hundred years of evolution, optimal control theory has been formulated as an extension
More informationLINEAR QUADRATIC OPTIMAL CONTROL BASED ON DYNAMIC COMPENSATION. Received October 2010; revised March 2011
International Journal of Innovative Computing, Information and Control ICIC International c 22 ISSN 349-498 Volume 8, Number 5(B), May 22 pp. 3743 3754 LINEAR QUADRATIC OPTIMAL CONTROL BASED ON DYNAMIC
More informationTutorial on Control and State Constrained Optimal Control Pro. Control Problems and Applications Part 3 : Pure State Constraints
Tutorial on Control and State Constrained Optimal Control Problems and Applications Part 3 : Pure State Constraints University of Münster Institute of Computational and Applied Mathematics SADCO Summer
More informationLecture Notes for PHY 405 Classical Mechanics
Lecture Notes for PHY 45 Classical Mechanics From Thorton & Marion s Classical Mechanics Prepared by Dr. Joseph M. Hahn Saint Mary s University Department of Astronomy & Physics October 11, 25 Chapter
More informationDeterministic Optimal Control
page A1 Online Appendix A Deterministic Optimal Control As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality. Albert Einstein
More information