Chapter 5. Pontryagin's Minimum Principle (Constrained OCP)
- Samuel Payne
Pontryagin's Minimum Principle
Plant: $\dot{x}(t) = f(x(t), u(t), t)$, with the control constrained to an admissible set: $u(t) \in U$ (5-1)
PI: $J = h(x(t_f), t_f) + \int_{t_0}^{t_f} g(x(t), u(t), t)\,dt$ (5-2)
Boundary condition: $x(t_0) = x_0$. The goal is to find the optimal control $u^*(t)$.
Pontryagin's Minimum Principle
Solution: Form the Pontryagin H function (Hamiltonian):
$H(x, u, \lambda, t) = g(x, u, t) + \lambda^\top(t)\, f(x, u, t)$
To find the optimal control, we must minimize H with respect to $u(t) \in U$:
$H(x^*, u^*, \lambda^*, t) \le H(x^*, u, \lambda^*, t)$ for all $u \in U$ (5-3)
and solve the set of 2n state and costate equations:
$\dot{x}^*(t) = \partial H / \partial \lambda$, $\quad \dot{\lambda}^*(t) = -\partial H / \partial x$ (5-4)
Pontryagin's Minimum Principle
With boundary (transversality) condition:
$\left[ H + \dfrac{\partial h}{\partial t} \right]_{t_f} \delta t_f + \left[ \dfrac{\partial h}{\partial x} - \lambda^* \right]_{t_f}^\top \delta x_f = 0$ (5-5)
Note:
1- Eq. (5-3) is valid for both constrained and unconstrained control systems.
2- The optimality conditions given in (5-3) to (5-5) are necessary conditions for optimality.
3- A sufficient condition for the unconstrained case is that $\partial^2 H / \partial u^2$ must be positive definite.
Additional Results:
1. If $t_f$ is fixed and H does not depend on time t explicitly, then the Hamiltonian H must be constant when evaluated along the optimal trajectory:
$H(x^*(t), u^*(t), \lambda^*(t)) = c$ for $t \in [t_0, t_f]$ (5-6)
2. If the final time $t_f$ is free and the Hamiltonian does not depend explicitly on time t, then the Hamiltonian must be identically zero when evaluated along the optimal trajectory; that is,
$H(x^*(t), u^*(t), \lambda^*(t)) = 0$ for $t \in [t_0, t_f]$ (5-7)
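The claims (5-6) and (5-7) follow by differentiating H along the optimal trajectory; a short standard derivation, added here for completeness (not from the slides):

```latex
\frac{dH}{dt}
= H_x^{\top}\dot{x}^* + H_u^{\top}\dot{u}^* + H_\lambda^{\top}\dot{\lambda}^* + H_t
= H_x^{\top} H_\lambda + 0 - H_\lambda^{\top} H_x + H_t
= \frac{\partial H}{\partial t}
```

Here $\dot{x}^* = H_\lambda$ and $\dot{\lambda}^* = -H_x$ cancel, and the $H_u$ term contributes nothing along the minimizing control (for interior minima $H_u = 0$; on the boundary of U the minimum condition (5-3) plays the same role). Hence, if $\partial H/\partial t = 0$, H is constant, which is (5-6); when $t_f$ is also free, the transversality condition forces $H(t_f) = 0$, giving (5-7).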
Constrained Time Optimal Control (TOC) of LTI systems
Plant: $\dot{x}(t) = A\,x(t) + B\,u(t)$ (5-8)
x(t): n-state vector, u(t): r-input vector.
Assumptions:
1. The system is completely controllable; i.e., the rank of the following matrix is n:
$[B \;\; AB \;\; A^2 B \;\; \cdots \;\; A^{n-1} B]$ (5-9)
2. Constraints on the control: the control magnitudes are bounded (5-10)
Constrained Time Optimal Control (TOC) of LTI systems
or componentwise: $|u_j(t)| \le U_j$, $j = 1, \dots, r$ (5-11)
By absorbing the bounds $U_j$ into the matrix B, the componentwise constraint becomes $|u_j(t)| \le 1$, $j = 1, \dots, r$.
3. Boundary condition: $t_0 = 0$, $t_f$ is free; $x(0) = x_0$, $x(t_f) = 0$ (5-12)
Constrained Time Optimal Control (TOC) of LTI systems
Problem: Find the (optimal) control u*(t) which satisfies the control constraint (5-11) and drives the system (5-8) from the initial state $x(t_0)$ to the origin 0 in minimum time.
Solution:
Step 1: Performance index: $J = \int_{t_0}^{t_f} 1 \, dt = t_f - t_0$ (5-13)
Step 2: Hamiltonian: $H = 1 + \lambda^\top(t)\,[A\,x(t) + B\,u(t)]$ (5-14)
Constrained Time Optimal Control (TOC) of LTI systems
Step 3: State and costate equations:
$\dot{x}^*(t) = A\,x^*(t) + B\,u^*(t)$, $\quad \dot{\lambda}^*(t) = -A^\top \lambda^*(t)$ (5-15)
Boundary conditions: $x^*(t_0) = x_0$, $x^*(t_f) = 0$ ($t_f$ is free) (5-16)
Step 4: Optimality condition (by minimizing H w.r.t. u(t)):
$\lambda^{*\top}(t)\, B\, u^*(t) \le \lambda^{*\top}(t)\, B\, u(t)$ for all admissible u(t) (5-17)
Constrained Time Optimal Control (TOC) of LTI systems
where $q^*(t) = B^\top \lambda^*(t)$ (5-18)
Step 5: Optimal control. For each component j:
If $q_j^*(t) > 0$: $\min_{|u_j| \le 1}\, u_j(t)\, q_j^*(t)$ is attained at $u_j^*(t) = -1$.
If $q_j^*(t) < 0$: $\min_{|u_j| \le 1}\, u_j(t)\, q_j^*(t)$ is attained at $u_j^*(t) = +1$.
Constrained Time Optimal Control (TOC) of LTI systems
$u^*(t) = -\mathrm{SGN}\{q^*(t)\} = -\mathrm{SGN}\{B^\top \lambda^*(t)\}$ (5-19)
** Signum function: $\mathrm{sgn}(f) = +1$ if $f > 0$; $-1$ if $f < 0$; undetermined if $f = 0$.
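As a concrete illustration of (5-19), a minimal sketch in Python (the function names and the example costate values are mine, not the slides'):

```python
# Hedged sketch of the componentwise bang-bang law (5-19): u_j = -sgn(q_j),
# where q(t) = B^T lambda(t). Names and numbers are illustrative.

def sgn(v):
    # signum; the value at 0 is returned as 0 here, although (5-19)
    # leaves it undetermined
    return (v > 0) - (v < 0)

def bang_bang(B, lam):
    """u*_j = -sgn(b_j^T lambda) for each column b_j of B (|u_j| <= 1)."""
    n, r = len(B), len(B[0])                     # B is n x r (list of rows)
    q = [sum(B[i][j] * lam[i] for i in range(n)) for j in range(r)]  # B^T lam
    return [-sgn(qj) for qj in q]

# double-integrator B = [[0],[1]]: u* = -sgn(lambda_2)
print(bang_bang([[0], [1]], [3.0, -2.0]))   # lambda_2 < 0, so u = +1
```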
Constrained Time Optimal Control (TOC) of LTI systems
In componentwise form: $u_j^*(t) = -\mathrm{sgn}\{q_j^*(t)\} = -\mathrm{sgn}\{b_j^\top \lambda^*(t)\}$, $j = 1, \dots, r$, where $b_j$ is the jth column of B.
Constrained Time Optimal Control (TOC) of LTI systems
Step 6: Types of TOC
1) Normal TOC (NTOC): suppose that during $[t_0, t_f]$ each $q_j^*(t)$ vanishes only at a finite set of isolated times $t_1, t_2, \ldots$
Constrained Time Optimal Control (TOC) of LTI systems
u*(t) is a piecewise constant function with simple switchings at $t_1, t_2, t_3, t_4$. Thus the optimal control $u_j^*(t)$ switches four times (number of switchings = 4).
Constrained Time Optimal Control (TOC) of LTI systems
2) Singular TOC (STOC): suppose that during $[t_0, t_f]$ there are one or more subintervals $[T_1, T_2]$ on which $q_j^*(t) = 0$ identically ($[T_1, T_2]$ is the so-called singular interval).
Constrained Time Optimal Control (TOC) of LTI systems
Step 7: Bang-bang control law. For NTOC: $u^*(t) = -\mathrm{SGN}\{B^\top \lambda^*(t)\}$.
Step 8: Conditions for NTOC systems: the problem is normal if, for each j, the matrix $[b_j \;\; A b_j \;\; \cdots \;\; A^{n-1} b_j]$ is nonsingular.
Constrained Time Optimal Control (TOC) of LTI systems
Step 9: Uniqueness of the optimal control: if the system is NTOC, then the time-optimal control is unique.
Step 10: Number of switchings: if the eigenvalues of A are all real and the TOC is normal, each control component switches at most $n - 1$ times.
Example: Double Integrator
System: $m\,\ddot{x}(t) = f(t)$. Define $u(t) = f(t)/m$ and the states $x_1 = x$, $x_2 = \dot{x}$:
$\dot{x}_1(t) = x_2(t)$, $\quad \dot{x}_2(t) = u(t)$
Assume $|u(t)| \le 1$ for $t \in [t_0, t_f]$.
Problem: Find the admissible optimal control that forces the system from any initial state x(0) to the origin in minimum time.
Solution: Both eigenvalues of A are zero (real), so we are dealing with an NTOC problem.
Step 1: $J = \int_{t_0}^{t_f} 1 \, dt$
Step 2: $H = 1 + \lambda_1(t)\, x_2(t) + \lambda_2(t)\, u(t)$
Example: Double Integrator
Step 3: Minimization of the Hamiltonian: $u^*(t) = -\mathrm{sgn}\{\lambda_2^*(t)\}$
Step 4: Costate solution: $\dot{\lambda}_1^* = 0 \Rightarrow \lambda_1^*(t) = \lambda_1^*(0)$; $\dot{\lambda}_2^* = -\lambda_1^* \Rightarrow \lambda_2^*(t) = \lambda_2^*(0) - \lambda_1^*(0)\,t$, i.e., linear in t.
Example: Double Integrator
Step 5: Time-optimal control sequences. Since $\lambda_2^*(t)$ is linear in t, it changes sign at most once; with n = 2, the preceding theorem gives at most $n - 1 = 1$ switching. There are therefore four possible control sequences that satisfy $u^*(t) = -\mathrm{sgn}\{\lambda_2^*(t)\}$:
$\{+1\}$, $\{-1\}$, $\{+1, -1\}$, $\{-1, +1\}$
Note that sequences like $\{+1, -1, +1\}$ cannot be in the above group, since they violate the theorem.
Example: Double Integrator
Step 6: State trajectories. With $u = \Delta = \pm 1$:
$x_2(t) = x_2(0) + \Delta t$, $\quad x_1(t) = x_1(0) + x_2(0)\,t + \tfrac{1}{2}\Delta t^2$
For phase plots, we need to eliminate t:
If $\Delta = +1$: $x_1 = \tfrac{1}{2} x_2^2 + c_1$, where $c_1 = x_1(0) - \tfrac{1}{2} x_2^2(0)$
If $\Delta = -1$: $x_1 = -\tfrac{1}{2} x_2^2 + c_2$, where $c_2 = x_1(0) + \tfrac{1}{2} x_2^2(0)$
A family of parabolas.
Example: Double Integrator
Our aim is to drive the system from any initial state $(x_1(0), x_2(0))$ to the origin (0, 0) in minimum time. At $t = t_f$: $x_1(t_f) = 0$, $x_2(t_f) = 0$.
Example: Double Integrator
Rewriting this for any initial state $x_1 = x_{10}$, $x_2 = x_{20}$.
Step 7: Switch curve. In the figure, two curves $\gamma_+$ and $\gamma_-$ transfer initial states $(x_1, x_2)$ to the origin (0, 0): $\gamma_+$ is the locus of points brought to the origin by $u = +1$ ($x_1 = \tfrac{1}{2} x_2^2$, $x_2 \le 0$), and $\gamma_-$ by $u = -1$ ($x_1 = -\tfrac{1}{2} x_2^2$, $x_2 \ge 0$).
Example: Double Integrator
Switch curve: $\gamma = \gamma_+ \cup \gamma_-$: $\; x_1 = -\tfrac{1}{2}\, x_2 |x_2|$
Example: Double Integrator
Step 8: Phase plane regions: defining the regions in which we need to apply u = +1 or u = -1:
$R_+ = \{(x_1, x_2) : x_1 < -\tfrac{1}{2} x_2 |x_2|\}$ (below the switch curve), $\quad R_- = \{(x_1, x_2) : x_1 > -\tfrac{1}{2} x_2 |x_2|\}$ (above it).
Example: Double Integrator
Step 9: Control law:
$u^*(t) = +1$ if $x_1 < -\tfrac{1}{2} x_2 |x_2|$ (below the switch curve) or on $\gamma_+$
$u^*(t) = -1$ if $x_1 > -\tfrac{1}{2} x_2 |x_2|$ (above the switch curve) or on $\gamma_-$
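This feedback law can be checked numerically; a minimal forward-Euler simulation (the step size, horizon, and starting state (2, 1) are my choices, not the slides'):

```python
# Minimal sketch of the double-integrator bang-bang feedback law,
# using the switch function s = x1 + 0.5*x2*|x2|: u = -1 above the
# switch curve (s > 0), u = +1 below it (s < 0).

def toc_control(x1, x2):
    s = x1 + 0.5 * x2 * abs(x2)
    if s > 0:
        return -1.0
    if s < 0:
        return 1.0
    return -1.0 if x2 > 0 else 1.0   # on the curve: slide along it to 0

def simulate(x1, x2, dt=1e-3, T=6.0):
    """Forward-Euler integration of x1' = x2, x2' = u under the TOC law."""
    t = 0.0
    while t < T:
        u = toc_control(x1, x2)
        x1 += x2 * dt
        x2 += u * dt
        t += dt
    return x1, x2

x1f, x2f = simulate(2.0, 1.0)
print(abs(x1f) < 0.1 and abs(x2f) < 0.1)   # state driven near the origin
```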
Example: Double Integrator
Step 10: Implementation of the control law.
Step 11: Minimum time.
Chapter 6. Dynamic Programming
Different types of Dynamic Programming (DP): continuous DP and discrete DP.
Continuous DP
Hamilton-Jacobi-Bellman (HJB) Equation
The HJB equation is the continuous analog of the discrete DP algorithm, and it yields a closed-loop optimal control system.
Plant: $\dot{x}(t) = f(x(t), u(t), t)$
PI: $J = h(x(t_f), t_f) + \int_{t_0}^{t_f} g(x(t), u(t), t)\,dt$, $t_f$ is fixed.
We attempt to determine an admissible control u(t) that minimizes J.
Solution:
Step 1: Define the Hamiltonian: $H = g(x, u, t) + J_x^{*\top}(x, t)\, f(x, u, t)$, where $J_x^* = \partial J^* / \partial x$.
Hamilton-Jacobi-Bellman (HJB) Equation
Step 2: Minimize H w.r.t. u(t): $u^*(t) = u^*(x, J_x^*, t)$
Step 3: Using the above equation, we can find $H^* = H(x, u^*(x, J_x^*, t), J_x^*, t)$.
Step 4: To obtain the optimal control, the following condition (the HJB equation) must be satisfied:
$0 = J_t^*(x, t) + H^*(x, u^*(x, J_x^*, t), J_x^*, t)$
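Collecting steps 1-4, the HJB equation can be written compactly (standard form, consistent with the steps above):

```latex
0 = J^*_t(x,t) + \min_{u \in U}\Big[\, g(x,u,t) + J^{*\top}_x(x,t)\, f(x,u,t) \Big],
\qquad J^*\big(x(t_f), t_f\big) = h\big(x(t_f), t_f\big).
```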
Hamilton-Jacobi-Bellman (HJB) Equation
Step 5: Use the solution J* from step 4 to evaluate $\partial J^*/\partial x$ and substitute it into the expression for u*(t) of step 2 to obtain the optimal control.
Example:
40 Hamilton-Jacobi-Bellman(HJB) Equation Form Optimal Hamiltonian: HJB equation: With boundary condition: 40
Hamilton-Jacobi-Bellman (HJB) Equation
One way to solve the HJB equation is to guess a form for the solution. Assume: (**)
Boundary condition:
From (**), the optimal control becomes: (##)
Substituting (##) into the HJB equation:
Hamilton-Jacobi-Bellman (HJB) Equation
Step 5: Optimal control: $u^*(t) = -p(t)\, x(t)$ (using $\partial J^*/\partial x$).
Also, in step 4, as $t_f \to \infty$: $u^*(t) = -(\sqrt{5} + 2)\, x(t)$
Substituting into the system equation: $\dot{x}(t) = 2x(t) - (\sqrt{5} + 2)\,x(t) = -\sqrt{5}\, x(t)$, so the closed-loop system is stable.
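A numerical check of the closed-loop result, assuming the plant $\dot{x} = 2x + u$ and integrand $x^2 + u^2$ (an assumption: the slide's own equations did not survive transcription, but these choices reproduce a gain of $\sqrt{5} + 2$). With the guess $J^* = p(t)x^2$, the HJB equation reduces to the scalar Riccati ODE $\dot{p} = p^2 - 4p - 1$, $p(t_f) = 0$:

```python
import math

# Hedged sketch: backward-integrate the scalar Riccati ODE arising from the
# HJB guess J*(x,t) = p(t) x^2, ASSUMING plant xdot = 2x + u and integrand
# x^2 + u^2. Then u*(t) = -p(t) x(t), and as tf -> infinity p -> 2 + sqrt(5).

def riccati_steady(tf=10.0, dt=1e-3):
    p = 0.0                                   # boundary condition p(tf) = 0
    for _ in range(int(tf / dt)):             # integrate backward in time
        p -= (p * p - 4.0 * p - 1.0) * dt
    return p

p_inf = riccati_steady()
print(abs(p_inf - (2 + math.sqrt(5))) < 1e-3)   # steady-state gain
```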
LQR Systems Using the HJB Equation
System: $\dot{x}(t) = A\,x(t) + B\,u(t)$
PI: $J = \tfrac{1}{2} x^\top(t_f) H\, x(t_f) + \tfrac{1}{2}\int_{t_0}^{t_f} \big[x^\top Q x + u^\top R u\big]\,dt$
Step 1: Form the Hamiltonian: $\mathcal{H} = \tfrac{1}{2}\big(x^\top Q x + u^\top R u\big) + J_x^{*\top}\,(A x + B u)$
Step 2: Minimize $\mathcal{H}$ w.r.t. u(t): $u^*(t) = -R^{-1} B^\top J_x^*$
LQR Systems Using the HJB Equation
Step 3: Optimal Hamiltonian: $\mathcal{H}^* = \tfrac{1}{2} x^\top Q x - \tfrac{1}{2} J_x^{*\top} B R^{-1} B^\top J_x^* + J_x^{*\top} A x$
Step 4: HJB equation: $0 = J_t^* + \mathcal{H}^*$
LQR Systems Using the HJB Equation
To solve the HJB equation, let us guess the solution $J^* = \tfrac{1}{2}\, x^\top P(t)\, x$, where P(t) is a positive definite matrix function.
After substituting into the HJB equation:
$-\dot{P}(t) = P(t) A + A^\top P(t) - P(t) B R^{-1} B^\top P(t) + Q$
With boundary condition: $P(t_f) = H$
LQR Systems Using the HJB Equation
Step 5: Optimal control: $u^*(t) = -R^{-1} B^\top P(t)\, x(t)$
Exercise: Using the above algorithm, solve the LQR problem for the following system:
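The same construction can be checked numerically; a hedged sketch that backward-integrates the Riccati equation for a double integrator with Q = I, R = 1, H = 0 (my illustrative data, since the exercise's system was not transcribed):

```python
# Backward Euler on -Pdot = A'P + PA - P B B' P + Q (R = 1), P(tf) = 0,
# then K = B' P so that u* = -K x. Pure-Python 2x2 matrix helpers.

def mat_add(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

def lqr_gain(A, B, Q, tf=20.0, dt=1e-3):
    n = len(A)
    P = [[0.0] * n for _ in range(n)]            # P(tf) = 0 (H = 0)
    for _ in range(int(tf / dt)):                # integrate backward in time
        PB = mat_mul(P, B)                       # n x 1
        lin = mat_add(mat_add(mat_mul(transpose(A), P), mat_mul(P, A)), Q)
        quad = mat_mul(PB, transpose(PB))        # P B B' P
        P = [[P[i][j] + (lin[i][j] - quad[i][j]) * dt for j in range(n)]
             for i in range(n)]
    return mat_mul(transpose(B), P)[0]           # K = R^{-1} B' P with R = 1

K = lqr_gain([[0.0, 1.0], [0.0, 0.0]], [[0.0], [1.0]],
             [[1.0, 0.0], [0.0, 1.0]])
print(abs(K[0] - 1.0) < 1e-2, abs(K[1] - 3 ** 0.5) < 1e-2)
```

For this data the analytic steady-state gain is K = [1, sqrt(3)], which the backward integration approaches over a long horizon.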
Discrete DP
DP, proposed by Bellman, is based on a simple concept called the Principle of Optimality.
Principle of Optimality
Consider a multiple-decision optimization problem with a path from A to B through an intermediate point C.
$J_{AC}$: cost function for the segment AC; $J_{CB}$: cost function for the segment CB.
Cost for the entire segment AB: $J_{AB} = J_{AC} + J_{CB}$
If $J_{AC}$ is the optimal cost of segment AC on the entire optimal path AB, then $J_{CB}$ is the optimal cost of the remaining segment CB. In other words, one can break the total optimal path into smaller segments which are themselves optimal.
Principle of Optimality
Principle of Optimality: An optimal policy has the property that, whatever the previous state and decision (i.e., control) were, the remaining decisions must constitute an optimal policy with regard to the state resulting from the previous decision.
For better understanding, consider a multiple-decision process, for example an aircraft routing network.
Dynamic Programming Applied to a Routing Problem
Problem: Find the most economical route to fly from city A to city B. At each node (state) a decision (control) is made.
Dynamic Programming Applied to a Routing Problem
Let the decision (control) be: u = +1, move up and left; u = -1, move down and right.
Here we have 5 stages, from k = 0 to k = N = 4. It looks natural to start working backward from the final stage or point.
Stage 5 (k = k_f = N = 4): this is just the starting point of the backward recursion; there is only one city, B, and hence there is no cost involved.
Stage 4 (k = 3): two cities in this stage, H and I; we need to find the most economical route from this stage to stage 5:
H to B: cost 2; I to B: cost 3.
Dynamic Programming Applied to a Routing Problem
Stage 3 (k = 2): from E, F, G we can fly to H, I:
E to H to B: total cost = 4 + 2 = 6
F to H to B: total cost = 3 + 2 = 5
F to I to B: total cost = 5 + 3 = 8
G to I to B: total cost = 9
The optimal cost to fly from F is 5.
Stage 2 (k = 1): similarly, the possible paths are: from C to B, total cost = 9; from D to B, total cost = 7.
Dynamic Programming Applied to a Routing Problem
Stage 1 (k = 0): from A to C to B, total cost = 11; from A to D to B, total cost = 11.
Optimal solution: A, C, F, H, B is the most economical route; minimum cost = 11.
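The backward recursion above can be sketched in code. The node names mirror the slide's network, but the arc costs are illustrative reconstructions chosen to reproduce the quoted stage costs (9 and 7 from C and D, minimum 11 from A); the figure's own costs were only partly transcribed:

```python
# Backward dynamic programming on a stage-wise routing network.
# arcs: {node: [(next_node, arc_cost), ...]}; costs-to-go filled from B.

def min_cost_routes(arcs, target):
    J = {target: 0.0}                  # cost-to-go, built backward
    policy = {}
    pending = dict(arcs)
    while pending:                     # process nodes once successors known
        for node, succ in list(pending.items()):
            if all(nxt in J for nxt, _ in succ):
                best = min(succ, key=lambda e: e[1] + J[e[0]])
                policy[node] = best[0]
                J[node] = best[1] + J[best[0]]
                del pending[node]
    return J, policy

arcs = {
    "A": [("C", 2), ("D", 4)],
    "C": [("F", 4)], "D": [("F", 2), ("G", 6)],
    "F": [("H", 3), ("I", 5)], "G": [("I", 6)],
    "H": [("B", 2)], "I": [("B", 3)],
}
J, policy = min_cost_routes(arcs, "B")
print(J["A"])   # minimum cost from A to B -> 11.0
```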
Dynamic Programming On OCP
Example. State and control constraints; PI.
Problem: Drive the system so that the final state is close to zero while consuming minimum control effort.
Solution. First, discretization: divide the interval [0, T] into N equal segments of length $\Delta t = T/N$, with grid points $0, \Delta t, 2\Delta t, \ldots, N\Delta t = T$.
Dynamic Programming On OCP
Let $\Delta t$ be small enough that the control signal can be approximated by a piecewise-constant function that changes only at the instants $t = k\Delta t$. Then $x(k\Delta t)$ is referred to as the kth value of x and is denoted by x(k).
Dynamic Programming On OCP
In a similar way, $u(k\Delta t)$ is denoted u(k). Let a = 0, b = 1, λ = 2, T = 2 and $\Delta t = 1$ (so N = 2); the controls to be determined are u(0) and u(1).
Dynamic Programming On OCP
The first step is to find the optimal policy for the last stage of operation, trying all of the allowable control values at each of the allowable state values. The optimal control for each state value is the one which yields the trajectory having the minimum cost. To limit the required number of calculations, x and u must be quantized.
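A minimal sketch of this quantized backward recursion, assuming the cost functional $J = x(2)^2 + \lambda \sum_k u(k)^2$ with λ = 2 (an assumption: the slide's PI was not transcribed, but this choice reproduces the numbers quoted later, u* = {-0.5, -0.5} and J* = 1.25 from x(0) = 1.5):

```python
# Discrete DP over quantized states/controls for x(k+1) = x(k) + u(k)
# (a = 0, b = 1, dt = 1), ASSUMED cost J = x(2)^2 + LAM * sum u(k)^2.
# Controls keeping the next state off the grid are simply excluded.

LAM = 2.0
U_GRID = [-1.0, -0.5, 0.0, 0.5, 1.0]
X_GRID = [-1.0, -0.5, 0.0, 0.5, 1.0]

def dp_solve(x0, N=2):
    J = {x: x * x for x in X_GRID}               # terminal cost x^2
    tables = []
    for _ in range(N - 1):                       # stages k = N-1, ..., 1
        Jn, un = {}, {}
        for x in X_GRID:
            cands = [(LAM * u * u + J[x + u], u)
                     for u in U_GRID if (x + u) in J]
            Jn[x], un[x] = min(cands)
        tables.append((Jn, un))
        J = Jn
    # stage k = 0 from the (possibly off-grid) initial state
    cost, u0 = min((LAM * u * u + J[x0 + u], u)
                   for u in U_GRID if (x0 + u) in J)
    return cost, u0, tables

Jstar, u0, tables = dp_solve(1.5)
print(Jstar, u0)   # 1.25 -0.5
```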
Dynamic Programming On OCP
k = 1: $J_{12}$ is the cost of going from state x(1) to x(2).
Dynamic Programming On OCP
k = 0: $J_{01}$ is the cost of going from state x(0) to x(1). The candidate cost of a control u(0) is
$C(x(0), u(0)) = J_{01}(x(0), u(0)) + J_{12}^*(x(1))$
and in the optimal case
$J_{02}^*(x(0)) = \min_{u(0)} \big[ J_{01}(x(0), u(0)) + J_{12}^*(x(1)) \big]$,
which also yields $u^*(x(0), 0)$.
Dynamic Programming On OCP
For example, let x(0) = 1.5. Then $u^*(1.5, 0) = -0.5$ and $J_{02}^*(1.5) = 1.25$.
Using $u^*(1.5, 0) = -0.5$ at x(0) = 1.5 gives x(1) = 1, and from the first table $u^*(1, 1) = -0.5$.
Optimal control sequence: {-0.5, -0.5}; minimum cost = 1.25.
Interpolation
Suppose the trial values of u(k) are -1, -0.75, -0.5, -0.25, 0, 0.25, 0.5, 0.75, 1, and suppose x(0) = 1.5. Some of the resulting points x(1) do not coincide with the grid points (-1, -0.5, 0, 0.5, 1), so $J_{12}^*(x(1))$ and $u^*(x(1), 1)$ are obtained by linear interpolation.
Interpolation
The interpolated values are shown in the following table.
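A sketch of the linear interpolation step (the grid is the slide's, but the tabulated cost values and the `interp` helper are illustrative, not from the slides):

```python
# Piecewise-linear interpolation of a tabulated cost-to-go over a sorted
# grid, used when x(1) = x(0) + u(0) falls between grid points.

def interp(xq, grid, values):
    for lo, hi in zip(grid, grid[1:]):
        if lo <= xq <= hi:
            w = (xq - lo) / (hi - lo)            # weight of the upper node
            return (1 - w) * values[lo] + w * values[hi]
    raise ValueError("query point outside grid")

grid = [-1.0, -0.5, 0.0, 0.5, 1.0]
J12 = {x: x * x for x in grid}                   # illustrative cost values
print(interp(0.75, grid, J12))                   # between 0.5 and 1.0 -> 0.625
```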
The END. Good Luck!
More informationHomework Solution # 3
ECSE 644 Optimal Control Feb, 4 Due: Feb 17, 4 (Tuesday) Homework Solution # 3 1 (5%) Consider the discrete nonlinear control system in Homework # For the optimal control and trajectory that you have found
More informationReview of Optimization Methods
Review of Optimization Methods Prof. Manuela Pedio 20550 Quantitative Methods for Finance August 2018 Outline of the Course Lectures 1 and 2 (3 hours, in class): Linear and non-linear functions on Limits,
More informationminimize x subject to (x 2)(x 4) u,
Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for
More informationEE C128 / ME C134 Feedback Control Systems
EE C128 / ME C134 Feedback Control Systems Lecture Additional Material Introduction to Model Predictive Control Maximilian Balandat Department of Electrical Engineering & Computer Science University of
More informationExam. 135 minutes, 15 minutes reading time
Exam August 6, 208 Control Systems II (5-0590-00) Dr. Jacopo Tani Exam Exam Duration: 35 minutes, 5 minutes reading time Number of Problems: 35 Number of Points: 47 Permitted aids: 0 pages (5 sheets) A4.
More informationMaximum Process Problems in Optimal Control Theory
J. Appl. Math. Stochastic Anal. Vol. 25, No., 25, (77-88) Research Report No. 423, 2, Dept. Theoret. Statist. Aarhus (2 pp) Maximum Process Problems in Optimal Control Theory GORAN PESKIR 3 Given a standard
More information1. Type your solutions. This homework is mainly a programming assignment.
THE UNIVERSITY OF TEXAS AT SAN ANTONIO EE 5243 INTRODUCTION TO CYBER-PHYSICAL SYSTEMS H O M E W O R K S # 6 + 7 Ahmad F. Taha October 22, 2015 READ Homework Instructions: 1. Type your solutions. This homework
More informationOPTIMAL SPACECRAF1 ROTATIONAL MANEUVERS
STUDIES IN ASTRONAUTICS 3 OPTIMAL SPACECRAF1 ROTATIONAL MANEUVERS JOHNL.JUNKINS Texas A&M University, College Station, Texas, U.S.A. and JAMES D.TURNER Cambridge Research, Division of PRA, Inc., Cambridge,
More informationElements of Optimal Control Theory Pontryagin s Principle
Elements of Optimal Control Theory Pontryagin s Principle STATEMENT OF THE CONTROL PROBLEM Given a system of ODEs (x 1,...,x n are state variables, u is the control variable) 8 dx 1 = f 1 (x 1,x 2,,x n
More informationUCLA Chemical Engineering. Process & Control Systems Engineering Laboratory
Constrained Innite-Time Nonlinear Quadratic Optimal Control V. Manousiouthakis D. Chmielewski Chemical Engineering Department UCLA 1998 AIChE Annual Meeting Outline Unconstrained Innite-Time Nonlinear
More informationMA 323 Geometric Modelling Course Notes: Day 07 Parabolic Arcs
MA 323 Geometric Modelling Course Notes: Day 07 Parabolic Arcs David L. Finn December 9th, 2004 We now start considering the basic curve elements to be used throughout this course; polynomial curves and
More informationContinuous Time Finance
Continuous Time Finance Lisbon 2013 Tomas Björk Stockholm School of Economics Tomas Björk, 2013 Contents Stochastic Calculus (Ch 4-5). Black-Scholes (Ch 6-7. Completeness and hedging (Ch 8-9. The martingale
More informationarxiv: v1 [q-fin.mf] 5 Jul 2016
arxiv:607.037v [q-fin.mf] 5 Jul 206 Dynamic optimization and its relation to classical and quantum constrained systems. Mauricio Contreras, Rely Pellicer and Marcelo Villena. July 6, 206 We study the structure
More informationIntroduction to optimal control theory in continuos time (with economic applications) Salvatore Federico
Introduction to optimal control theory in continuos time (with economic applications) Salvatore Federico June 26, 2017 2 Contents 1 Introduction to optimal control problems in continuous time 5 1.1 From
More informationLinear Quadratic Optimal Control Topics
Linear Quadratic Optimal Control Topics Finite time LQR problem for time varying systems Open loop solution via Lagrange multiplier Closed loop solution Dynamic programming (DP) principle Cost-to-go function
More informationReal Time Stochastic Control and Decision Making: From theory to algorithms and applications
Real Time Stochastic Control and Decision Making: From theory to algorithms and applications Evangelos A. Theodorou Autonomous Control and Decision Systems Lab Challenges in control Uncertainty Stochastic
More informationNumerical Optimal Control (preliminary and incomplete draft) Moritz Diehl and Sébastien Gros
Numerical Optimal Control (preliminary and incomplete draft) Moritz Diehl and Sébastien Gros May 17, 2017 (Ch. 1-4 nearly ready for study by NOCSE students) Contents Preface page v 1 Introduction: Dynamic
More informationEE221A Linear System Theory Final Exam
EE221A Linear System Theory Final Exam Professor C. Tomlin Department of Electrical Engineering and Computer Sciences, UC Berkeley Fall 2016 12/16/16, 8-11am Your answers must be supported by analysis,
More informationSuboptimal feedback control of PDEs by solving HJB equations on adaptive sparse grids
Wegelerstraße 6 53115 Bonn Germany phone +49 228 73-3427 fax +49 228 73-7527 www.ins.uni-bonn.de Jochen Garcke and Axel Kröner Suboptimal feedback control of PDEs by solving HJB equations on adaptive sparse
More informationConstrained Optimal Control I
Optimal Control, Guidance and Estimation Lecture 34 Constrained Optimal Control I Pro. Radhakant Padhi Dept. o Aerospace Engineering Indian Institute o Science - Bangalore opics Motivation Brie Summary
More informationNumerical Methods for Constrained Optimal Control Problems
Numerical Methods for Constrained Optimal Control Problems Hartono Hartono A Thesis submitted for the degree of Doctor of Philosophy School of Mathematics and Statistics University of Western Australia
More informationSome notes on Chapter 8: Polynomial and Piecewise-polynomial Interpolation
Some notes on Chapter 8: Polynomial and Piecewise-polynomial Interpolation See your notes. 1. Lagrange Interpolation (8.2) 1 2. Newton Interpolation (8.3) different form of the same polynomial as Lagrange
More informationMATH4406 (Control Theory) Unit 1: Introduction to Control Prepared by Yoni Nazarathy, August 11, 2014
MATH4406 (Control Theory) Unit 1: Introduction to Control Prepared by Yoni Nazarathy, August 11, 2014 Unit Outline The many faces of Control Theory A bit of jargon, history and applications Focus on inherently
More informationMinimum Time Control of A Second-Order System
49th IEEE Conference on Decision and Control December 5-7, 2 Hilton Atlanta Hotel, Atlanta, GA, USA Minimum Time Control of A Second-Order System Zhaolong Shen and Sean B. Andersson Department of Mechanical
More information