
Optimal Control, Lectures 17-18: Problem Formulation
Benoît Chachuat <benoit@mcmaster.ca>
Department of Chemical Engineering, McMaster University, Spring 2009

Objective
Determine the control signals that will cause a controlled system to satisfy the physical constraints and, at the same time, minimize (or maximize) some performance criterion.

Main Steps of Problem Formulation:
1. Specification/modeling of the controlled system: admissible controls; response and dynamical model
2. Specification of a performance criterion: Lagrange, Mayer and Bolza forms
3. Specification of the physical constraints to be satisfied: path and terminal constraints

Controlled System
A controlled system is characterized by:
- State variables x = [x_1 ... x_{n_x}]^T, n_x ≥ 1, describing its internal behavior (phase space); e.g., coordinates, velocities, concentrations, flow rates, etc.
- Control variables u = [u_1 ... u_{n_u}]^T, n_u ≥ 1, describing the controller positions (control space); e.g., force, voltage, temperature, etc.

Model Development
Objective: Develop the simplest model that accurately predicts the system behavior for all foreseeable control decisions. The model is typically in the form of ordinary differential equations (ODEs),
    ẋ(t) = f(t, x(t), u(t)),
with specified initial conditions, x(t_0) = x_0.

System Response
A solution x(t; t_0, x_0, u(·)) of the differential equations is called a response to the control u(·) for the initial conditions x_0 at t_0.
Important questions: Does a response exist for a given control and initial conditions? Is this response unique?
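To make the notion of a response concrete, here is a minimal numerical sketch (illustrative, not from the slides): it integrates ẋ(t) = f(t, x(t), u(t)) with the classical fourth-order Runge-Kutta scheme and evaluates the response of a double integrator (unit-mass cart: position and speed) to a constant admissible control. All function names are hypothetical.

```python
def rk4_response(f, u, x0, t0, tf, n_steps):
    """Approximate the response x(t; t0, x0, u(.)) of xdot = f(t, x, u(t))
    on [t0, tf] with the classical 4th-order Runge-Kutta scheme."""
    h = (tf - t0) / n_steps
    t, x = t0, list(x0)
    traj = [(t, list(x))]
    for _ in range(n_steps):
        k1 = f(t, x, u(t))
        k2 = f(t + h/2, [xi + h/2*ki for xi, ki in zip(x, k1)], u(t + h/2))
        k3 = f(t + h/2, [xi + h/2*ki for xi, ki in zip(x, k2)], u(t + h/2))
        k4 = f(t + h, [xi + h*ki for xi, ki in zip(x, k3)], u(t + h))
        x = [xi + h/6*(a + 2*b + 2*c + d)
             for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
        t += h
        traj.append((t, list(x)))
    return traj

# Double integrator: states x = [position, speed]; the control is a force
f = lambda t, x, v: [x[1], v]
u = lambda t: 1.0                      # constant admissible control
traj = rk4_response(f, u, [0.0, 0.0], 0.0, 1.0, 100)
# exact response for this control: p(t) = t**2/2, v(t) = t
```

For this system the response exists and is unique for any piecewise continuous control, since f is linear in x and u.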

Admissible Controls
Restriction to a certain control region, u(t) ∈ U ⊆ ℝ^{n_u}; e.g., U = {u ∈ ℝ^{n_u} : |u_j| ≤ 1, ∀j}, or U = {u ∈ ℝ^{n_u} : g(u) = 0} for some function g.
Which class for the time-varying controls?
- Continuous controls, u ∈ C[t_0, t_f]^{n_u}, may not be a large enough class.
- More generally, piecewise continuous controls, u ∈ Ĉ[t_0, t_f]^{n_u}: there is a finite partition t_0 = θ_0 < θ_1 < ... < θ_N < θ_{N+1} = t_f such that u restricted to each [θ_k, θ_{k+1}], k = 0, ..., N, belongs to C[θ_k, θ_{k+1}]^{n_u}.
Piecewise continuity allows instantaneous control jumps; the underlying assumption is inertia-less controllers.

Class of Admissible Controls, U[t_0, t_f]
A piecewise continuous control u(·), defined on some time interval t_0 ≤ t ≤ t_f, with range in the control region U, i.e.,
    u(t) ∈ U, ∀t ∈ [t_0, t_f],
is said to be an admissible control. Every admissible control is bounded. (Do you see why?)

Performance Criterion
A functional used for the quantitative evaluation of a system's performance. It can depend on both the control and the state variables, and also on the initial and/or terminal times (if these are not fixed).
Functional Forms:
- Lagrange form: J(u) = ∫_{t_0}^{t_f} l(t, x(t), u(t)) dt
- Mayer form: J(u) = ϕ(t_0, x(t_0), t_f, x(t_f))
- Bolza form: J(u) = ϕ(t_0, x(t_0), t_f, x(t_f)) + ∫_{t_0}^{t_f} l(t, x(t), u(t)) dt
The Lagrange, Mayer and Bolza functional forms are equivalent!
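The equivalence of the Lagrange and Mayer forms can be checked numerically via the usual state-augmentation trick: append a state q with q̇ = l(t, x, u) and q(t_0) = 0, so that the Lagrange cost equals q(t_f). Below is a minimal sketch under assumed dynamics, control, and running cost (all names illustrative); both evaluations use the same integration grid.

```python
from math import sin

def rk4(f, x0, t0, tf, n):
    """Classical 4th-order Runge-Kutta for xdot = f(t, x), x a list."""
    h = (tf - t0) / n
    t, x = t0, list(x0)
    out = [(t, list(x))]
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h/2, [xi + h/2*k for xi, k in zip(x, k1)])
        k3 = f(t + h/2, [xi + h/2*k for xi, k in zip(x, k2)])
        k4 = f(t + h, [xi + h*k for xi, k in zip(x, k3)])
        x = [xi + h/6*(a + 2*b + 2*c + d)
             for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
        t += h
        out.append((t, list(x)))
    return out

u = lambda t: sin(t)                      # some admissible control (assumed)
l = lambda t, x, v: x**2 + v**2           # running (Lagrange) cost (assumed)
# augmented states z = [x, q]: xdot = -x + u (assumed dynamics), qdot = l
aug = lambda t, z: [-z[0] + u(t), l(t, z[0], u(t))]

traj = rk4(aug, [1.0, 0.0], 0.0, 2.0, 2000)
J_mayer = traj[-1][1][1]                  # Mayer evaluation: q(tf)
# Lagrange evaluation: trapezoidal quadrature of l along the same trajectory
vals = [l(t, z[0], u(t)) for t, z in traj]
h = 2.0 / 2000
J_lagrange = h * (sum(vals) - 0.5*(vals[0] + vals[-1]))
```

The two evaluations agree up to quadrature error, which shrinks as the grid is refined.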
Physical Constraints
Functional equalities/inequalities restricting the range of values that can be assumed by the control and/or state variables.
Types of Constraints:
- Point constraints: ψ^i(t̄, x(t̄)) ≤ 0, ψ^e(t̄, x(t̄)) = 0, at a given time t̄ ∈ [t_0, t_f]; e.g., the terminal state constraint x_k(t_f) ≤ x_k^{U,f}.
- Isoperimetric (integral) constraints: ∫_{t_0}^{t_f} κ^i(t, x(t), u(t)) dt ≤ 0, ∫_{t_0}^{t_f} κ^e(t, x(t), u(t)) dt = 0. These can always be reformulated as point constraints, which are easier to handle!
- Path constraints: κ^i(t, x(t), u(t)) ≤ 0, κ^e(t, x(t), u(t)) = 0, ∀t ∈ [t_0, t_f]; e.g., input path constraints u^L ≤ u(t) ≤ u^U, ∀t ∈ [t_0, t_f], or (pure) state path constraints x_k(t) ≤ x_k^U, ∀t ∈ [t_0, t_f].
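The reformulation of an isoperimetric constraint as a point constraint uses the same augmentation idea as for the cost functionals: add a state z with ż = κ(t, x, u) and z(t_0) = 0, then impose z(t_f) ≤ 0. A minimal sketch, with assumed dynamics, candidate control, and integrand:

```python
def euler(f, x0, t0, tf, n):
    """Forward-Euler integration of xdot = f(t, x); sufficient for this sketch."""
    h = (tf - t0) / n
    t, x = t0, list(x0)
    for _ in range(n):
        x = [xi + h*ki for xi, ki in zip(x, f(t, x))]
        t += h
    return x

u = lambda t: 0.5                          # candidate control (assumed)
kappa = lambda t, x, v: v*v - 0.5          # constraint integrand (assumed)
# augmented states s = [x, z]: xdot = -x + u (assumed dynamics), zdot = kappa
f = lambda t, s: [-s[0] + u(t), kappa(t, s[0], u(t))]

x_tf, z_tf = euler(f, [1.0, 0.0], 0.0, 1.0, 10000)
# the integral constraint now reads as a point constraint at tf
point_constraint_ok = (z_tf <= 0.0)
```

Here the integrand is the constant 0.25 - 0.5 = -0.25 along the candidate control, so z(t_f) = -0.25 and the constraint is satisfied.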

A Quite General Optimal Control Formulation
Determine u ∈ Ĉ[t_0, t_f]^{n_u} that
    minimizes:  J(u) = ϕ(x(t_f)) + ∫_{t_0}^{t_f} l(t, x(t), u(t)) dt
    subject to: ẋ(t) = f(t, x(t), u(t)); x(t_0) = x_0
                ψ_j^i(x(t_f)) ≤ 0, j = 1, ..., n_ψ^i;  ψ_j^e(x(t_f)) = 0, j = 1, ..., n_ψ^e
                κ_j^i(t, x(t), u(t)) ≤ 0, j = 1, ..., n_κ^i;  κ_j^e(t, x(t), u(t)) = 0, j = 1, ..., n_κ^e
                u(t) ∈ [u^L, u^U]

Open-Loop vs. Closed-Loop Optimal Control
- Open-loop optimal control: an optimal control law of the form u*(t) = ω(t; t_0, x_0), t ∈ [t_0, t_f], determined for a particular initial state value x(t_0) = x_0; the loop between controller and process opens at t_0.
- Closed-loop optimal control: a feedback control law of the form u*(t) = ω(t, x(t)), t ∈ [t_0, t_f], i.e., the optimal control action u*(t) is calculated from the current state x(t).
Pros and cons?

Class Exercise: Car Control Problem
Drive a car, initially parked at position p_0, to its final destination p_f (parking), in minimum time.
- States, p(t) and ṗ(t): position and speed, with p(t_0) = p_0 and p(t_f) = p_f
- Control, u(t): force due to acceleration (≥ 0) or deceleration (≤ 0)

Class Exercise: Double Pendulum
Two rigid links with lengths L_1 = L_2 = 1 m, masses m_1 = m_2 = 1 kg, inertias I_1 = (1/3) m_1 L_1^2 and I_2 = (1/3) m_2 L_2^2, and joint angles θ_1(t), θ_2(t); the control variable is the torque τ(t) applied at the first joint. The dynamics are
    τ(t) = H_11(t) θ̈_1(t) + H_12(t) θ̈_2(t) - 2h θ̇_1(t) θ̇_2(t) - h θ̇_2^2(t)
    0 = H_22(t) θ̈_2(t) + H_12(t) θ̈_1(t) + h θ̇_1^2(t)
with
    H_11(t) = I_1 + I_2 + (1/4) m_1 L_1^2 + m_2 [L_1^2 + (1/4) L_2^2 + L_1 L_2 cos θ_2(t)]
    H_12(t) = I_2 + (1/4) m_2 L_2^2 + (1/2) m_2 L_1 L_2 cos θ_2(t)
    H_22(t) = I_2 + (1/4) m_2 L_2^2
    h = (1/2) m_2 L_1 L_2 sin θ_2(t)
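Evaluating the double-pendulum dynamics amounts to solving a 2×2 linear system for the angular accelerations at each time. The sketch below does this with the H and h expressions of the exercise (with the signs as reconstructed here); it is illustrative code, not from the slides.

```python
from math import sin, cos

# parameters from the exercise
L1 = L2 = 1.0                  # link lengths (m)
m1 = m2 = 1.0                  # link masses (kg)
I1 = m1 * L1**2 / 3.0
I2 = m2 * L2**2 / 3.0

def accelerations(th2, th1d, th2d, tau):
    """Solve the 2x2 linear system of the pendulum dynamics for the
    angular accelerations (th1dd, th2dd), by Cramer's rule."""
    H11 = I1 + I2 + 0.25*m1*L1**2 + m2*(L1**2 + 0.25*L2**2 + L1*L2*cos(th2))
    H12 = I2 + 0.25*m2*L2**2 + 0.5*m2*L1*L2*cos(th2)
    H22 = I2 + 0.25*m2*L2**2
    h = 0.5*m2*L1*L2*sin(th2)
    # tau = H11*th1dd + H12*th2dd - 2*h*th1d*th2d - h*th2d**2
    #   0 = H22*th2dd + H12*th1dd + h*th1d**2
    b1 = tau + 2.0*h*th1d*th2d + h*th2d**2
    b2 = -h*th1d**2
    det = H11*H22 - H12*H12    # positive: the inertia matrix is positive definite
    return (H22*b1 - H12*b2) / det, (H11*b2 - H12*b1) / det

th1dd, th2dd = accelerations(0.0, 0.0, 0.0, 1.0)   # unit torque from rest
```

From rest with a positive torque at the first joint, the first link accelerates forward and the second link swings back relative to it, as the signs of the two accelerations show.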

Class Exercise: Double Pendulum (cont'd)
Optimal control problem: Bring the double pendulum to the stationary, upper vertical position in minimum time, starting from the stationary, lower vertical position.

Feasibility and (Global) Optimality
Feasible Control: A control u ∈ U[t_0, t_f] is said to be feasible if:
1. the response x(t; x_0, u(·)) is defined for each t ∈ [t_0, t_f]; and
2. u and x satisfy all of the physical constraints.
The set of feasible controls is defined as Ω[t_0, t_f] = {u ∈ U[t_0, t_f] : u is feasible}.

(Globally) Optimal Control (minimize case): A control u* ∈ U[t_0, t_f] is said to be optimal if
    J(u*) ≤ J(u), ∀u ∈ Ω[t_0, t_f].
This optimality condition is global in nature, i.e., it does not require consideration of a norm.

Class Exercise: Double Pendulum (cont'd)
[Figures: a feasible torque profile τ(t), and the minimum-time torque profile superimposed on it; torque in N·m, between -10 and 10, versus time t ∈ [0, 8] s.]
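Feasibility is easy to test for the car exercise, where the model is a double integrator ṗ = v, v̇ = u (unit mass assumed): a candidate control is feasible if it respects the input bounds and meets the terminal point constraints p(t_f) = p_f and ṗ(t_f) = 0. A sketch for piecewise-constant candidates (names and tolerances illustrative):

```python
def is_feasible(plan, p0, pf, u_max, tol=1e-9):
    """Check membership in the feasible set for the car exercise: input
    bounds |u| <= u_max along the way, and terminal constraints p(tf) = pf,
    v(tf) = 0 (within tol). plan = [(u_k, dt_k), ...], piecewise constant."""
    p, v = p0, 0.0
    for u_k, dt in plan:
        if abs(u_k) > u_max:          # input path constraint violated
            return False
        p += v*dt + 0.5*u_k*dt*dt     # exact update, constant acceleration
        v += u_k*dt
    return abs(p - pf) <= tol and abs(v) <= tol

# accelerate at +1 for 1 s, then brake at -1 for 1 s: parks at pf = 1 from p0 = 0
bang_bang = [(1.0, 1.0), (-1.0, 1.0)]
```

The bang-bang plan above is feasible; scaling its magnitude above the bound, or driving past the target, makes it infeasible.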

Local Optimality
Locally Optimal Control (minimize case): A control u* ∈ U[t_0, t_f] is said to be a local optimum if there exists δ > 0 such that
    J(u*) ≤ J(u), ∀u ∈ B_δ(u*) ∩ Ω[t_0, t_f].
This requires the specification of a norm/distance on U[t_0, t_f]; with S = ∪_{k=0}^{N} (θ_k, θ_{k+1}):
- Weak minima: B^1_δ(u*) = {u ∈ U[t_0, t_f] : sup_{t∈S} |x(t) - x*(t)| + sup_{t∈S} |u(t) - u*(t)| < δ}. Considerations similar to those behind Euler's equation lead to the Euler-Lagrange equations.
- Strong minima: B_δ(u*) = {u ∈ U[t_0, t_f] : sup_{t∈S} |x(t) - x*(t)| < δ}. Considerations similar to those behind the Weierstrass condition lead to the Pontryagin Maximum Principle.

Existence of an Optimal Solution
Recall Weierstrass' Theorem: Let P ⊆ ℝ^n be a nonempty, compact set, and let ϕ : P → ℝ be continuous on P. Then the problem min{ϕ(p) : p ∈ P} attains its minimum; that is, there exists a minimizing solution to this problem.
Why closedness of P? Why continuity of ϕ? Why boundedness of P?
[Figures: three sketches (a)-(c) on an interval [a, b], illustrating failure of attainment when P is not closed, when ϕ is not continuous, and when P is not bounded.]
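The distinction between the two neighborhoods can be seen numerically: a rapidly oscillating perturbation of u* ≡ 0 (here, the illustrative choice u(t) = sin(t/ε) with ẋ = u, x(0) = 0) keeps the state within O(ε) of x* ≡ 0, so u lies in a small strong ball B_δ(u*), while sup|u - u*| stays near 1, so u leaves every small weak ball B^1_δ(u*).

```python
from math import sin

eps = 0.01
u_star = lambda t: 0.0            # nominal control u*, with response x* = 0
u = lambda t: sin(t / eps)        # rapidly oscillating perturbation (assumed)

# fine forward-Euler integration of xdot = u on [0, 1]
n = 100000
h = 1.0 / n
t, x = 0.0, 0.0
sup_x, sup_u = 0.0, 0.0
for _ in range(n):
    sup_u = max(sup_u, abs(u(t) - u_star(t)))
    x += h * u(t)
    t += h
    sup_x = max(sup_x, abs(x))    # exact solution: x(t) = eps*(1 - cos(t/eps))

strong_distance = sup_x           # O(eps): inside a small strong ball
weak_distance = sup_x + sup_u     # near 1: outside every small weak ball
```

This is why strong minima are the harder notion, and why establishing them calls for the Pontryagin-type conditions rather than the Euler-Lagrange ones.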