Constrained Optimal Control II


Optimal Control, Guidance and Estimation
Lecture 35: Constrained Optimal Control II
Prof. Radhakant Padhi, Dept. of Aerospace Engineering, Indian Institute of Science, Bangalore

Topics:
- Constrained Optimal Control: Motivation
- Pontryagin Minimum Principle
- Time-Optimal Control of LTI Systems
- Time-Optimal Control of the Double-Integrator System
- Fuel-Optimal Control
- Energy-Optimal Control
- State-Constrained Optimal Control

Summary of the Pontryagin Minimum Principle

Objective: to find an "admissible" time history of the control variable $U(t)$, $t \in [t_0, t_f]$, satisfying $U^- \le U(t) \le U^+$ (or, component-wise, $u_j^- \le u_j(t) \le u_j^+$), which:
1) causes the system governed by $\dot{X} = f(t, X, U)$ to follow an admissible trajectory,
2) optimizes (minimizes/maximizes) a "meaningful" performance index
$J = \varphi(t_f, X_f) + \int_{t_0}^{t_f} L(t, X, U)\, dt$,
3) forces the system to satisfy "proper boundary conditions".

Solution Procedure for a Given Problem

Hamiltonian: $H(X, U, \lambda) = L(X, U) + \lambda^T f(X, U)$

Necessary conditions:
(i) State equation: $\dot{X} = \partial H / \partial \lambda = f(t, X, U)$
(ii) Costate equation: $\dot{\lambda} = -\partial H / \partial X$
(iii) Optimal control equation: minimize $H$ with respect to $U(t) \in \mathcal{U}$, i.e. $H(X^*, U^*, \lambda^*) \le H(X^*, U, \lambda^*)$
(iv) Boundary conditions: $X(t_0)$ specified, $\lambda_f = (\partial \varphi / \partial X)_f$

Time-Optimal Control of Control-Constrained LTI Systems

Time-Optimal Control of LTI Systems

Problem: minimize the time taken for an LTI system to go from an arbitrary initial state to the desired final state.

Note: by taking the desired final state as the origin of the state space, this becomes a time-optimal "regulator problem".

System dynamics (LTI system): $\dot{X} = AX + BU$, where $X \in \mathbb{R}^n$, $U \in \mathbb{R}^m$, and $A$, $B$ are constant matrices.

The control magnitude satisfies $U^- \le U \le U^+$ or, component-wise, $u_j^- \le u_j \le u_j^+$, $j = 1, 2, \ldots, m$, where $U^-$ is the lower bound and $U^+$ is the upper bound. Without loss of generality, it can be assumed that $-1 \le U \le 1$ or, component-wise, $|u_j| \le 1$, $j = 1, 2, \ldots, m$.

Assumption: the system is completely state controllable, i.e. $G \triangleq [B \;\; AB \;\; A^2 B \;\; \cdots \;\; A^{n-1} B]$ is of rank $n$.

Time-Optimal Control of LTI Systems

Problem: find the optimal control that satisfies $|u_j| \le 1$ and takes the system from the initial state $X_0$ to the final state (origin) in minimum time.

Step 1: Selection of the performance index
$J(U) = \int_{t_0}^{t_f} L(X, U, t)\, dt = \int_{t_0}^{t_f} 1\, dt = t_f - t_0$
(Note: $t_0$ is fixed, but $t_f$ is free.)

Step 2: Hamiltonian
$H(X, \lambda, U) = 1 + \lambda^T [AX + BU] = 1 + \lambda^T AX + \lambda^T BU$

Step 3: State and costate equations with boundary conditions
$\dot{X} = \partial H / \partial \lambda = AX + BU, \qquad \dot{\lambda} = -\partial H / \partial X = -A^T \lambda$
$X(t_0) = X_0$; $X(t_f) = 0$; $t_f$ is free.

Time-Optimal Control of LTI Systems

Step 4: Optimal control
By the minimum principle, $H(X^*, \lambda^*, U^*) \le H(X^*, \lambda^*, U)$, i.e.
$1 + \lambda^{*T} AX^* + \lambda^{*T} BU^* \le 1 + \lambda^{*T} AX^* + \lambda^{*T} BU$
so $U^*$ must achieve $\min_{|u_j| \le 1} \lambda^{*T} BU = \min_{|u_j| \le 1} q^T U$, where $q \triangleq B^T \lambda^*$. Component-wise, with $q_j \triangleq B_j^T \lambda^*$:
(1) if $q_j > 0$, then $u_j^* = -1$;
(2) if $q_j < 0$, then $u_j^* = +1$.
Final solution: $u_j^* = -\mathrm{sgn}\{q_j\} = -\mathrm{sgn}\{B_j^T \lambda^*\}$, where $B_j$ is the $j$-th column of $B$.

Types of Time-Optimal Control

1) Normal Time-Optimal Control (NTOC)
If, during the interval $[t_0, t_f]$, each $q_j = B_j^T \lambda^*$ vanishes only at a set of isolated instants $t_1, t_2, \ldots, t_\gamma \in [t_0, t_f]$ (i.e. for every $j = 1, 2, \ldots, m$, $q_j(t) = 0$ if and only if $t = t_\gamma$, and $q_j(t) \ne 0$ otherwise), then we have a normal time-optimal control problem.

Bang-Bang Control
Reference: D. S. Naidu, Optimal Control Systems, CRC Press (Ch. 7)

For an NTOC system, the optimal control
$U^* = -\mathrm{sgn}\{q\} = -\mathrm{sgn}\{B^T \lambda^*\}, \quad t \in [t_0, t_f]$
is a piecewise-constant function of time (i.e. of bang-bang nature).

Types of Time-Optimal Control

2) Singular Time-Optimal Control (STOC)
If, during the interval $[t_0, t_f]$, there exist one or more subintervals $[T_1, T_2]$ on which $q_j(t) = 0$ identically (singularity intervals), then we have a singular time-optimal control problem.

Note: during singularity intervals, the time-optimal control is not defined by the sign condition.
Reference: D. S. Naidu, Optimal Control Systems, CRC Press (Ch. 7)

Condition for an NTOC System

Assume $\lambda^*(0) = \lambda_0$. Then $\lambda^*(t) = e^{-A^T t} \lambda_0$, and
$U^* = -\mathrm{sgn}\{q\} = -\mathrm{sgn}\{B^T \lambda^*\} = -\mathrm{sgn}\{B^T e^{-A^T t} \lambda_0\}$
Component-wise, $u_j^* = -\mathrm{sgn}\{q_j\} = -\mathrm{sgn}\{B_j^T e^{-A^T t} \lambda_0\}$.

Hence, if $q_j(t) = 0$ for all $t \in [T_1, T_2]$, then all derivatives of $q_j$ vanish there as well:
$q_j = B_j^T e^{-A^T t} \lambda_0 = 0$
$\dot{q}_j = -B_j^T A^T e^{-A^T t} \lambda_0 = 0$
$\ddot{q}_j = B_j^T (A^T)^2 e^{-A^T t} \lambda_0 = 0$
$\quad\vdots$
$q_j^{(n-1)} = (-1)^{n-1} B_j^T (A^T)^{n-1} e^{-A^T t} \lambda_0 = 0$
Combining, one can write $G_j^T e^{-A^T t} \lambda_0 = 0$, where $G_j \triangleq [B_j \;\; AB_j \;\; A^2 B_j \;\; \cdots \;\; A^{n-1} B_j]$.

Condition for an NTOC System

Combining the results for $j = 1, \ldots, m$, one can write
$G^T e^{-A^T t} \lambda_0 = 0$, where $G \triangleq [B \;\; AB \;\; A^2 B \;\; \cdots \;\; A^{n-1} B]$.
However, $e^{-A^T t}$ is nonsingular and $\lambda_0 \ne 0$. Hence, for STOC problems, $G$ must be singular. In other words, for NTOC problems, $G$ must be nonsingular, i.e. the system should be completely state controllable.

Solution of the TOC Problem: Some Observations
- A singular interval cannot exist for a completely controllable system.
- The necessary and sufficient condition for NTOC is that the system is completely controllable. Conversely, the necessary and sufficient condition for STOC is that the system is uncontrollable.
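The normality test above is exactly a controllability-matrix rank check. A minimal numerical sketch (the matrices below are illustrative examples, not from the lecture):

```python
import numpy as np

def controllability_matrix(A, B):
    """G = [B, AB, A^2 B, ..., A^(n-1) B] for x' = A x + B u."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Double integrator: controllable, so only normal TOC (no singular intervals)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
print("rank(G)  =", np.linalg.matrix_rank(controllability_matrix(A, B)))    # 2 = n

# A decoupled mode the input cannot reach: G is singular -> STOC possible
A2 = np.diag([1.0, 2.0])
B2 = np.array([[1.0],
               [0.0]])
print("rank(G2) =", np.linalg.matrix_rank(controllability_matrix(A2, B2)))  # 1 < n
```

Full rank $n$ classifies the problem as normal; any rank deficiency opens the door to singular intervals.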

Uniqueness and Number of Switchings

Uniqueness of optimal control: if the time-optimal system is normal, then the optimal control solution is unique.

Number of switchings: if the system is normal and all $n$ eigenvalues of $A$ are real, then each $u_j^*$ can switch (from $-1$ to $+1$ or from $+1$ to $-1$) at most $(n-1)$ times.

Structure of the Time-Optimal Control System

Solution of the TOC Problem

Problem: system dynamics $\dot{X} = AX + BU$, subject to $|u_j| \le 1$, $j = 1, 2, \ldots, m$.
Objective: drive $X_0 \to 0$ in minimum time under the constraint.

Solution: optimal control
$u_j^* = -\mathrm{sgn}\{B_j^T \lambda^*\} = -\mathrm{sgn}\{B_j^T e^{-A^T t} \lambda_0\}$

Open-Loop Structure of the TOC System: adopt an iterative procedure.
1. Assume $\lambda_0$.
2. Compute $\lambda^*(t)$.
3. Evaluate $u^*$.
4. Solve for the system trajectory $X(t)$.
5. Monitor $X(t)$ and see if there is a $t_f$ such that $X(t_f) = 0$. If so, the control is time-optimal; if not, change $\lambda_0$ and repeat the procedure.
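The iterative open-loop procedure above can be sketched numerically for the double-integrator example treated later. The Euler step size and the time horizon are illustrative assumptions; analytically, from $X_0 = (1, 0)$ the optimal control is $u = -1$ until $t = 1$ and $u = +1$ afterwards, reaching the origin at $t = 2$, which corresponds to the costate guess $\lambda_0 = (1, 1)$ (so that $\lambda_2(t) = 1 - t$ changes sign at $t = 1$):

```python
import numpy as np

def simulate(lam0, x0, dt=1e-3, t_max=4.0):
    """Forward-simulate the double integrator under u*(t) = -sgn(lambda2(t)),
    where lambda2(t) = lam0[1] - lam0[0]*t is the costate solution.
    Returns the trajectory's closest approach to the origin and its time."""
    x = np.array(x0, dtype=float)
    best_d, best_t = float(np.hypot(x[0], x[1])), 0.0
    t = 0.0
    while t < t_max:
        lam2 = lam0[1] - lam0[0] * t
        u = -1.0 if lam2 > 0 else 1.0      # u* = -sgn(lambda2)
        x = x + dt * np.array([x[1], u])   # Euler step of x1' = x2, x2' = u
        t += dt
        d = float(np.hypot(x[0], x[1]))
        if d < best_d:
            best_d, best_t = d, t
    return best_d, best_t

miss, t_hit = simulate(lam0=(1.0, 1.0), x0=(1.0, 0.0))
print(f"closest approach {miss:.4f} at t = {t_hit:.3f}")
```

A wrong guess for $\lambda_0$ leaves a large miss distance; in practice step 5 would adjust $\lambda_0$ (e.g. by a shooting iteration) until the miss distance is driven to zero.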

Structure of the TOC System
(Block diagram; see D. S. Naidu, Optimal Control Systems, CRC Press, Ch. 7.)

Closed-loop structure: the optimal control law is $U^* = -\mathrm{sgn}\{h(X)\}$, where $h(X(t)) = B^T \lambda^*$. However, an analytical/computational algorithm for $h(X)$ needs to be developed (demonstrated next through a benchmark example).

ime-optimal Control for Double Integral System Dr. Radhakant Padhi Associate Professor Dept. of Aerospace Engineering Indian Institute of Science - Bangalore OC of a Double Integral system System Dynamics : mɺɺ y = f State variables : 1 x = y; x = yɺ 1 Double integral system described by ( ) xɺ = u where, u f / m u is constarined as u 1 t t, t f xɺ = x 6

TOC of the Double-Integrator System: Problem Statement

Given the double-integrator system $\dot{x}_1 = x_2$, $\dot{x}_2 = u$, subject to $|u(t)| \le 1$, $t \in [t_0, t_f]$, find an admissible control that drives $[x_1 \;\; x_2]^T$ to the origin in minimum time.

Assumption: normal TOC, i.e. no singular control.

Step 1: Performance index
$J = \int_{t_0}^{t_f} 1\, dt = t_f - t_0$

Step 2: Hamiltonian
$H(X, \lambda, u) = 1 + \lambda_1 x_2 + \lambda_2 u$

Step 3: Minimization of the Hamiltonian
According to the PMP, $H(X^*, u^*, \lambda^*) \le H(X^*, u, \lambda^*)$, i.e. $\lambda_2^* u^* \le \lambda_2^* u$.
Optimal control: $u^* = -\mathrm{sgn}\{\lambda_2^*\}$

TOC of the Double-Integrator System

Step 4: Costate solution
$\dot{\lambda}_1 = -\partial H / \partial x_1 = 0 \;\Rightarrow\; \lambda_1^*(t) = \lambda_1^*(0)$
$\dot{\lambda}_2 = -\partial H / \partial x_2 = -\lambda_1 \;\Rightarrow\; \lambda_2^*(t) = \lambda_2^*(0) - \lambda_1^*(0)\, t$

This results in four possibilities, depending on the values of $\lambda_1^*(0)$ and $\lambda_2^*(0)$ (assuming $\lambda_1^*(0) \ne 0$, $\lambda_2^*(0) \ne 0$).

Step 5: Time-optimal control sequence
Since $\lambda_2^*(t)$ is affine in $t$, it changes sign at most once; hence the candidate control sequences are $\{+1\}$, $\{-1\}$, $\{+1, -1\}$ and $\{-1, +1\}$.
Question: how to find $h(X)$, including the time of switching?
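Because $\lambda_2^*(t)$ is affine in $t$, the admissible sequences can be enumerated directly from the initial costates. A small sketch (the function name is illustrative, and zero initial costates are handled only loosely, since the slide assumes both are nonzero):

```python
def control_sequence(lam1_0, lam2_0, t_f):
    """Bang-bang sequence implied by lambda2(t) = lam2_0 - lam1_0 * t on [0, t_f],
    with u*(t) = -sgn(lambda2(t)). lambda2 is affine in t, so it changes sign at
    most once: only {+1}, {-1}, {+1,-1}, {-1,+1} can occur."""
    u0 = -1.0 if lam2_0 > 0 else 1.0
    seq = [u0]
    if lam1_0 != 0.0:
        t_switch = lam2_0 / lam1_0          # zero crossing of lambda2
        if 0.0 < t_switch < t_f:
            seq.append(-u0)                 # sign flips once, control flips once
    return seq

print(control_sequence(1.0, 1.0, 2.0))      # [-1.0, 1.0]: u = -1, then u = +1
print(control_sequence(0.0, -1.0, 2.0))     # [1.0]: lambda2 constant, no switch
```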

TOC of the Double-Integrator System

Step 6: State trajectories
From the state equations, $\dot{x}_2 = u$ and $\dot{x}_1 = x_2$. Hence, with $U \triangleq u^* = \pm 1$,
$x_2(t) = x_2(0) + Ut$
$x_1(t) = x_1(0) + x_2(0)\, t + \tfrac{1}{2} U t^2$
To eliminate $t$, observe that $t = (x_2 - x_2(0))/U$. Hence one can write (note: $U = \pm 1 \Rightarrow 1/U = U$):
$x_1 = x_1(0) - \tfrac{1}{2} U\, x_2^2(0) + \tfrac{1}{2} U\, x_2^2$

Hence, finally:
If $U = +1$ (so $t = x_2 - x_2(0)$), we have $x_1 = x_1(0) - \tfrac{1}{2} x_2^2(0) + \tfrac{1}{2} x_2^2 = C_1 + \tfrac{1}{2} x_2^2$.
If $U = -1$ (so $t = -(x_2 - x_2(0))$), we have $x_1 = x_1(0) + \tfrac{1}{2} x_2^2(0) - \tfrac{1}{2} x_2^2 = C_2 - \tfrac{1}{2} x_2^2$,
where $C_1$, $C_2$ are constants. Hence the result is a family of parabolas in the $(x_1, x_2)$ phase plane.
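The parabola family can be checked numerically: along any trajectory with constant $u = U$, the quantity $x_1 - x_2^2/(2U)$ should stay constant. A small self-contained sketch (the initial point and step size are arbitrary choices; for this system the trapezoidal update below is exact, so only round-off remains):

```python
# Under constant u = U = +/-1, trajectories of x1' = x2, x2' = u satisfy
# x1 - x2^2/(2U) = const  -- the family of parabolas derived above.
U = 1.0
x1, x2 = 0.3, -0.7                   # arbitrary starting point
c0 = x1 - x2 * x2 / (2.0 * U)        # invariant at t = 0
dt, steps = 1e-4, 20000              # integrate for 2 seconds
for _ in range(steps):
    x1 += dt * (x2 + 0.5 * dt * U)   # exact increment of x1 over one step
    x2 += dt * U                     # x2' = U integrates exactly
drift = abs(x1 - x2 * x2 / (2.0 * U) - c0)
print("invariant drift:", drift)     # essentially round-off level
```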

TOC of the Double-Integrator System

(Phase-plane trajectory figure; see D. S. Naidu, Optimal Control Systems, CRC Press, Ch. 7. Arrow marks indicate the evolution of the trajectories as time increases.)

At $t = t_f$: $x_1(t_f) = x_2(t_f) = 0$. Hence, from the phase-plane equation,
$0 = x_1(0) - \tfrac{1}{2} U\, x_2^2(0) \;\Rightarrow\; x_1(0) = \tfrac{1}{2} U\, x_2^2(0)$
The collection of all such points defines the "switching curve".

TOC of the Double-Integrator System

Step 7: Switching curve
Two curves, $\gamma_+$ and $\gamma_-$, transfer any initial state on them to the origin.
Definitions:
The locus $\gamma_+$ of all initial points $(x_1, x_2)$ driven to the origin by $U = +1$:
$\gamma_+ = \{(x_1, x_2) : x_1 = \tfrac{1}{2} x_2^2,\; x_2 \le 0\}$
The locus $\gamma_-$ of all initial points $(x_1, x_2)$ driven to the origin by $U = -1$:
$\gamma_- = \{(x_1, x_2) : x_1 = -\tfrac{1}{2} x_2^2,\; x_2 \ge 0\}$
The complete switching curve $\gamma$ is
$\gamma = \gamma_+ \cup \gamma_- = \{(x_1, x_2) : x_1 = -\tfrac{1}{2} x_2 |x_2|\}$
(Figure: see D. S. Naidu, Optimal Control Systems, CRC Press, Ch. 7.)

TOC of the Double-Integrator System

Step 8: Phase-plane regions
$R_+$ ($u = +1$) is the region to the left of $\gamma$:
$R_+ = \{(x_1, x_2) : x_1 < -\tfrac{1}{2} x_2 |x_2|\}$
$R_-$ ($u = -1$) is the region to the right of $\gamma$:
$R_- = \{(x_1, x_2) : x_1 > -\tfrac{1}{2} x_2 |x_2|\}$

Control strategy: if the system is initially on/in
- $\gamma_+$, then apply $u = +1$;
- $\gamma_-$, then apply $u = -1$;
- $R_+$, then apply the sequence $\{+1, -1\}$;
- $R_-$, then apply the sequence $\{-1, +1\}$.
(Reference: D. S. Naidu, Optimal Control Systems, CRC Press, Ch. 7)

TOC of the Double-Integrator System

Step 9: Control law
To implement the strategy, define $z \triangleq x_1 + \tfrac{1}{2} x_2 |x_2|$. Then:
if $z > 0$, $u^* = -1$; if $z < 0$, $u^* = +1$.
(Note: $u^* = -1$ for $(x_1, x_2) \in \gamma_- \cup R_-$ and $u^* = +1$ for $(x_1, x_2) \in \gamma_+ \cup R_+$.)
Next, if $z = 0$: if $x_2 > 0$ (on $\gamma_-$), $u^* = -1$, and if $x_2 < 0$ (on $\gamma_+$), $u^* = +1$.

Note: the closed-loop (feedback) control law is NONLINEAR even though the system is linear!

Step 10: Minimum time
$t_f^* = \begin{cases} x_2 + \sqrt{4x_1 + 2x_2^2} & \text{if } (x_1, x_2) \in R_-, \text{ i.e. } x_1 > -\tfrac{1}{2} x_2 |x_2| \\ -x_2 + \sqrt{-4x_1 + 2x_2^2} & \text{if } (x_1, x_2) \in R_+, \text{ i.e. } x_1 < -\tfrac{1}{2} x_2 |x_2| \\ |x_2| & \text{if } (x_1, x_2) \in \gamma, \text{ i.e. } x_1 = -\tfrac{1}{2} x_2 |x_2| \end{cases}$
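Steps 9 and 10 can be combined into a short simulation: the feedback law based on $z = x_1 + \tfrac{1}{2} x_2 |x_2|$ should steer any state to the origin in the time predicted by the Step-10 formula. A sketch, with step size and stopping tolerance as illustrative assumptions:

```python
import numpy as np

def u_star(x1, x2):
    """Closed-loop time-optimal (bang-bang) law for the double integrator."""
    z = x1 + 0.5 * x2 * abs(x2)       # z > 0: region R-, z < 0: region R+
    if z > 0:
        return -1.0
    if z < 0:
        return 1.0
    # on the switching curve gamma: ride it into the origin
    return -1.0 if x2 > 0 else (1.0 if x2 < 0 else 0.0)

def t_min(x1, x2):
    """Minimum-time formula from Step 10."""
    z = x1 + 0.5 * x2 * abs(x2)
    if z > 0:
        return x2 + np.sqrt(4.0 * x1 + 2.0 * x2 * x2)
    if z < 0:
        return -x2 + np.sqrt(-4.0 * x1 + 2.0 * x2 * x2)
    return abs(x2)

# Simulate from x0 = (1, 0); the formula predicts t_f* = 0 + sqrt(4) = 2
x = np.array([1.0, 0.0])
t, dt = 0.0, 1e-4
while np.hypot(x[0], x[1]) > 1e-2 and t < 5.0:
    u = u_star(x[0], x[1])
    x = x + dt * np.array([x[1], u])  # Euler step of x1' = x2, x2' = u
    t += dt
print(f"predicted t_f = {t_min(1.0, 0.0):.3f}, simulated ~ {t:.3f}")
```

From $x_0 = (1, 0)$ the formula gives $t_f^* = 2$, and the simulated trajectory (with a little chattering along the switching curve, as expected from a discrete-time sign law) reaches a small neighborhood of the origin at about that time.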

Closed-Loop Implementation of the Time-Optimal Control Law
(Figure slide.)

Thanks for the attention!