Optimal Control Design


Prof. Radhakant Padhi, Dept. of Aerospace Engineering, Indian Institute of Science - Bangalore

Acknowledgement: Indian Institute of Science
- Founded in 1909 (more than 100 years old) by J. N. Tata, in consultation with Swami Vivekananda; the land was donated by the Mysore king.
- Deemed University in 1958; more than 40 departments.
- Ranked No. 1 in India for higher education; the only institute in India among the best 100 in global rankings.
- For further information, please visit www.iisc.ernet.in

Collaboration & Research Funding
- Defence R&D Organisation (DRDO): Missile Complex (ASL, RCI, DRDL, ANURAG), ARDE, CAIR
- Indian Space Research Organisation (ISRO): VSSC, ISAC
- Air Force Research Lab (AFRL), USA
- Private Aerospace Companies: Coral Digital Technologies, Team Indus (Axiom Research Lab)

Integrated Control Guidance and Estimation Lab (ICGEL) & Aerospace Systems Lab (ASL)
Dept. of Aerospace Engineering, Indian Institute of Science, Bangalore
Contact: Prof. Radhakant Padhi, E-mail: padhi@aero.iisc.ernet.in

Research Areas in ICGEL:
- Guidance and Control of Missiles: MPSP and its variants are used to develop optimal guidance algorithms for better performance. Examples: impact angle constrained guidance of tactical missiles; integrated guidance and control for missiles for ballistic missile defence.
- Nonlinear, Optimal & Adaptive Control: Dynamic Inversion and Neuro-Adaptive designs; Single Network Adaptive Critic (SNAC).
- Model Predictive Static Programming (MPSP); Online Modified (OM) design for enhanced robustness; state estimation for feedback guidance and control.
- Guidance and Control of UAVs: guidance and control for automatic landing; stereo-vision-based reactive collision avoidance using ultra-low-cost cameras; nonlinear differential geometric guidance for collision avoidance.
- Nonlinear & Neuro-Adaptive Control of High-Performance Aircraft: a new robust nonlinear approach is developed for better control of high-performance (large L/D) aircraft, which are unstable in nature.
- Formation Flying and Attitude Control of Satellites: robust formation flying of satellites using online modified real-time optimal control; robust large attitude maneuvers of satellites in the presence of significant modelling errors.
- Feedback Control for Customized Automatic Drug Delivery: the drug is delivered as per the patient's condition (not in open loop), giving fast recovery and reduced side effects; demonstrated for blood cancer, diabetes regulation and milk fever of cows.
- Optimal Process Control

Current Team (2016): 13 Ph.D. students, 1 Master's student, 2 Project Associates, 2 Project Assistants (and many more in the past)

Acknowledgement: Graduated Students & Other Co-workers
- Mangal Kothari (Faculty in IIT-Kanpur)
- Arnab Maity (Faculty in IIT-Bombay)
- Sk. Faruque Ali (Faculty in IIT-Madras)
- Gurunath Gurala (Faculty in IISc-Bangalore)
- Harshal Oza (Faculty in Ahmedabad Univ., Ahmedabad)
- Prasiddha Nath Dwivedi (Scientist in DRDO, Hyderabad)
- Prem Kumar (Scientist in DRDO, Hyderabad)
- Girish Joshi (Former scientist in ISRO, doing his Ph.D. in USA)
- Kapil Sachan (currently a Ph.D. student)
- Avijit Banerjee (currently a Ph.D. student)
- Omkar Halbe (working in EADS)
- Charu Chawla (working in a private company)
- and many more!

Outline
- Lecture 1: Generic Overview of Optimal Control Theory
- Lecture 2: Real-time Optimal Control using MPSP
- Lecture 3: Solution of Challenging Practical Problems using MPSP

Lecture 1: An Overview of Optimal Control Design
Prof. Radhakant Padhi, Dept. of Aerospace Engineering, Indian Institute of Science - Bangalore

Why Optimal Control? Summary of Benefits
- A variety of difficult real-life problems can be formulated in the framework of optimal control.
- State and control bounds can be incorporated explicitly in the control design process.
- Incorporating optimality leads to a variety of advantages, such as minimum cost, maximum efficiency, and non-conservative design.
- Trajectory planning issues can be incorporated into the guidance and control design.

Role of Optimal Control
- Question: What is R(s)? How does one design it? Unfortunately, books remain completely silent on this!
- (Block diagram: the reference command R(s) is generated through optimization (optimal control), driven by the mission objectives.)

A Tribute to Pioneers of Optimal Control
- 1700s: Bernoulli, Newton, Euler (a student of Bernoulli), Lagrange
- ...200 years later...
- 1900s: Pontryagin, Bellman, Kalman
(Portraits of Bernoulli, Euler, Lagrange, Newton, Pontryagin, Bellman and Kalman.)

An Interesting Observation (academic genealogy)
- Euler (1726) - Lagrange - Fourier - Dirichlet - Lipschitz - Klein [1A]
- Euler (1726) - Lagrange - Poisson - Dirichlet - Lipschitz - Klein [1B]
- Gauss (1799) - Gerling - Pluecker - Klein [2]
- Klein - Lindeman - Hilb - Baer - Liepman - Bryson - Speyer - Bala - Padhi [3]
- Gauss (1799) - Bessel - Scherk - Kummer - Prym - Rost - Baer - Liepman - Bryson - Speyer - Bala - Padhi [4]

Optimal Control Formulation: Key Components
An optimal control formulation consists of:
- A performance index that needs to be optimized
- Appropriate boundary (initial and final) conditions: hard constraints, soft constraints
- Path constraints: the system dynamics constraint (nonlinear in general), state constraints, control constraints

Optimal Control Problem

Meaningful Performance Index
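Purely as an illustration (the exact forms used in the lecture may differ), a commonly used "meaningful" performance index is the quadratic cost of LQR-type problems, which penalizes both state deviation and control effort:

    J = \frac{1}{2} x^T(t_f)\, S_f\, x(t_f) + \frac{1}{2} \int_{t_0}^{t_f} \left( x^T Q x + u^T R u \right) dt, \qquad S_f \ge 0,\; Q \ge 0,\; R > 0.

The sign-definiteness requirements are what make the index meaningful: the cost is bounded below, and the positive-definite control weighting guarantees a well-posed minimization.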

Meaningful Performance Index (continued)

Optimum of a Functional

Fundamental Theorem of Calculus of Variations

Fundamental Lemma
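For reference, the fundamental lemma of the calculus of variations can be stated as follows (standard form; the slide notation may differ): if g(t) is continuous on [t_0, t_f] and

    \int_{t_0}^{t_f} g(t)\, \delta x(t)\, dt = 0

for every continuous variation \delta x(t) with \delta x(t_0) = \delta x(t_f) = 0, then g(t) = 0 for all t \in [t_0, t_f]. This lemma is what allows the integrand of the first variation to be set to zero term by term, producing the necessary conditions derived in the following slides.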

Optimal Control Problem
- Performance index (to minimize / maximize)
- Path constraint
- Boundary conditions

Necessary Conditions of Optimality
- Augmented performance index
- Hamiltonian
- First variation
(These quantities are written out below.)
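In standard textbook notation (generic symbols; the slides may use different letters), the problem and the associated constructions are:

Performance index:
    J = \varphi\big(x(t_f), t_f\big) + \int_{t_0}^{t_f} L\big(x(t), u(t), t\big)\, dt

Path constraint (system dynamics):
    \dot{x} = f(x, u, t)

Boundary conditions:
    x(t_0) = x_0, plus whatever conditions are imposed on x(t_f) and t_f.

Augmented performance index (adjoining the dynamics with the costate vector \lambda(t)):
    J_a = \varphi\big(x(t_f), t_f\big) + \int_{t_0}^{t_f} \Big[ L(x, u, t) + \lambda^T \big( f(x, u, t) - \dot{x} \big) \Big]\, dt

Hamiltonian:
    H(x, u, \lambda, t) = L(x, u, t) + \lambda^T f(x, u, t)

The necessary conditions are obtained by requiring the first variation \delta J_a to vanish for arbitrary admissible variations \delta x, \delta u, \delta x_f and \delta t_f.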

Necessary Conditions of Optimality
- First variation: individual terms

Necessary Conditions of Optimality
- Setting the first variation to zero

Necessary Conditions of Optimality
- First variation (continued)

Necessary Conditions of Optimality: Summary
- State equation
- Costate equation
- Optimal control equation
- Boundary condition
(These are written out after the comments below.)

Necessary Conditions of Optimality: Some Comments
- The state and costate equations are dynamic equations; if one is stable, the other turns out to be unstable!
- The optimal control equation is a stationary (algebraic) equation.
- The boundary conditions are split between the initial and final times, which leads to a Two-Point Boundary Value Problem (TPBVP).
- The state equation develops forward in time, whereas the costate equation develops backwards. This is known as the "curse of complexity" in optimal control.
- Traditionally, TPBVPs demand computationally intensive iterative numerical procedures, which lead to an open-loop control structure.
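Using the Hamiltonian H defined earlier (a standard statement; the notation is generic), the summarized conditions read:

State equation:           \dot{x} = \partial H / \partial \lambda = f(x, u, t)
Costate equation:         \dot{\lambda} = -\,\partial H / \partial x
Optimal control equation: \partial H / \partial u = 0
Boundary conditions:      x(t_0) = x_0, together with the transversality condition at t_f (next slide).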

General Boundary/Transversality Condition
- General condition
- Special cases
(Both are stated below.)
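A standard statement of the general transversality condition for the Bolza problem above (the grouping on the slides may differ slightly):

    \left( \frac{\partial \varphi}{\partial x} - \lambda \right)^{\!T} \Bigg|_{t_f} \delta x_f \;+\; \left( H + \frac{\partial \varphi}{\partial t} \right) \Bigg|_{t_f} \delta t_f \;=\; 0

Special cases:
- Fixed final time and fixed final state: \delta t_f = 0 and \delta x_f = 0, so the condition is satisfied trivially.
- Fixed final time, free final state: \delta t_f = 0, hence \lambda(t_f) = \partial \varphi / \partial x \,\big|_{t_f}.
- Free final time, fixed final state: \delta x_f = 0, hence \big( H + \partial \varphi / \partial t \big)\big|_{t_f} = 0.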

Example 1: A Toy Problem
Prof. Radhakant Padhi, Dept. of Aerospace Engineering, Indian Institute of Science - Bangalore

Example
- Problem
- Solution: costate equation and optimal control equation

Example
- Boundary conditions
- Define
- Solution

Example
- Use the boundary condition at the initial time
- Use the boundary condition at the final time

Example
- Four equations and four unknowns

Example
- Solution for state and costate
- Solution for optimal control
(A worked sketch of a representative toy problem follows.)
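To make the above workflow concrete, here is a minimal worked sketch of a representative toy problem (an illustration only; the specific problem solved on the slides may differ). Minimize

    J = \frac{1}{2} \int_{0}^{1} u^2 \, dt

subject to \dot{x}_1 = x_2, \; \dot{x}_2 = u, with all four boundary values x_1(0), x_2(0), x_1(1), x_2(1) specified. The Hamiltonian is H = \tfrac{1}{2} u^2 + \lambda_1 x_2 + \lambda_2 u, so the necessary conditions give

    \dot{\lambda}_1 = -\frac{\partial H}{\partial x_1} = 0, \qquad
    \dot{\lambda}_2 = -\frac{\partial H}{\partial x_2} = -\lambda_1, \qquad
    \frac{\partial H}{\partial u} = u + \lambda_2 = 0.

Hence \lambda_1 = c_1, \; \lambda_2 = c_2 - c_1 t and u^*(t) = c_1 t - c_2. Integrating the state equations introduces two more constants, and imposing the four boundary conditions yields four equations in the four unknown constants: exactly the "four equations and four unknowns" situation referred to above.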

Example 2: Orbit Transfer Problem
Prof. Radhakant Padhi, Dept. of Aerospace Engineering, Indian Institute of Science - Bangalore

Example: Maximum Radius Orbit Transfer at a Given Time

System Dynamics and Boundary Conditions
- System dynamics
- Boundary conditions

Performance Index
(A reference statement of the problem is given below.)
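For reference, the classical Bryson-Ho statement of this problem (which the lecture likely follows up to notation) is as follows. With radius r, radial velocity u, tangential velocity v, gravitational parameter \mu, constant thrust T, mass m(t) = m_0 - |\dot{m}|\, t, and thrust steering angle \phi(t) as the control:

System dynamics:
    \dot{r} = u, \qquad
    \dot{u} = \frac{v^2}{r} - \frac{\mu}{r^2} + \frac{T \sin\phi}{m_0 - |\dot{m}|\, t}, \qquad
    \dot{v} = -\frac{u\, v}{r} + \frac{T \cos\phi}{m_0 - |\dot{m}|\, t}

Boundary conditions: r(0) = r_0, \; u(0) = 0, \; v(0) = \sqrt{\mu / r_0} (initial circular orbit), and u(t_f) = 0, \; v(t_f) = \sqrt{\mu / r(t_f)} (final circular orbit), with t_f fixed.

Performance index: J = r(t_f), to be maximized.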

Necessary Conditions

A Classical Numerical Approach for Solving Optimal Control Problems: Gradient Method
Prof. Radhakant Padhi, Dept. of Aerospace Engineering, Indian Institute of Science - Bangalore

Gradient Method
Assumptions:
- State equation satisfied
- Costate equation satisfied
- Boundary conditions satisfied
Strategy:
- Satisfy the optimal control equation

Gradient Method

Gradient Method
- After satisfying the state and costate equations and the boundary conditions, we obtain an expression for the first variation in terms of the control variation alone (see below).
- Selecting the control update accordingly leads to a guaranteed non-increase of the performance index.
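In generic notation (a standard statement of the gradient-method argument; the slide symbols may differ), once the state and costate equations and the boundary conditions are satisfied, the first variation of the augmented performance index reduces to

    \delta J_a = \int_{t_0}^{t_f} \left( \frac{\partial H}{\partial u} \right)^{\!T} \delta u \; dt.

Selecting the control update as

    \delta u(t) = -\tau \, \frac{\partial H}{\partial u}, \qquad \tau > 0,

leads to

    \delta J_a = -\tau \int_{t_0}^{t_f} \left\| \frac{\partial H}{\partial u} \right\|^2 dt \; \le \; 0,

so each iteration cannot increase J_a, and eventually \partial H / \partial u \to 0, i.e. the optimal control equation is satisfied.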

Gradient Method
- We select the update as above; this leads to a non-increasing performance index.
- Note: eventually the optimal control equation is satisfied.

Gradient Method: Procedure
1. Assume a control history (not a trivial task).
2. Integrate the state equation forward.
3. Integrate the costate equation backward.
4. Update the control solution. This can be done either at each step while integrating the costate equation backward, or after the integration of the costate equation is complete.
5. Repeat the procedure until convergence (a numerical sketch of the whole procedure follows below).
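A minimal numerical sketch of this procedure in Python, applied to an illustrative scalar problem (the problem, the forward-Euler discretization, the fixed step size tau and all variable names are assumptions made for illustration, not taken from the lecture):

```python
import numpy as np

# Illustrative problem (not from the slides): minimize
#   J = 0.5 * integral_0^1 (x^2 + u^2) dt   subject to   xdot = u,  x(0) = 1,
# with free final state, so the costate satisfies lam_dot = -x and lam(t_f) = 0.
# Hamiltonian: H = 0.5*(x^2 + u^2) + lam*u,  hence  dH/du = u + lam.

N = 101                       # time grid points
t = np.linspace(0.0, 1.0, N)
dt = t[1] - t[0]
x0 = 1.0
tau = 0.5                     # fixed gradient step size (an assumption; it could instead be
                              # chosen to give a prescribed percentage reduction of J)

u = np.zeros(N)               # step 1: assume a control history

for it in range(200):
    # Step 2: integrate the state equation forward (explicit Euler)
    x = np.empty(N)
    x[0] = x0
    for k in range(N - 1):
        x[k + 1] = x[k] + dt * u[k]

    # Step 3: integrate the costate equation backward (lam_dot = -x, lam(t_f) = 0)
    lam = np.empty(N)
    lam[-1] = 0.0
    for k in range(N - 1, 0, -1):
        lam[k - 1] = lam[k] + dt * x[k]

    # Step 4: update the control in the direction of -dH/du
    Hu = u + lam
    u = u - tau * Hu

    # Step 5: repeat until the optimal control equation dH/du = 0 is (nearly) satisfied
    if np.max(np.abs(Hu)) < 1e-6:
        break

# Re-integrate the state with the final control and evaluate the cost
x = np.empty(N)
x[0] = x0
for k in range(N - 1):
    x[k + 1] = x[k] + dt * u[k]
J = 0.5 * np.sum(x**2 + u**2) * dt

print(f"iterations: {it + 1},  J ~ {J:.6f},  max |dH/du| = {np.max(np.abs(Hu)):.2e}")
```

The backward costate sweep could equally update the control point by point as it proceeds, which is the alternative mentioned in step 4 of the procedure.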

Gradient Method: Selection of the Step Size
- Select the step size so that it leads to a certain percentage reduction of the performance index.
- Let the percentage be specified; the required step size then follows (derived below).
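A sketch of the standard step-size argument, continuing the generic notation above (the percentage symbol and grouping on the slides may differ): if the step size \tau is required to produce an X\% reduction of the augmented performance index, set

    \delta J_a = -\tau \int_{t_0}^{t_f} \left\| \frac{\partial H}{\partial u} \right\|^2 dt = -\frac{X}{100}\, J_a,

which gives

    \tau = \frac{(X/100)\, J_a}{\displaystyle \int_{t_0}^{t_f} \left\| \frac{\partial H}{\partial u} \right\|^2 dt}.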

Dynamic Programming and Hamilton-Jacobi-Bellman (HJB) Theory
Prof. Radhakant Padhi, Dept. of Aerospace Engineering, Indian Institute of Science - Bangalore

Fundamental Philosophy
- Motivation / objective: to obtain a state-feedback optimal control solution.
- Fundamental theorem (principle of optimality): any part of an optimal trajectory is itself an optimal trajectory!
(Figure: an optimal path from A to C via B, contrasted with a non-optimal path.)

Optimal Control Problem

Hamilton-Jacobi-Bellman (HJB) Equation (derivation, continued over several slides)

Summary of HJB Equation
- Define the optimized cost function V.
- Then V must satisfy the HJB equation (written out below).
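In the standard form (generic notation consistent with the problem statement earlier; the slides may group terms differently), define the optimal cost-to-go

    V(x, t) = \min_{u(\tau),\; t \le \tau \le t_f} \left[ \varphi\big(x(t_f), t_f\big) + \int_{t}^{t_f} L\big(x(\tau), u(\tau), \tau\big)\, d\tau \right].

Then V must satisfy the HJB equation

    -\frac{\partial V}{\partial t} = \min_{u} \left[ L(x, u, t) + \left( \frac{\partial V}{\partial x} \right)^{\!T} f(x, u, t) \right],

with the boundary condition V\big(x(t_f), t_f\big) = \varphi\big(x(t_f), t_f\big). The minimizing u, expressed as a function of x and \partial V / \partial x, is the state-feedback optimal control.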

Dynamic Programming: Some Relevant Results

Example: A Benchmark Toy Problem
Prof. Radhakant Padhi, Dept. of Aerospace Engineering, Indian Institute of Science - Bangalore

Example

Example (continued)
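As an illustration of how the HJB equation delivers a state-feedback solution, here is a minimal sketch of a standard benchmark (the specific toy problem on the slides may differ): the scalar infinite-horizon LQR problem

    \dot{x} = a x + b u, \qquad J = \frac{1}{2} \int_{0}^{\infty} \left( q x^2 + r u^2 \right) dt, \qquad q \ge 0,\; r > 0,\; b \ne 0.

With the time-invariant guess V(x) = \tfrac{1}{2} p x^2, the HJB equation becomes

    0 = \min_{u} \left[ \tfrac{1}{2} q x^2 + \tfrac{1}{2} r u^2 + p x \,( a x + b u ) \right].

Minimizing over u gives u^* = -(b p / r)\, x, and substituting back yields the scalar algebraic Riccati equation

    q + 2 a p - \frac{b^2 p^2}{r} = 0 \;\;\Longrightarrow\;\; p = \frac{r}{b^2} \left( a + \sqrt{a^2 + \frac{b^2 q}{r}} \right) > 0,

so the optimal control is the linear state feedback u^*(x) = -(b p / r)\, x.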

Example 2

Dynamic Programming: Some Important Facts
- Dynamic programming is a powerful technique in the sense that, if the HJB equation is solved, it leads to a state-feedback form of the optimal control solution.
- The HJB equation is both necessary and sufficient for the optimal cost function.
- At least one of the control solutions resulting from the solution of the HJB equation is guaranteed to be stabilizing.

Dynamic Programming: Some Important Facts (continued)
- The resulting PDE (the HJB equation) is extremely difficult to solve in general.
- Dynamic programming runs into huge computational and storage requirements for reasonably high-dimensional problems. This is a severe restriction of the dynamic programming technique, which Bellman termed the "curse of dimensionality".

Books on Optimal Control Design
- R. Padhi, Applied Optimal Control, Wiley, manuscript under preparation (expected in 2018).
- D. S. Naidu, Optimal Control Systems, CRC Press, 2002.
- D. E. Kirk, Optimal Control Theory: An Introduction, Prentice Hall, 1970.
- A. E. Bryson and Y.-C. Ho, Applied Optimal Control, Taylor and Francis, 1975.
- A. P. Sage and C. C. White III, Optimum Systems Control (2nd Ed.), Prentice Hall, 1977.

Survey Papers on Classical Methods for Optimal Control Design
- H. J. Pesch, "A Practical Guide to the Solution of Real-Life Optimal Control Problems", Control and Cybernetics, Vol. 23, No. 1/2, 1994, pp. 7-60.
- R. E. Larson, "A Survey of Dynamic Programming Computational Procedures", IEEE Transactions on Automatic Control, December 1967, pp. 767-774.
- M. Athans, "The Status of Optimal Control Theory and Applications for Deterministic Systems", IEEE Transactions on Automatic Control, Vol. AC-11, July 1966, pp. 580-596.

Thanks for the attention!