Optimal Control. Lecture 3. Optimal Control of Discrete Time Dynamical Systems. John T. Wen. January 22, 2004

1 Optimal Control, Lecture 3: Optimal Control of Discrete Time Dynamical Systems. John T. Wen. January 22, 2004

2 Outline
- Optimization of a general multi-stage discrete time dynamical system
- Special case: discrete time linear quadratic regulator (LQR)
Ref: Bryson & Ho, Ch.

3 Last Time
- Static optimization with equality and inequality constraints
- Lagrange multiplier and Hamiltonian
- Penalty function method (interior and exterior)
- MATLAB optimization toolbox (see lec03.m for an example): unconstrained optimization: fminunc; constrained optimization (equality or inequality): fmincon. We will use these functions to solve nonlinear optimal control problems.
- Single stage discrete time optimal control: treat the state evolution equation as an equality constraint and apply the Lagrange multiplier and Hamiltonian approach (a small numerical sketch of this follows below).
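
As an illustration of that last point, here is a minimal sketch, in Python/SciPy rather than the course's MATLAB (lec03.m), of handing a single-stage problem to a general constrained optimizer with the state equation imposed as an equality constraint. The scalar model and weights are made up for the example.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical scalar single-stage problem:
    #   minimize J = (x1 - 1)^2 + u0^2   subject to   x1 = 0.9*x0 + u0,  x0 = 2
    x0 = 2.0

    def cost(z):
        x1, u0 = z
        return (x1 - 1.0) ** 2 + u0 ** 2

    def state_eq(z):          # equality constraint: x1 - f0[x0, u0] = 0
        x1, u0 = z
        return x1 - (0.9 * x0 + u0)

    res = minimize(cost, np.zeros(2), constraints=[{"type": "eq", "fun": state_eq}])
    x1_opt, u0_opt = res.x
    print("u0* =", u0_opt, " x1* =", x1_opt)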

4 Last Time (cont.)
Single stage optimal control:
Model: x_1 = f_0[x_0, u_0]
Optimization index: J = φ(x_1) + L_0[x_0, u_0]
Hamiltonian: H_0 = L_0 + λ_1^T f_0
First order condition: 0 = ∂H_0/∂u_0 = ∂L_0/∂u_0 + λ_1^T ∂f_0/∂u_0
State equation: x_1 = ∂H_0/∂λ_1 = f_0[x_0, u_0]
Costate equation: λ_0 = (∂H_0/∂x_0)^T = (∂f_0/∂x_0)^T λ_1 + (∂L_0/∂x_0)^T
Boundary condition: λ_1 = (∂φ/∂x_1)^T.

5 Last Time (Single Stage: LQR)
x_1 = A_0 x_0 + B_0 u_0,  x_0 given
J = (x_1 - x_d)^T Q_1 (x_1 - x_d) + (x_0 - x_d)^T Q_0 (x_0 - x_d) + u_0^T R_0 u_0.
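
For this quadratic case the first order condition has a closed form: setting ∂J/∂u_0 = 0 with x_1 = A_0 x_0 + B_0 u_0 gives (R_0 + B_0^T Q_1 B_0) u_0 = -B_0^T Q_1 (A_0 x_0 - x_d). A small NumPy check of this formula, with made-up matrices rather than any data from the lecture, might look like:

    import numpy as np

    # Made-up single-stage LQR data.
    A0 = np.array([[1.0, 0.1], [0.0, 1.0]])
    B0 = np.array([[0.0], [0.1]])
    Q0, Q1, R0 = np.eye(2), np.eye(2), np.eye(1)
    x0, xd = np.array([1.0, 0.0]), np.zeros(2)

    # (R0 + B0' Q1 B0) u0 = -B0' Q1 (A0 x0 - xd)
    u0 = np.linalg.solve(R0 + B0.T @ Q1 @ B0, -B0.T @ Q1 @ (A0 @ x0 - xd))
    x1 = A0 @ x0 + B0 @ u0
    J = (x1 - xd) @ Q1 @ (x1 - xd) + (x0 - xd) @ Q0 @ (x0 - xd) + u0 @ R0 @ u0
    print("u0 =", u0, " J =", J)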

6 Optimal control for general discrete time systems
We now extend the single step optimization to a general finite-horizon optimization. Consider the discrete dynamical system over N steps:
x(i+1) = f_i[x(i), u(i)],  i = 0, ..., N-1
where x(i) ∈ R^n and u(i) ∈ R^m. Example: linear time varying system x(i+1) = A(i)x(i) + B(i)u(i).
Consider the N-step look-ahead optimal control problem:
J = φ[x(N)] + Σ_{i=0}^{N-1} L_i[x(i), u(i)].
Example: J = x^T(N)Q(N)x(N) + Σ_{i=0}^{N-1} ( x^T(i)Q(i)x(i) + u^T(i)R(i)u(i) ).
There are nN (constraint) equations and mN control variables.
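
One way to read that count: once the mN controls u(0), ..., u(N-1) are chosen, the nN state equations determine the trajectory, so J can be evaluated by a forward simulation. A minimal Python sketch, with a made-up scalar system purely for illustration:

    import numpy as np

    def rollout_cost(f, L, phi, x0, U):
        # Evaluate J = phi[x(N)] + sum_i L_i[x(i), u(i)] by simulating the dynamics.
        x, J = x0, 0.0
        for i, u in enumerate(U):      # U = [u(0), ..., u(N-1)]
            J += L(i, x, u)
            x = f(i, x, u)             # state equation acts as an equality constraint
        return J + phi(x)

    # Made-up scalar example: x(i+1) = 0.9 x(i) + 0.1 u(i), quadratic costs.
    f   = lambda i, x, u: 0.9 * x + 0.1 * u
    L   = lambda i, x, u: x**2 + u**2
    phi = lambda xN: 10.0 * xN**2
    print(rollout_cost(f, L, phi, x0=1.0, U=np.zeros(20)))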

7 Lagrange Multiplier Approach
Apply Lagrange multipliers as in the single-step case. Define the Hamiltonian and the augmented cost function:
H = J + Σ_{i=0}^{N-1} λ^T(i+1) f_i[x(i), u(i)]
J̄ = H - Σ_{i=0}^{N-1} λ^T(i+1) x(i+1).

8 First Order Condition for Optimality
Solve for the Lagrange multipliers:
∂J̄/∂x(i) = ∂H/∂x(i) - λ^T(i) = 0,  i = 1, ..., N.
This becomes:
λ^T(N) = ∂φ/∂x(N)
λ^T(i) = ∂L_i/∂x(i) + λ^T(i+1) ∂f_i/∂x(i).
Solve for the control variables:
∂J̄/∂u(i) = ∂H/∂u(i) = 0  ⟹  ∂L_i/∂u(i) + λ^T(i+1) ∂f_i/∂u(i) = 0.

9 First Order Optimality Condition (Complete Solution)
Given x(0), the optimal control solution must satisfy
x(i+1) = f_i[x(i), u(i)]
0 = ∂L_i/∂u(i) + λ^T(i+1) ∂f_i/∂u(i)   (solve for u(i) in terms of x(i) and λ(i+1))
λ(i) = (∂f_i/∂x(i))^T λ(i+1) + (∂L_i/∂x(i))^T
λ(N) = (∂φ/∂x(N))^T.
The dynamical equations now involve 2n variables (state x and costate λ). Half of the boundary conditions are initial conditions on x; the other half are terminal conditions on λ. This is called a two-point boundary value problem (TPBVP).
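
One standard numerical way to attack this TPBVP (shown here only as a sketch, not as part of the slide) is first-order gradient descent on the control sequence: simulate the states forward, propagate the costates backward, use ∂J/∂u(i) = ∂L_i/∂u(i) + λ^T(i+1) ∂f_i/∂u(i) as the gradient, and take a step. The scalar problem data below are made up.

    import numpy as np

    # Made-up scalar problem: x(i+1) = x(i) + 0.1*(u(i) - x(i)^3),
    # L_i = x^2 + u^2, phi = 10 x(N)^2, x(0) = 1.
    N, alpha = 30, 0.1
    f     = lambda x, u: x + 0.1 * (u - x**3)
    f_x   = lambda x, u: 1.0 - 0.3 * x**2
    f_u   = lambda x, u: 0.1
    L_x   = lambda x, u: 2.0 * x
    L_u   = lambda x, u: 2.0 * u
    phi_x = lambda xN: 20.0 * xN

    u = np.zeros(N)
    for _ in range(200):
        x = np.zeros(N + 1); x[0] = 1.0
        for i in range(N):                       # forward pass: state equation
            x[i + 1] = f(x[i], u[i])
        lam = np.zeros(N + 1); lam[N] = phi_x(x[N])
        grad = np.zeros(N)
        for i in reversed(range(N)):             # backward pass: costate equation
            grad[i] = L_u(x[i], u[i]) + lam[i + 1] * f_u(x[i], u[i])
            lam[i] = f_x(x[i], u[i]) * lam[i + 1] + L_x(x[i], u[i])
        u -= alpha * grad                        # descend until dJ/du(i) is (nearly) zero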

10 Discrete Time Linear Quadratic Regulator (LQR)
Consider linear time varying dynamics with a quadratic optimization index:
x(i+1) = A(i)x(i) + B(i)u(i),  x(0) = x_0
J = x^T(N)Q(N)x(N) + Σ_{i=0}^{N-1} ( x^T(i)Q(i)x(i) + u^T(i)R(i)u(i) )
with R(i) > 0 and Q(i) > 0.
Solution: Form the Hamiltonian: H = J + Σ_{i=0}^{N-1} λ^T(i+1)(A(i)x(i) + B(i)u(i)).
Use ∂H/∂u(i) = 0 to solve for u(i): u(i) = -R^{-1}(i)B^T(i)λ(i+1).
Use λ(i) = (∂H/∂x(i))^T to obtain the costate dynamics:
λ(i) = A^T(i)λ(i+1) + Q(i)x(i),  λ(N) = Q(N)x(N).

11 Discrete Time LQR
Substituting the optimal control back into the state equation we obtain:
x(i+1) = A(i)x(i) - B(i)R^{-1}(i)B^T(i)λ(i+1)
λ(i) = A^T(i)λ(i+1) + Q(i)x(i).
Putting them together, we have
[ x(i+1) ; λ(i) ] = [ A(i), -B(i)R^{-1}(i)B^T(i) ; Q(i), A^T(i) ] [ x(i) ; λ(i+1) ]
with the boundary conditions
x(0) = x_0,  λ(N) = Q(N)x(N).
How do we solve this?

12 Solution of Discrete Time LQR
From the boundary condition, λ(N) = Q(N)x(N), i.e., λ(N) and x(N) are linearly related. We now show by induction that λ(i) and x(i) are also linearly related. Suppose that λ(i+1) = P(i+1)x(i+1). Then the state equation becomes:
x(i+1) = A(i)x(i) - B(i)R^{-1}(i)B^T(i)P(i+1)x(i+1)
       = (I + B(i)R^{-1}(i)B^T(i)P(i+1))^{-1} A(i)x(i).
We now substitute this into the costate equation:
λ(i) = A^T(i)P(i+1)x(i+1) + Q(i)x(i)
     = [ A^T(i)P(i+1)(I + B(i)R^{-1}(i)B^T(i)P(i+1))^{-1} A(i) + Q(i) ] x(i),
where the bracketed matrix is defined as P(i).

13 Solution of Discrete Time LQR (cont.)
Since λ(N) and x(N) are linearly related, by induction λ(i) and x(i) are linearly related for i = 0, ..., N. Furthermore, by letting P(N) = Q(N), we have a recursive equation for P(i):
P(i) = A^T(i)P(i+1)(I + B(i)R^{-1}(i)B^T(i)P(i+1))^{-1} A(i) + Q(i).
This is called the sweep method of solving the TPBVP. The backward recursion computing the P(i)'s is called the discrete-time time-varying Riccati equation (quadratic dynamics).
The optimal control is now in feedback form:
u(i) = -R^{-1}(i)B^T(i)P(i+1)x(i+1) = -(R(i) + B^T(i)P(i+1)B(i))^{-1} B^T(i)P(i+1)A(i)x(i)
but P(i) needs to be pre-computed. The feedback gain
K(i) = (R(i) + B^T(i)P(i+1)B(i))^{-1} B^T(i)P(i+1)A(i)
is sometimes called the Kalman gain.
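
A direct implementation of the backward sweep and the resulting time-varying state feedback u(i) = -K(i)x(i) is short; the sketch below (NumPy, with made-up time-invariant A, B, Q, R kept constant over the horizon for brevity) precomputes P(i) and K(i) and then simulates the closed loop.

    import numpy as np

    def lqr_sweep(A, B, Q, R, QN, N):
        # Backward Riccati sweep: returns P(0..N) and gains K(0..N-1).
        n = A[0].shape[0]
        P = [None] * (N + 1)
        K = [None] * N
        P[N] = QN
        for i in reversed(range(N)):
            Ai, Bi, Qi, Ri = A[i], B[i], Q[i], R[i]
            # P(i) = A' P(i+1) (I + B R^{-1} B' P(i+1))^{-1} A + Q
            M = np.linalg.solve(np.eye(n) + Bi @ np.linalg.solve(Ri, Bi.T) @ P[i + 1], Ai)
            P[i] = Ai.T @ P[i + 1] @ M + Qi
            # K(i) = (R + B' P(i+1) B)^{-1} B' P(i+1) A,  u(i) = -K(i) x(i)
            K[i] = np.linalg.solve(Ri + Bi.T @ P[i + 1] @ Bi, Bi.T @ P[i + 1] @ Ai)
        return P, K

    # Made-up time-invariant example (discretized double integrator), horizon N = 50.
    N = 50
    A = [np.array([[1.0, 0.1], [0.0, 1.0]])] * N
    B = [np.array([[0.005], [0.1]])] * N
    Q = [np.eye(2)] * N
    R = [np.array([[1.0]])] * N
    P, K = lqr_sweep(A, B, Q, R, QN=10 * np.eye(2), N=N)

    x = np.array([1.0, 0.0])
    for i in range(N):
        u = -K[i] @ x
        x = A[i] @ x + B[i] @ u
    print("x(N) =", x)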

14 Application
Consider a nonlinear system ẋ = f(x, u), x(0) = x_0. Suppose that a nominal trajectory has already been generated, i.e., there exists (x*(t), u*(t)), t ∈ [0, T], such that ẋ* = f(x*, u*), x*(0) = x_0.
To generate a feedback control law that follows the nominal trajectory, we can first linearize the nonlinear system about the nominal trajectory:
δẋ = (∂f(x*(t), u*(t))/∂x) δx + (∂f(x*(t), u*(t))/∂u) δu,  δx(0) = 0
where δx = x - x* and δu = u - u*. This can be further discretized to a linear time varying system, e.g.,
δx((k+1)t_s) = (I + t_s ∂f(x*(kt_s), u*(kt_s))/∂x) δx(kt_s) + t_s (∂f(x*(kt_s), u*(kt_s))/∂u) δu(kt_s),  δx(0) = 0.
The discrete time LQR can now be applied to generate the optimal feedback corrective control δu.
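
In practice the Jacobians along the nominal trajectory are often obtained by finite differences and then discretized with the Euler rule above. A rough sketch (hypothetical pendulum model and nominal trajectory, not from the lecture) that builds the A(k), B(k) matrices to feed into the discrete time LQR sweep:

    import numpy as np

    # Hypothetical continuous-time model xdot = f(x, u): a pendulum with torque input.
    def f(x, u):
        theta, omega = x
        return np.array([omega, -9.81 * np.sin(theta) + u[0]])

    def linearize(f, x, u, eps=1e-6):
        # Finite-difference Jacobians df/dx and df/du at (x, u).
        n, m = len(x), len(u)
        fx, fu = np.zeros((n, n)), np.zeros((n, m))
        f0 = f(x, u)
        for j in range(n):
            dx = np.zeros(n); dx[j] = eps
            fx[:, j] = (f(x + dx, u) - f0) / eps
        for j in range(m):
            du = np.zeros(m); du[j] = eps
            fu[:, j] = (f(x, u + du) - f0) / eps
        return fx, fu

    ts, N = 0.02, 100
    x_nom = [np.array([0.1 * np.sin(0.05 * k), 0.0]) for k in range(N)]   # made-up nominal
    u_nom = [np.array([9.81 * np.sin(x[0])]) for x in x_nom]              # made-up nominal input

    A, B = [], []
    for k in range(N):
        fx, fu = linearize(f, x_nom[k], u_nom[k])
        A.append(np.eye(2) + ts * fx)      # A(k) = I + ts * df/dx
        B.append(ts * fu)                  # B(k) = ts * df/du
    # A(k), B(k) now define the LTV system for the discrete time LQR of the previous slides.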
