Theory in Model Predictive Control: Constraint Satisfaction and Stability


Theory in Model Predictive Control: Constraint Satisfaction and Stability
Colin Jones, Melanie Zeilinger
Automatic Control Laboratory, EPFL

Example: Cessna Citation Aircraft
Linearized continuous-time model (at an altitude of 5000 m and a speed of 128.2 m/sec):

ẋ = [ -1.2822    0     0.98    0 ;
        0        0     1       0 ;
       -5.4293   0    -1.8366  0 ;
     -128.2    128.2   0       0 ] x  +  [ -0.3 ; 0 ; -17 ; 0 ] u

y = [ 0 1 0 0 ;
      0 0 0 1 ] x

Input: elevator angle
States: x1: angle of attack, x2: pitch angle, x3: pitch rate, x4: altitude
Outputs: pitch angle and altitude
Constraints: elevator angle ±0.262 rad (±15°), elevator rate ±0.524 rad/s (±30°/s), pitch angle ±0.349 rad (±20°)
Open-loop response is unstable (open-loop poles: 0, 0, -1.5594 ± 2.29i)
[J. Maciejowski, Predictive Control with Constraints, 2002]
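
A minimal Matlab sketch of this model, useful for reproducing the closed-loop examples that follow. It only restates the values from the slide above; the sampling time Ts = 0.25 s anticipates the discrete-time MPC examples below, and the ss/c2d calls assume the Control System Toolbox is available.

Matlab (sketch):
    % Cessna Citation linearized model (values as transcribed above)
    A = [ -1.2822   0     0.98    0 ;
           0        0     1       0 ;
          -5.4293   0    -1.8366  0 ;
        -128.2    128.2   0       0 ];
    B = [ -0.3 ; 0 ; -17 ; 0 ];
    C = [ 0 1 0 0 ;            % pitch angle
          0 0 0 1 ];           % altitude
    D = zeros(2,1);
    disp(eig(A))               % open-loop poles: 0, 0, -1.5594 +/- 2.29i
    % Discretize with zero-order hold at Ts = 0.25 s for the MPC examples below
    Ts   = 0.25;
    sysd = c2d(ss(A,B,C,D), Ts);
    [Ad, Bd] = deal(sysd.A, sysd.B);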

LQR and Linear MPC with quadratic cost
Quadratic performance measure, linear system dynamics x+ = Ax + Bu, y = Cx + Du, and linear constraints on inputs and states, with Q = Q^T positive semidefinite and R = R^T positive definite.

LQR:  J*(x) = min_{x,u}  Σ_{i=0}^{∞}  x_i^T Q x_i + u_i^T R u_i
      subject to  x_{i+1} = A x_i + B u_i,   x_0 = x

MPC:  J*(x) = min_{x,u}  Σ_{i=0}^{N-1}  x_i^T Q x_i + u_i^T R u_i
      subject to  x_{i+1} = A x_i + B u_i,   C x_i + D u_i ≤ b,   x_0 = x

The MPC problem can be translated into a quadratic program (QP).
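
As a rough sketch of that translation (my own, not the workshop's code): condense the dynamics into prediction matrices and hand the result to a generic QP solver. The names Sx, Su, Qbar, Rbar and the use of quadprog (Optimization Toolbox) are my choices; only input box constraints are shown, and Ad, Bd, Q, R, N, umin, umax, x0 are assumed given.

Matlab (sketch):
    nx = size(Ad,1);  nu = size(Bd,2);
    % Prediction matrices: [x_1; ...; x_N] = Sx*x0 + Su*[u_0; ...; u_{N-1}]
    Sx = zeros(N*nx, nx);
    Su = zeros(N*nx, N*nu);
    for i = 1:N
        Sx((i-1)*nx+1:i*nx, :) = Ad^i;
        for j = 1:i
            Su((i-1)*nx+1:i*nx, (j-1)*nu+1:j*nu) = Ad^(i-j)*Bd;
        end
    end
    % Stacked cost (state penalty here runs over x_1..x_N; x_0 is fixed):
    % sum x_i'Qx_i + u_i'Ru_i  ->  0.5*U'*H*U + f'*U + const
    Qbar = kron(eye(N), Q);
    Rbar = kron(eye(N), R);
    H = 2*(Su'*Qbar*Su + Rbar);
    f = 2*Su'*Qbar*Sx*x0;
    % Input box constraints only (state constraints would add rows A_ineq*U <= b_ineq)
    lb = repmat(umin, N, 1);
    ub = repmat(umax, N, 1);
    U  = quadprog(H, f, [], [], [], [], lb, ub);
    u0 = U(1:nu);   % receding horizon: apply only the first input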

Linear MPC with linear costs
Linear performance measure (e.g. economically motivated cost, errors), linear system dynamics x+ = Ax + Bu, y = Cx + Du, and linear constraints on states and inputs.

Resulting MPC problem:
J*(x) = min_{x,u}  Σ_{i=0}^{N-1}  ||Q x_i||_p + ||R u_i||_p
        subject to  x_{i+1} = A x_i + B u_i,   C x_i + D u_i ≤ b,   x_0 = x

For p = 1 or p = ∞ the optimization problem can be translated into a linear program (LP).
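
For concreteness, the standard epigraph reformulation for the p = 1 case (my notation; the ∞-norm case is analogous with a single scalar bound per stage) introduces elementwise slack vectors ε_i, ν_i:

min_{x,u,ε,ν}  Σ_{i=0}^{N-1}  1^T ε_i + 1^T ν_i
subject to  -ε_i ≤ Q x_i ≤ ε_i,   -ν_i ≤ R u_i ≤ ν_i,
            x_{i+1} = A x_i + B u_i,   C x_i + D u_i ≤ b,   x_0 = x

At the optimum ε_i = |Q x_i| elementwise, so 1^T ε_i = ||Q x_i||_1, and the problem is an LP.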

Example: Cessna Citation Aircraft
LQR with saturation

Linear quadratic regulator with saturated inputs. At time t = 0 the plane is flying with a deviation of 10 m from the desired altitude, i.e. x(0) = [0; 0; 0; 10].
Problem parameters: sampling time Ts = 0.25 sec, Q = I, R = 1.
[Plots: altitude x4 (m), elevator angle u (rad), pitch angle x2 (rad) vs. time (sec); the input is saturated.]
The closed-loop system is unstable: applying LQR control and saturating the controller can lead to instability.

Example: Cessna Citation 500 aircraft
MPC with bound constraints on inputs

MPC controller with input constraints |u_i| ≤ 0.262.
Problem parameters: sampling time Ts = 0.25 sec, Q = I, R = 1, N = 10.
[Plots: altitude x4 (m), elevator angle u (rad), pitch angle x2 (rad) vs. time (sec).]
The MPC controller uses the knowledge that the elevator will saturate, but it does not consider the rate constraints.
The system does not converge to the desired steady-state but to a limit cycle.

Example: Cessna Citation Aircraft
MPC with all input constraints

MPC controller with input constraints |u_i| ≤ 0.262 and rate constraints |u̇_i| ≤ 0.349, approximated by |u_k − u_{k−1}| ≤ 0.349 Ts, u_{−1} = u_prev.
Problem parameters: sampling time Ts = 0.25 sec, Q = I, R = 1, N = 10.
[Plots: altitude x4 (m), elevator angle u (rad), pitch angle x2 (rad) vs. time (sec).]
The MPC controller considers all constraints on the actuator.
The closed-loop system is stable, with efficient use of the control authority.

Example: Cessna Citation Aircraft
Inclusion of state constraints

MPC controller with input constraints |u_i| ≤ 0.262 and rate constraints |u̇_i| ≤ 0.349, approximated by |u_k − u_{k−1}| ≤ 0.349 Ts, u_{−1} = u_prev.
Problem parameters: sampling time Ts = 0.25 sec, Q = I, R = 1, N = 10.
Increased step: at time t = 0 the plane is flying with a deviation of 100 m from the desired altitude, i.e. x(0) = [0; 0; 0; 100].
[Plots: altitude x4 (m), elevator angle u (rad), pitch angle x2 (rad) vs. time (sec); the pitch angle reaches about −0.9 rad, i.e. roughly −50°.]
The pitch angle becomes too large during the transient.

Example: Cessna Citation Aircraft
Inclusion of state constraints

MPC controller with input constraints |u_i| ≤ 0.262 and rate constraints |u̇_i| ≤ 0.349, approximated by |u_k − u_{k−1}| ≤ 0.349 Ts, u_{−1} = u_prev.
Problem parameters: sampling time Ts = 0.25 sec, Q = I, R = 1, N = 10.
Add state constraints for passenger comfort: |x_2| ≤ 0.349.
[Plots: altitude x4 (m), elevator angle u (rad), pitch angle x2 (rad) vs. time (sec); the constraint on the pitch angle is active.]

Example: Cessna Citation Aircraft
Shorter horizon causes loss of stability

MPC controller with input constraints |u_i| ≤ 0.262 and rate constraints |u̇_i| ≤ 0.349, approximated by |u_k − u_{k−1}| ≤ 0.349 Ts, u_{−1} = u_prev.
Problem parameters: sampling time Ts = 0.25 sec, Q = I, R = 1, N = 4.
[Plots: altitude x4 (m), elevator angle u (rad), pitch angle x2 (rad) vs. time (sec).]
A decrease in the prediction horizon causes loss of the stability properties.
This part of the workshop: How can constraint satisfaction and stability in MPC be guaranteed?

Outline
- Motivating example: Cessna Citation Aircraft
- Constraint satisfaction and stability in MPC
  - Main idea
  - Problem setup for the linear quadratic case
  - How to prove constraint satisfaction and stability in MPC
- Implementation
- Theory extensions

Loss of feasibility and stability guarantees
What can go wrong with the standard MPC approach?
- No feasibility guarantee, i.e. the MPC problem may not have a solution
- No stability guarantee, i.e. trajectories may not converge to the origin

Standard MPC problem:
min_{x,u}  Σ_{i=0}^{N-1}  x_i^T Q x_i + u_i^T R u_i
subject to  x_{i+1} = A x_i + B u_i,   C x_i + D u_i ≤ b,   x_0 = x

Definition: Feasible set X_N
X_N := {x | ∃ [u_0, ..., u_{N−1}] such that x_{i+1} = A x_i + B u_i, C x_i + D u_i ≤ b, i = 0, ..., N−1, x_0 = x}

Example: Loss of feasibility
Consider the double integrator
x+ = [1 1; 0 1] x + [0; 1] u
subject to the input constraints −0.5 ≤ u ≤ 0.5 and the state constraints −5 ≤ x ≤ 5 (elementwise).
We choose N = 3, Q = I, R = 1.
[Plot: set of feasible initial states (feasible set) in the (x1, x2) plane, together with the closed-loop trajectory.]
Time step 1: x(0) = [−4; 4], u*(x) = −0.5
Time step 2: x(1) = [0; 3], u*(x) = −0.5
Time step 3: x(2) = [3; 2]: MPC problem infeasible

Example: Loss of stability
Consider the unstable system
x+ = [1 1; 2 1] x + [0.5; 1] u
subject to the input constraints −1 ≤ u ≤ 1 and the state constraints −10 ≤ x ≤ 10.
We choose Q = I and investigate the stability properties for different horizons N and weights R by solving the finite horizon MPC problem in a receding horizon fashion.

Example: Loss of stability
[Plots: initial points in the (x1, x2) plane for R = 1, N = 2; R = 2, N = 3; R = 1, N = 4. * marks initial points leading to trajectories that converge to the origin; ° marks initial points that diverge.]
The parameters have a complex effect on the closed-loop trajectory.

Feasibility and stability in MPC
Main Idea

Main idea: introduce a terminal cost and a terminal constraint to explicitly ensure stability and feasibility:

J*(x) = min_{x,u}  Σ_{i=0}^{N-1}  x_i^T Q x_i + u_i^T R u_i  +  x_N^T P x_N        (terminal cost)
        subject to  x_{i+1} = A x_i + B u_i,   C x_i + D u_i ≤ b,
                    x_N ∈ X_f                                                      (terminal constraint)
                    x_0 = x

How to choose P and X_f?
[Figure: predicted trajectory ending in the terminal set X_f at time N.]

How to choose the terminal set and cost
Main Idea

Problems originate from the use of a short-sighted strategy: the finite horizon causes a deviation between the open-loop prediction and the closed-loop system.
[Plots: closed-loop trajectories vs. open-loop predictions in the (x1, x2) plane; set of feasible initial states for the open-loop prediction vs. set of initial states leading to feasible closed-loop trajectories.]
Ideally we would solve the MPC problem with an infinite horizon, but that is computationally intractable.
Idea: design the finite horizon problem such that it approximates the infinite horizon problem.

How to choose the terminal cost
We can split the infinite horizon problem into two subproblems:

1. Up to time k = N, where the constraints may be active:
   min_{x,u}  Σ_{i=0}^{N-1}  x_i^T Q x_i + u_i^T R u_i
   subject to  x_{i+1} = A x_i + B u_i,   C x_i + D u_i ≤ b,   x_0 = x

2. For k > N, where no constraints are active: the unconstrained LQR problem starting from state x_N,
   min_{x,u}  Σ_{i=N}^{∞}  x_i^T Q x_i + u_i^T R u_i
   subject to  x_{i+1} = A x_i + B u_i

Bound the tail of the infinite horizon cost from N to ∞ using the LQR control law u = K_LQR x:
x_N^T P x_N is the corresponding infinite horizon cost, where P is the solution of the discrete-time algebraic Riccati equation.
Remaining question: choice of N such that constraint satisfaction is guaranteed?
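
A small Matlab check of this construction (my own sketch, Control System Toolbox): dlqr returns the LQR gain and the Riccati solution P, and one can verify numerically the Lyapunov-type identity that makes x_N^T P x_N the exact unconstrained tail cost under the LQR law. Note MATLAB's convention u = −Kx, so the closed loop is A − BK.

Matlab (sketch):
    % Terminal cost from the unconstrained LQR problem (Ad, Bd, Q, R assumed given)
    [K, P] = dlqr(Ad, Bd, Q, R);      % u = -K*x, P solves the discrete-time ARE
    Acl = Ad - Bd*K;                  % closed-loop dynamics under the LQR law
    % For the ARE solution: Acl'*P*Acl - P = -(Q + K'*R*K), i.e.
    % V_f(x+) - V_f(x) = -(x'Qx + u'Ru) along the unconstrained closed loop,
    % so V_f(x_N) = x_N'*P*x_N equals the infinite-horizon tail cost from x_N.
    residual = Acl'*P*Acl - P + (Q + K'*R*K);
    disp(norm(residual))              % should be numerically zero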

How to choose the terminal set
The terminal constraint provides a sufficient condition for constraint satisfaction:

J*(x) = min_{x,u}  Σ_{i=0}^{N-1}  x_i^T Q x_i + u_i^T R u_i  +  x_N^T P x_N      (infinite horizon cost starting from x_N)
        subject to  x_{i+1} = A x_i + B u_i,   C x_i + D u_i ≤ b,
                    x_N ∈ X_f,   x_0 = x

Choose X_f such that all input and state constraints are satisfied for the closed-loop system using the LQR control law for x ∈ X_f.
- The terminal set is often defined by linear or quadratic constraints.
- The bound holds in the terminal set and is used as a terminal cost.
- The terminal set defines the terminal constraint.
In the following: show that this problem setup provides feasibility and stability.

Outline
- Motivating example: Cessna Citation Aircraft
- Constraint satisfaction and stability in MPC
  - Main idea
  - Problem setup for the linear quadratic case
  - How to prove constraint satisfaction and stability in MPC
- Implementation
- Theory extensions

Formalize goals: Definition of feasibility and stability

Goal 1: Feasibility at all times
Definition (Recursive feasibility): The MPC problem is called recursively feasible if, for all feasible initial states, feasibility is guaranteed at every state along the closed-loop trajectory.

Goal 2: Stability
Closed-loop system: x(k+1) = A x(k) + B κ(x(k)) = f_κ(x(k))
Definition (Lyapunov stability): The origin is stable in the sense of Lyapunov if for every ε > 0 there exists δ(ε) > 0 such that ‖x(0)‖ < δ(ε) implies ‖x(k)‖ < ε for all k ∈ N.

Employed concept for the analysis of feasibility: Invariant sets

Definition (Positively invariant set): A set O is positively invariant for the autonomous system x(k+1) = f_κ(x(k)) if
x(k) ∈ O  ⇒  f_κ(x(k)) ∈ O  for all k ∈ N.
The positively invariant set that contains every closed positively invariant set is called the maximal positively invariant set O∞.

[Figure: nested sets — a set that is infeasible after two steps, a set that is infeasible after one step, and the invariant set O (recursively feasible).]

Analysis of Lyapunov stability
Lyapunov stability will be analyzed using Lyapunov's direct method.

Definition (Lyapunov function): A continuous¹ function V : X → R≥0 is a Lyapunov function for x(k+1) = f_κ(x(k)) on X if, for all x ∈ X,
V(x) > 0 for x ≠ 0,   V(0) = 0,   and   V(x(k+1)) − V(x(k)) ≤ 0.

Theorem (e.g. [Vidyasagar, 1993]): If the system admits a Lyapunov function on an invariant set X containing the origin, then the origin is stable in the sense of Lyapunov.

¹ For simplicity it is assumed that V(x) is continuous. This assumption can be relaxed by requiring an additional state-dependent upper bound on V(x), see e.g. [Rawlings & Mayne, 2009].

How to prove feasibility and stability of MPC
Main steps:
- Prove recursive feasibility by showing the existence of a feasible control sequence at all time instants when starting from a feasible initial point.
  NOTE: Recursive feasibility does not imply stability of the closed-loop system.
- Prove stability by showing that the optimal cost function is a Lyapunov function.

We will discuss two main cases in the following:
- Terminal constraint at zero: x_N = 0
- Terminal constraint in some (convex) set: x_N ∈ X_f

For simplicity, we use the more general notation
J*(x) = min_{x,u}  Σ_{i=0}^{N-1}  l(x_i, u_i)  +  V_f(x_N)
with stage cost l and terminal cost V_f.
(In the quadratic case: l(x_i, u_i) = x_i^T Q x_i + u_i^T R u_i,  V_f(x_N) = x_N^T P x_N.)

Stability of MPC
Zero terminal state constraint

Terminal constraint x_N = 0:
- Assume feasibility of x and let [u_0*, u_1*, ..., u_{N−1}*] be the optimal control sequence computed at x.
- At x(k+1) the control sequence [u_1*, u_2*, ..., u_{N−1}*, 0] is feasible (append u = 0 to stay at the origin) ⇒ recursive feasibility.
[Figure: predicted trajectory x_0 = x, x_1, ..., x_4, x_5 = 0.]

Stability of MPC
Zero terminal state constraint

Terminal constraint x_N = 0:
- Assume feasibility of x and let [u_0*, u_1*, ..., u_{N−1}*] be the optimal control sequence computed at x.
- At x+ = x_1* the control sequence [u_1*, u_2*, ..., u_{N−1}*, 0] is feasible (append u = 0 to stay at the origin) ⇒ recursive feasibility.
[Figure: shifted trajectory x_1 = x+, x_2, ..., x_5 = 0.]

Stability of MPC
Zero terminal state constraint

Terminal constraint x_N = 0:
- Assume feasibility of x and let [u_0*, u_1*, ..., u_{N−1}*] be the optimal control sequence computed at x.
- At x+ the control sequence [u_1*, u_2*, ..., u_{N−1}*, 0] is feasible (append u = 0 to stay at the origin) ⇒ recursive feasibility.
- The cost of this feasible (suboptimal) sequence is
  J̃(x+) = J*(x) − l(x_0, u_0*) + l(0, 0) = J*(x) − l(x_0, u_0*)
  (subtract the cost of the first stage, add the cost l(0,0) = 0 for staying at the origin).
- For the optimal solution we obtain
  J*(x+) ≤ J̃(x+)  ⇒  J*(x+) − J*(x) ≤ −l(x_0, u_0*)
  ⇒ J*(x) is a Lyapunov function ⇒ (Lyapunov) stability.
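
Written out with the sum notation used above (a restatement of the slide's argument, nothing new):

J̃(x+) = Σ_{i=1}^{N−1} l(x_i*, u_i*) + l(0, 0) = [ Σ_{i=0}^{N−1} l(x_i*, u_i*) ] − l(x_0, u_0*) = J*(x) − l(x_0, u_0*)

J*(x+) ≤ J̃(x+) = J*(x) − l(x_0, u_0*)   ⇒   J*(x+) − J*(x) ≤ −l(x_0, u_0*) ≤ 0.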

Extension to more general terminal sets
Problem: the terminal constraint x_N = 0 reduces the feasible set.
Goal: use a convex set X_f that contains the origin in its interior.

Example (double integrator):
x+ = [1 1; 0 1] x + [0; 1] u,   −5 ≤ x ≤ 5,   −0.5 ≤ u ≤ 0.5,   N = 5,   Q = I,   R = 1
[Plot: feasible set for x_N ∈ X_f (larger) vs. feasible set for x_N = 0 (smaller), together with the terminal set X_f, in the (x1, x2) plane.]

How can the proof be generalized to the constraint x_N ∈ X_f?

Stability of MPC
Main result

Consider that the following standard assumptions hold:
- The stage cost is a positive definite function, i.e. it is strictly positive and zero only at the origin.
- The terminal set is invariant under the local control law κ_f(x):
  x+ = A x + B κ_f(x) ∈ X_f  for all x ∈ X_f
- All state and input constraints are satisfied in X_f:
  X_f ⊆ X,   κ_f(x) ∈ U  for all x ∈ X_f
- The terminal cost is a continuous Lyapunov function in the terminal set X_f:
  V_f(x+) − V_f(x) ≤ −l(x, κ_f(x))  for all x ∈ X_f

Then the closed-loop system under the MPC control law is stable in the feasible set X_N.

Stability of MPC
Outline of the proof

- Assume feasibility of x and let [u_0*, u_1*, ..., u_{N−1}*] be the optimal control sequence computed at x.
- At x(k+1) the control sequence [u_1*, u_2*, ..., u_{N−1}*, κ_f(x_N*)] is feasible:
  x_N* ∈ X_f  ⇒  κ_f(x_N*) is feasible and x_{N+1} = A x_N* + B κ_f(x_N*) ∈ X_f
  ⇒ the terminal constraint provides recursive feasibility.
[Figure: predicted trajectory x_0 = x, x_1, ..., x_5 with x_5 ∈ X_f.]
- The associated cost of the shifted sequence, the fact that V_f(x) is a Lyapunov function in X_f, and optimality imply that J*(x) is a Lyapunov function ⇒ (Lyapunov) stability.

Stability of MPC
Outline of the proof

- Assume feasibility of x and let [u_0*, u_1*, ..., u_{N−1}*] be the optimal control sequence computed at x.
- At x+ the control sequence [u_1*, u_2*, ..., u_{N−1}*, κ_f(x_N*)] is feasible:
  x_N* ∈ X_f  ⇒  κ_f(x_N*) is feasible and x_{N+1} = A x_N* + B κ_f(x_N*) ∈ X_f
  ⇒ the terminal constraint provides recursive feasibility.
[Figure: shifted trajectory x_1 = x+, x_2, ..., x_6 with x_5, x_6 ∈ X_f.]
- The associated cost of the shifted sequence, the fact that V_f(x) is a Lyapunov function in X_f, and optimality imply that J*(x) is a Lyapunov function ⇒ (Lyapunov) stability.

Stability of MPC
Outline of the proof

- Assume feasibility of x and let [u_0*, u_1*, ..., u_{N−1}*] be the optimal control sequence computed at x.
- At x+ the control sequence [u_1*, u_2*, ..., u_{N−1}*, κ_f(x_N*)] is feasible:
  x_N* ∈ X_f  ⇒  κ_f(x_N*) is feasible and x_{N+1} = A x_N* + B κ_f(x_N*) ∈ X_f
  ⇒ the terminal constraint provides recursive feasibility.
- The cost of this feasible (suboptimal) sequence is
  J̃(x+) = J*(x) − l(x_0, u_0*) + V_f(x_{N+1}) − V_f(x_N*) + l(x_N*, κ_f(x_N*))
- Since V_f(x) is a Lyapunov function in X_f, i.e. V_f(x_{N+1}) − V_f(x_N*) ≤ −l(x_N*, κ_f(x_N*)):
  J̃(x+) ≤ J*(x) − l(x_0, u_0*)
- For the optimal solution we obtain
  J*(x+) ≤ J̃(x+)  ⇒  J*(x+) − J*(x) ≤ −l(x_0, u_0*)
  ⇒ J*(x) is a Lyapunov function ⇒ (Lyapunov) stability.

Stability of MPC
Remarks

- The terminal set X_f and the terminal cost ensure recursive feasibility and stability of the closed-loop system. But: the terminal constraint usually reduces the region of attraction.
- If the open-loop system is stable, X_f can be chosen as the positively invariant set of the system under zero control input, which is feasible.
- Often no terminal set X_f is used; instead N is required to be sufficiently large to ensure recursive feasibility.
  - Applied in practice; makes MPC work without a terminal constraint.
  - Determination of a sufficiently long horizon is difficult.

Proof of asymptotic stability
Definition (Asymptotic stability): The origin of x(k+1) = f_κ(x(k)) is asymptotically stable on X if it is Lyapunov stable and lim_{k→∞} x(k) = 0 for all x(0) ∈ X.

Extension of Lyapunov's direct method (see e.g. [Vidyasagar, 1993]): If the continuous Lyapunov function additionally satisfies
V(x(k+1)) − V(x(k)) < 0  for all x ≠ 0,
then the closed-loop system converges to the origin and is hence asymptotically stable.

Recall: the decrease of the optimal MPC cost was given by
J*(x(k+1)) − J*(x(k)) ≤ −l(x(k), u_0*)
where the stage cost was assumed to be positive definite (zero only at 0).
⇒ The closed-loop system under the MPC control law is asymptotically stable.
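
One intermediate step that is easy to state explicitly (standard argument, my wording): summing the cost decrease along the closed-loop trajectory shows that the stage cost, and hence the state, must converge:

Σ_{k=0}^{K} l(x(k), u_0*(x(k)))  ≤  Σ_{k=0}^{K} [ J*(x(k)) − J*(x(k+1)) ]  =  J*(x(0)) − J*(x(K+1))  ≤  J*(x(0))  <  ∞

so l(x(k), u_0*) → 0 as k → ∞, and positive definiteness of the stage cost then gives x(k) → 0.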

Extension to nonlinear MPC
Consider the nonlinear system dynamics x+ = f(x, u).
Nonlinear MPC problem:
J*(x) = min_{x,u}  Σ_{i=0}^{N-1}  l(x_i, u_i)  +  V_f(x_N)
        subject to  x_{i+1} = f(x_i, u_i),   C x_i + D u_i ≤ b,   x_N ∈ X_f,   x_0 = x
- The presented assumptions on the terminal set and cost did not rely on linearity.
- Lyapunov stability is a general framework to analyze the stability of nonlinear dynamic systems.
⇒ The results can be directly extended to nonlinear systems.

Outline
- Motivating example: Cessna Citation Aircraft
- Constraint satisfaction and stability in MPC
  - Main idea
  - Problem setup for the linear quadratic case
  - How to prove constraint satisfaction and stability in MPC
- Implementation
- Theory extensions

Multi-Parametric Toolbox (MPT)
Modeling, optimal control, analysis, computational geometry.
MPT Toolbox: M. Kvasnica, P. Grieder and M. Baotic
Real-Time Model Predictive Control via Multi-Parametric Programming: Theory and Tools, Michal Kvasnica, ISBN: 3639206444

Implementation using Matlab and MPT
1. Compute the terminal weight P and the LQR control law K by solving the discrete-time Riccati equation.
   Matlab: [K,P] = dlqr(A,B,Q,R)
2. Compute the terminal set X_f, either as
   - an ellipsoidal invariant set of the form X_f := {x | x^T P_E x ≤ 1}
     (can be computed in the form of a Linear Matrix Inequality (LMI) [Boyd et al., LMIs in System and Control Theory, 1994]), or
   - a polytopic invariant set of the form X_f := {x | H x ≤ K}.

Double integrator example: x+ = [1 1; 0 1] x + [0; 1] u,  −5 ≤ x ≤ 5,  −0.5 ≤ u ≤ 0.5,  N = 5, Q = I, R = 1.
[Plot: terminal set X_f.]
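
As a simple alternative to the LMI route, here is a small Matlab sketch (my own, not from the toolbox) that scales the LQR cost ellipsoid {x : x'Px ≤ α} until the state and input constraints hold on it; sublevel sets of the LQR value function are invariant for the unconstrained closed loop, and the support function of the ellipsoid gives the largest admissible α in closed form. It assumes box constraints xmin ≤ x ≤ xmax, umin ≤ u ≤ umax containing the origin in their interior, and MATLAB's u = −Kx convention.

Matlab (sketch):
    % Ellipsoidal terminal set X_f = {x : x'*P*x <= alpha} from the LQR terminal cost
    [K, P] = dlqr(Ad, Bd, Q, R);              % u = -K*x, closed loop Ad - Bd*K
    nx = size(Ad,1);
    % Constraints h'*x <= b that must hold on X_f:
    % state box constraints, plus input box constraints mapped through u = -K*x
    H = [ eye(nx); -eye(nx); -K;   K ];
    b = [ xmax;    -xmin;     umax; -umin ];
    % Support function of {x'*P*x <= alpha} in direction h is sqrt(alpha*h'*inv(P)*h),
    % so the largest admissible alpha is the minimum over rows of b_i^2/(h_i'*inv(P)*h_i).
    Pinv  = inv(P);
    alpha = inf;
    for i = 1:size(H,1)
        alpha = min(alpha, b(i)^2 / (H(i,:)*Pinv*H(i,:)'));
    end
    % X_f = {x : x'*P*x <= alpha}: invariant under u = -K*x (the LQR cost decreases
    % along the unconstrained closed loop), and all constraints hold inside it.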

Implementation using Matlab and MPT:
Polytopic Invariant Terminal Set

Linear system x+ = Ax + Bu, LQR controller u = Kx, input and state constraints
U = {u | umin ≤ u ≤ umax},   X = {x | xmin ≤ x ≤ xmax}

MPT:
X = polytope([I;-I],[xmax; -xmin])   % State constraints
U = polytope([K;-K],[umax; -umin])   % Input constraints

Compute the maximal invariant set that satisfies the constraints,
O∞ = {x | (A + BK)^k x satisfies the state and input constraints for all k ≥ 0}

MPT:
P = X & U
while 1
    Pprev = P;
    P = P & inv(A+B*K)*P;
    if Pprev == P, break; end
end

[Figure: iterations 1, 2 and 3 of the set recursion.]

Implementation using Matlab and MPT:
Polytopic Invariant Terminal Set (continued)

Simpler procedure, MPT:
X = mpt_infset(A+B*K, X&U, 100);

Example: Cessna Citation Aircraft
Revisited

MPC controller with input constraints |u_i| ≤ 0.262 and rate constraints |u̇_i| ≤ 0.349, approximated by |u_k − u_{k−1}| ≤ 0.349 Ts, u_{−1} = u_prev.
Problem parameters: sampling time Ts = 0.25 sec, Q = I, R = 1, N = 4.
[Plots: altitude x4 (m), elevator angle u (rad), pitch angle x2 (rad) vs. time (sec).]
A decrease in the prediction horizon causes loss of the stability properties.

Example: Cessna Citation Aircraft
Terminal cost and constraint provide stability guarantee

MPC controller with input constraints |u_i| ≤ 0.262 and rate constraints |u̇_i| ≤ 0.349, approximated by |u_k − u_{k−1}| ≤ 0.349 Ts, u_{−1} = u_prev.
Problem parameters: sampling time Ts = 0.25 sec, Q = I, R = 1, N = 4.
[Plots: altitude x4 (m), elevator angle u (rad), pitch angle x2 (rad) vs. time (sec).]
Inclusion of the terminal cost and constraint provides stability.

Outline
- Motivating example: Cessna Citation Aircraft
- Constraint satisfaction and stability in MPC
  - Main idea
  - Problem setup for the linear quadratic case
  - How to prove constraint satisfaction and stability in MPC
- Implementation
- Theory extensions
  - Reference tracking
  - Uncertain systems
  - Soft constrained MPC

Extensions
Reference Tracking

Consider the system x+ = Ax + Bu, y = Cx.
Regulation vs. tracking:
- Regulation: reject disturbances around one desirable steady-state.
- Tracking: make the output follow a given reference r.

Steady-state computation: compute the steady-state (x_ss, u_ss) that yields the desired output:
[ I − A   −B ] [ x_ss ]   [ 0 ]
[   C      0 ] [ u_ss ] = [ r ]

Note: many solutions are possible. For a solution to exist for an arbitrary r, the matrix must have full row rank, and the steady-state must be feasible, i.e. x_ss ∈ X, u_ss ∈ U.
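
A minimal Matlab sketch of this target calculation (my own variable names), using a pseudoinverse solve so it also returns a sensible (minimum-norm) answer when several steady states are consistent with r:

Matlab (sketch):
    % Steady-state target calculation for an output reference r (Ad, Bd, C, r given)
    nx = size(Ad,1);  nu = size(Bd,2);  ny = size(C,1);
    M   = [eye(nx) - Ad, -Bd;
           C,             zeros(ny, nu)];
    rhs = [zeros(nx,1); r];
    z   = pinv(M) * rhs;        % minimum-norm solution if several (xss, uss) exist
    xss = z(1:nx);
    uss = z(nx+1:end);
    % xss, uss should additionally be checked against the state/input constraints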

Example: Cessna Citation Aircraft
Reference tracking

Track a given altitude profile.
Problem parameters: sampling time Ts = 0.25 sec, Q = I, R = 1, N = 10.
[Plots: altitude x4 (m), pitch angle x2 (rad), elevator angle u (rad) vs. time (sec).]

Extensions
Reference Tracking

MPC problem for reference tracking: penalize the deviation from the desired steady-state:
J*(x) = min_{x,u}  Σ_{i=0}^{N-1}  (x_i − x_ss)^T Q (x_i − x_ss) + (u_i − u_ss)^T R (u_i − u_ss)  +  (x_N − x_ss)^T P (x_N − x_ss)
        subject to  x_{i+1} = A x_i + B u_i,   C x_i + D u_i ≤ b,   x_0 = x

- The steady-state is computed at each time step.
- Here the reference is assumed constant over the horizon. If the reference trajectory is known, this can be included (preview).
- Same problem structure as the standard MPC problem, with the additional parameter r.
- Steady-state offset: if the model is not accurate, the output will show an offset from the desired reference.
Note: a parameterized terminal set is required for stability.

Extensions
Offset-free reference tracking

Goal: zero steady-state tracking error, i.e. y(k) → r(k) as k → ∞.
Consider the augmented model
x+ = Ax + Bu + B_w w
w+ = w
y  = Cx + C_w w
with w ∈ R^{n_w}.
Assume that the disturbance dimension equals the number of measured outputs, C_w = I, and
det [ A − I   B_w ;  C   C_w ] ≠ 0.
Then the augmented system is observable.

We can correct for the disturbance by choosing x_ss, u_ss such that
[ I − A   −B ] [ x_ss ]   [ B_w ŵ     ]
[   C      0 ] [ u_ss ] = [ r − C_w ŵ ]
where ŵ is an estimate of the disturbance obtained from a state observer.
See [Mäder et al., Automatica 2009] for more details.
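
A rough Matlab sketch of one way to implement this (my own construction, not the code from [Mäder et al., 2009]): build the augmented model, design a steady-state observer gain via duality with dlqr, and recompute the target at every step from the current disturbance estimate. All variable names and the tuning matrices Qe, Re are placeholders.

Matlab (sketch):
    % Offset-free tracking: augmented observer + target recomputation
    % (Ad, Bd, C and a disturbance model Bw, Cw with nw = ny assumed given)
    nx = size(Ad,1);  nu = size(Bd,2);  ny = size(C,1);  nw = size(Bw,2);
    Aaug = [Ad, Bw; zeros(nw,nx), eye(nw)];
    Baug = [Bd; zeros(nw,nu)];
    Caug = [C, Cw];
    % Observer gain via LQR duality (placeholder weights Qe, Re)
    Qe = eye(nx+nw);  Re = eye(ny);
    L  = dlqr(Aaug', Caug', Qe, Re)';   % observer: z+ = Aaug*z + Baug*u + L*(y - Caug*z)
    % At each sampling instant, with the current estimate w_hat = z(nx+1:end),
    % recompute the steady-state target consistent with the disturbance estimate:
    M    = [eye(nx)-Ad, -Bd; C, zeros(ny,nu)];
    rhs  = [Bw*w_hat; r - Cw*w_hat];
    z_ss = pinv(M)*rhs;
    xss  = z_ss(1:nx);  uss = z_ss(nx+1:end);
    % The MPC cost then penalizes (x - xss) and (u - uss) as in the tracking problem above.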

Extensions
Uncertain systems

In practice, the nominal system model will not be accurate due to model mismatch or external disturbances acting on the system.
Consider the uncertain system x+ = Ax + Bu + B_w w, where w ∈ W is a bounded disturbance.
Stability: in the presence of uncertainties, asymptotic stability of the origin can often not be achieved. Instead, it can be shown that the trajectories converge to a neighborhood of the origin. Stability can be analyzed in the framework of input-to-state stability.
Idea: the effect of the uncertainty on the system is bounded and depends on the size of the uncertainty.
Feasibility?

Extensions
Approaches for uncertain systems

For x+ = Ax + Bu + B_w w, the trajectories differ from what is expected using the nominal system model; uncertainties can lead to infeasibility of the MPC problem.
[Figure: nominal prediction vs. perturbed trajectories and the terminal set.]

Soft constrained MPC:
- Idea: tolerate temporary violation of state constraints by constraint relaxation.
- Feasibility of the optimization problem despite disturbances.

Robust MPC:
- Idea: design the MPC problem for the worst-case disturbance by constraint tightening.
- Constraint satisfaction and stability of the uncertain system.

Extensions
Soft constrained MPC

Idea: input constraints are hard (e.g. actuator limitations); state constraints may (temporarily) be violated.
- Introduce soft state and terminal constraints by means of slack variables.
- Introduce penalties on the slack variables in the cost.

J*(x) = min_{x,u}  Σ_{i=0}^{N-1}  l(x_i, u_i) + l_ε(ε_i)  +  V_f(x_N) + l_ε(ε_N)
        subject to  x_{i+1} = A x_i + B u_i,
                    C x_i + D u_i ≤ b + ε_i,
                    G_N x_N ≤ f_N + ε_N,
                    x_0 = x
where the ε_i are slack variables and l_ε penalizes the amount of constraint violation.

The standard soft constrained MPC setup does not provide stability guarantees. A new soft constrained MPC setup with (robust) stability properties was developed in [Zeilinger et al., CDC 2010].
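
A common concrete choice for the slack penalty (standard in the soft-constraint literature, not specific to the setup cited above) combines a linear and a quadratic term,

l_ε(ε) = ρ · 1^T ε + ε^T S ε,   with ε ≥ 0, S positive semidefinite, ρ > 0,

where, for ρ chosen larger than the relevant dual variables of the hard-constrained problem, the penalty is exact: the soft-constrained solution coincides with the hard-constrained one whenever the latter is feasible.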

Further extensions
- Reference tracking with recursive feasibility guarantees
  [Gilbert et al., 1994; Bemporad, 1998; Gilbert & Kolmanovsky, 1999, 2002; Limon et al., 2008; Borrelli et al., 2009; Ferramosca et al., 2009; ...]
- Move blocking: reduce the computational complexity by fixing the inputs or their derivatives to be constant over several time steps
  [Li & Xi, 2007, 2009; Cagienard et al., 2007; Gondhalekar & Imura, 2010; ...]
- Stochastic MPC: consider uncertainties that are unbounded and/or follow a certain distribution
  - Probabilistic constraints, expected value constraints, expected value cost
  [Lee et al., 1998; Couchman et al., 2005; Cannon et al., 2007; Grancharova et al., 2007; Hokayem et al., 2009; Cinquemani et al., 2011; ...]