IEOR 265 Lecture 14 (Robust) Linear Tube MPC


1 LTI System with Uncertainty

Suppose we have an LTI system in discrete time with disturbance:

    x_{n+1} = A x_n + B u_n + d_n,

where d_n ∈ W for a bounded polytope W. This disturbance can represent, for instance, modeling error. As an example, suppose that the true system model is given by

    x_{n+1} = A x_n + B u_n + g(x_n, u_n),

where g(·,·) is a nonlinear function that obeys g(x_n, u_n) ∈ W for all x_n ∈ X and u_n ∈ U. Note that this is not restrictive, because it can describe a general nonlinear system x_{n+1} = f(x_n, u_n) by choosing g(x_n, u_n) = f(x_n, u_n) - A x_n - B u_n. An important question for the LTI system with uncertainty is whether a constant state-feedback controller u_n = K x_n ensures that all state and input constraints are satisfied for all time.

2 Polytope Operations

To be able to analyze and do design, we need to define operations on polytopes. Let U, V, W be polytopes. The Minkowski sum is

    U ⊕ V = {u + v : u ∈ U, v ∈ V},

and the Pontryagin set difference is

    U ⊖ V = {u : {u} ⊕ V ⊆ U}.

Note that this set difference is not symmetric, meaning that in general U ⊖ V ≠ V ⊖ U; the Minkowski sum, however, is symmetric by its definition. Some useful properties include:

    (U ⊖ V) ⊕ V ⊆ U;
    (U ⊖ (V ⊕ W)) ⊕ W ⊆ U ⊖ V;
    (U ⊕ V) ⊖ W ⊇ U ⊕ (V ⊖ W);
    T(U ⊖ V) ⊆ TU ⊖ TV, where T is a linear transformation.
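These two operations are easy to compute for axis-aligned boxes, a special case of polytopes where each coordinate is an independent interval. The following is a minimal sketch, not a general polytope implementation; the function names and numbers are illustrative, not from the lecture.

```python
# Minkowski sum and Pontryagin difference for axis-aligned boxes.
# A box is a list of (lo, hi) intervals, one per coordinate.

def minkowski_sum(U, V):
    # U ⊕ V = {u + v : u in U, v in V}; for boxes, add interval endpoints.
    return [(ul + vl, uh + vh) for (ul, uh), (vl, vh) in zip(U, V)]

def pontryagin_diff(U, V):
    # U ⊖ V = {u : {u} ⊕ V ⊆ U}; for boxes, shrink U by the extent of V.
    # Returns None if the difference is empty.
    D = [(ul - vl, uh - vh) for (ul, uh), (vl, vh) in zip(U, V)]
    return None if any(lo > hi for lo, hi in D) else D

U = [(-2.0, 2.0), (-2.0, 2.0)]   # e.g. a state constraint box
V = [(-0.5, 0.5), (-0.5, 0.5)]   # e.g. a disturbance box

S = minkowski_sum(U, V)
D = pontryagin_diff(U, V)
print(S)  # [(-2.5, 2.5), (-2.5, 2.5)]
print(D)  # [(-1.5, 1.5), (-1.5, 1.5)]
# The property (U ⊖ V) ⊕ V ⊆ U holds with equality for boxes:
print(minkowski_sum(D, V) == U)  # True
```

Note that the non-symmetry of ⊖ is visible here: shrinking U by V and then growing back recovers U for boxes, but V ⊖ U is empty since V is smaller than U.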

3 Maximum Output Admissible Disturbance Invariant Set

We return to the question: Does there exist a set Ω such that if x_0 ∈ Ω, then the controller u_n = K x_n ensures that both state and input constraints are satisfied for the LTI system with bounded disturbance? In mathematical terms, we would like this set Ω to achieve (a) constraint satisfaction

    Ω ⊆ {x : x ∈ X, Kx ∈ U},

and (b) disturbance invariance

    (A + BK)Ω ⊕ W ⊆ Ω.

It turns out that we can compute such a set under reasonable conditions on the disturbance W and the constraint sets X and U. However, there is no guarantee that this set is nonempty: it may be the case that Ω = ∅. The algorithm for computing this set is more complicated than in the situation without disturbance, and so we will not discuss it in this class.

4 Optimization Formulation

Recall the LTI system with disturbance: x_{n+1} = A x_n + B u_n + d_n, where d_n ∈ W. The issue with using a constant state-feedback controller u_n = K x_n is that the set Ω for which this controller is admissible and ensures constraint satisfaction subject to all possible sequences of disturbances (i.e., d_0, d_1, ...) can be quite small. In this sense, using this controller is conservative, and the relevant question is whether we can design a nonlinear controller u_n = l(x_n) for the LTI system with disturbance such that the constraints on states x_n ∈ X and inputs u_n ∈ U are satisfied. One idea is to use an MPC framework with robustness explicitly included. (Robust) linear tube MPC is one such approach. The idea underlying this approach is to subtract out the effect of the disturbance from our state and input constraints. One formulation is

    V_n(x_n) = min ψ_n(x̄_n, ..., x̄_{n+N}, ǔ_n, ..., ǔ_{n+N-1})
    s.t. x̄_{n+1+k} = A x̄_{n+k} + B ǔ_{n+k},  k = 0, ..., N-1
         x̄_n = x_n
         x̄_{n+k} ∈ X ⊖ R_k,  k = 1, ..., N-1
         ǔ_{n+k} = K x̄_{n+k} + c_k,  k = 0, ..., N-1
         ǔ_{n+k} ∈ U ⊖ K R_k,  k = 0, ..., N-1
         x̄_{n+N} ∈ Ω ⊖ R_N

where X, U, Ω are polytopes, R_0 = {0}, R_k = ⊕_{j=0}^{k-1} (A + BK)^j W = W ⊕ (A + BK)W ⊕ ··· ⊕ (A + BK)^{k-1}W is a disturbance tube, and N > 0 is the horizon.
Note that we do not impose the (tightened) state constraint on the initial condition x_n, and this reduces the conservativeness of the MPC formulation. We will refer to the set Ω as a terminal set. The function ψ_n is interpreted as a cost function on the states and inputs of the system.
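For a scalar system the tube sets R_k and the tightened constraints X ⊖ R_k reduce to intervals and can be computed in closed form, which makes the constraint-tightening mechanism easy to see. A minimal sketch follows; all numerical values (A, B, K, the disturbance bound w, the state bound x_max) are illustrative assumptions, not from the lecture.

```python
# Disturbance tube radii r_k (so R_k = [-r_k, r_k]) and tightened state
# constraints X ⊖ R_k for a scalar system x_{n+1} = A x_n + B u_n + d_n.

A, B = 1.0, 1.0
K = -0.5                  # any K with |A + B*K| < 1
phi = A + B * K           # scalar closed-loop dynamics, here 0.5
w = 0.2                   # disturbance bound: W = [-w, w]
x_max = 2.0               # state constraint: X = [-x_max, x_max]
N = 5

# R_k is the Minkowski sum of (A+BK)^j W for j = 0..k-1; for intervals
# this is [-r_k, r_k] with r_k = w * sum_j |phi|^j, and R_0 = {0}.
r = [0.0]
for k in range(1, N + 1):
    r.append(r[-1] + w * abs(phi) ** (k - 1))

# Tightened constraint X ⊖ R_k = [-(x_max - r_k), x_max - r_k].
tightened = [x_max - rk for rk in r]
print([round(v, 4) for v in r])          # tube radii grow with k
print([round(v, 4) for v in tightened])  # tightened bounds shrink with k
```

Since |phi| < 1, the radii r_k converge geometrically to w/(1 - |phi|), so the tightening stays bounded and the terminal constraint Ω ⊖ R_N can remain nonempty even for long horizons.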

5 Recursive Properties

Suppose that (A, B) is stabilizable and K is a matrix such that (A + BK) is stable. For this given system and feedback controller, suppose we have a maximal output admissible disturbance invariant set Ω (meaning that this set has constraint satisfaction Ω ⊆ {x : x ∈ X, Kx ∈ U} and disturbance invariance (A + BK)Ω ⊕ W ⊆ Ω). Next, note that conceptually our decision variables are the c_k, since the x̄_k, ǔ_k are then uniquely determined by the initial condition and the equality constraints. As a result, we will talk about solutions only in terms of the variables c_k. In particular, if M_n = {c_n, ..., c_{n+N-1}} is feasible for the optimization defining linear tube MPC with initial condition x_n, then the system that applies the control value u_n = K x_n + c_n[M_n] satisfies:

    Recursive Feasibility: there exists a feasible M_{n+1} for x_{n+1};
    Recursive Constraint Satisfaction: x_{n+1} ∈ X.

We will give a sketch of the proof of these two results. Choose the candidate M_{n+1} = {c_1[M_n], ..., c_{N-1}[M_n], 0}. Some algebra gives

    x̄_{n+1+k}[M_{n+1}] = x̄_{n+1+k}[M_n] + (A + BK)^k d_n ∈ x̄_{n+1+k}[M_n] ⊕ (A + BK)^k W.

But because M_n is feasible, it must be that x̄_{n+1+k}[M_n] ∈ X ⊖ R_{k+1}. Combining these gives

    x̄_{n+1+k}[M_{n+1}] ∈ x̄_{n+1+k}[M_n] ⊕ (A + BK)^k W ⊆ (X ⊖ (R_k ⊕ (A + BK)^k W)) ⊕ (A + BK)^k W ⊆ X ⊖ R_k.

This shows feasibility for the states; a similar argument can be applied to the inputs. The final state needs to be considered separately. The argument above gives x̄_{n+1+(N-1)}[M_{n+1}] ∈ Ω ⊖ R_{N-1}. Observe that the constraint satisfaction property of Ω leads to

    ǔ_{n+1+(N-1)}[M_{n+1}] = K x̄_{n+1+(N-1)}[M_{n+1}] ∈ KΩ ⊖ K R_{N-1} ⊆ U ⊖ K R_{N-1}.

Lastly, note that the disturbance invariance property of Ω gives

    x̄_{n+1+N}[M_{n+1}] ∈ (A + BK)Ω ⊖ (A + BK)R_{N-1} ⊆ (Ω ⊖ W) ⊖ (A + BK)R_{N-1} = Ω ⊖ R_N.

Since x_{n+1} = x̄_{n+1}[M_n] + d_n ∈ (X ⊖ W) ⊕ W ⊆ X, the constraint satisfaction result follows.
There is an immediate corollary of this theorem: If there exists a feasible M_0 for x_0, then the closed-loop system (a) is Lyapunov stable, (b) satisfies all state and input constraints for all time, and (c) remains feasible for all time.
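The disturbance invariance property at the heart of the proof can be checked concretely in the scalar case: for x_{n+1} = (A + BK) x_n + d_n with |A + BK| = |phi| < 1 and W = [-w, w], the interval Ω = [-r, r] with r = w / (1 - |phi|) satisfies phi·Ω ⊕ W = Ω, so a trajectory that starts in Ω never leaves it. A minimal simulation sketch, with illustrative numbers only:

```python
import random

# Check disturbance invariance of Ω = [-r, r] for a scalar closed loop
# x_{n+1} = phi * x_n + d_n with |d_n| <= w and |phi| < 1.

phi = 0.5                 # A + B*K, assumed stable
w = 0.2                   # disturbance bound
r = w / (1 - abs(phi))    # r = 0.4; invariance: |phi|*r + w = r

random.seed(0)
x = r                     # start on the boundary of Ω (worst case)
violations = 0
for _ in range(1000):
    d = random.uniform(-w, w)   # any bounded disturbance sequence works
    x = phi * x + d
    if abs(x) > r + 1e-12:
        violations += 1
print(violations)  # 0: the trajectory stays in Ω for all time
```

This is exactly the role Ω plays as terminal set: once the nominal state is steered into (a tightened copy of) Ω, the feedback u = Kx keeps the true state inside Ω despite the disturbance.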

Lastly, define X_F = {x_n : there exists a feasible M_n}. It must be that Ω ⊆ X_F, and in general X_F will be larger. There is in fact a tradeoff between the value of N and the size of the set X_F. Feasibility requires being able to steer the system into Ω at time N. The larger N is, the more initial conditions can be steered to Ω, and so X_F will be larger. However, a larger N means more variables and constraints in our optimization problem, and so larger values of N require more computation.

6 Design Variables in Linear Tube MPC

Consider the linear tube MPC formulation:

    V_n(x_n) = min ψ_n(x̄_n, ..., x̄_{n+N}, ǔ_n, ..., ǔ_{n+N-1})
    s.t. x̄_{n+1+k} = A x̄_{n+k} + B ǔ_{n+k},  k = 0, ..., N-1
         x̄_n = x_n
         x̄_{n+k} ∈ X ⊖ R_k,  k = 1, ..., N-1
         ǔ_{n+k} = K x̄_{n+k} + c_k,  k = 0, ..., N-1
         ǔ_{n+k} ∈ U ⊖ K R_k,  k = 0, ..., N-1
         x̄_{n+N} ∈ Ω ⊖ R_N

where X, U, Ω are polytopes, R_0 = {0}, R_k = ⊕_{j=0}^{k-1} (A + BK)^j W is a disturbance tube, and N > 0 is the horizon. Though it looks complex, there are only a few design choices. In particular, we engineer the following components:

    x̄_{k+1} = A x̄_k + B ǔ_k: We must construct a system model from prior knowledge about the system and experimental data.
    X: Constraints on the states come from information about the system.
    U: Constraints on the inputs come from information about the system.
    W: Disturbance bounds come from information about the system.
    Q > 0, R > 0: These are tuning parameters that manage the tradeoff between control effort (higher R penalizes strong control effort) and state deviation (higher Q more strongly penalizes state deviation from the equilibrium point).
    N: The horizon length manages the tradeoff between controller performance and computational effort (higher N requires more computation, but the resulting controller has better performance).

The other elements are algorithmically computed based on these design choices:

    K: This is computed by solving an infinite horizon LQR problem.

    P > 0: This is computed by solving an infinite horizon LQR problem.
    Ω: This set is computed using the K that solves the infinite horizon LQR problem, in conjunction with an existing algorithm.
    R_k: These sets are computed using standard polytope algorithms for linear transformations and Minkowski sums of polytopes.

It is worth mentioning that there are more exotic MPC schemes that introduce more design parameters in exchange for better performance.
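The computation of K and P from the infinite horizon LQR problem can be sketched in the scalar case by iterating the discrete-time Riccati recursion to a fixed point; in practice one would use a library routine for the matrix case. The numbers A, B, Q, R below are illustrative assumptions, not from the lecture.

```python
# Scalar infinite-horizon discrete-time LQR: iterate the Riccati
# recursion P <- Q + A P A - (A P B)^2 / (R + B P B) to a fixed point,
# then form the feedback gain K for u_n = K x_n.

A, B, Q, R = 1.2, 1.0, 1.0, 1.0   # unstable open loop (A > 1)

P = Q
for _ in range(1000):
    P_next = Q + A * P * A - (A * P * B) ** 2 / (R + B * P * B)
    if abs(P_next - P) < 1e-12:
        break
    P = P_next

K = -(A * P * B) / (R + B * P * B)
print(round(P, 6), round(K, 6))
print(abs(A + B * K) < 1)  # True: the closed loop A + BK is stable
```

This K is then a natural choice for the tube controller ǔ = K x̄ + c, since the same LQR solution also supplies the terminal cost weight P > 0.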