EE C128 / ME C134 Feedback Control Systems


EE C128 / ME C134 Feedback Control Systems
Lecture Additional Material: Introduction to Model Predictive Control

Maximilian Balandat
Department of Electrical Engineering & Computer Science
University of California, Berkeley
November 21, 2013

Lecture abstract

Topics covered in this presentation:
- Constrained optimal control for linear discrete-time systems
- Basic formulation of a model predictive control regulation problem
- How to ensure feasibility and stability
- Examples

We will not have time to cover:
- Theory of convex optimization
- Background on set-theoretic methods in control
- Proofs (stability, feasibility)
- Details of real-time implementation
- Extensions (tracking problems, effect of uncertainties, etc.)

Chapter outline

1 Introduction to Model Predictive Control
  1 Introduction
  2 Discrete Time Constrained Optimal Control
  3 The Basic MPC Formulation
  4 Ensuring Feasibility and Stability
  5 Example: Constrained Double Integrator
  6 Extensions and Generalizations

1 Introduction

What is Model Predictive Control?

Properties of Model Predictive Control (MPC):
- model-based prediction of future system behavior
- built-in constraint satisfaction (hard constraints)
- general formulation for MIMO systems

Applications of Model Predictive Control:
- traditionally: the process industry (since the 1980s)
- high-performance control applications
- automotive industry
- building automation and control
- still dwarfed by PID in industry, but growing

PID vs. MPC

[Cartoon courtesy of OpenI: http://openi.nlm.nih.gov]

Example: Audi Smart Engine

[Block diagram: Desired Speed -> ACC -> Vehicle -> Actual Speed]

Design of an MPC controller regulating the vehicle speed (through an Automatic Cruise Control) in order to reach the destination in the most fuel-efficient way.
- Prediction: max and min speed of traffic, road grade
- Constraints: max and min speed (of traffic and of the vehicle)

Example courtesy of Prof. Borrelli

Receding Horizon Control

- Given x(t), optimize over a finite horizon of length N at time t
- Only apply the first optimal move u(t) of the predicted control sequence
- Shift the horizon by one
- Repeat the whole optimization at time t + 1

Since each optimization uses the current measurement, this yields feedback!

Important Issues in Model Predictive Control

- Feasibility: the optimization problem may become infeasible at some future time step
- Stability: closed-loop stability is not guaranteed
- Performance: what is the effect of the finite prediction horizon?
    \min_u \sum_{k=t}^{t+N} l(x_k, u_k)  (solved repeatedly)   vs.   \min_u \sum_{k=t}^{\infty} l(x_k, u_k)
- Real-Time Implementation: reliable, fast computation is needed

Note: the above issues arise even when assuming a perfect system model and no disturbances.

2 Discrete Time Constrained Optimal Control

Discrete Time vs. Continuous Time Optimal Control

Continuous time LQR:
  J_c = \min_u \int_0^\infty x(t)^T Q x(t) + u(t)^T R u(t) \, dt
  s.t.  \dot{x}(t) = A x(t) + B u(t)

Discrete time LQR:
  J_d = \min_u \sum_{k=0}^\infty x_k^T Q x_k + u_k^T R u_k
  s.t.  x_{k+1} = A_d x_k + B_d u_k

Why discrete time?
- software implementations of controllers are inherently discrete
- one needs to discretize sooner or later anyway
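The finite-horizon discrete-time LQR problem is solved by a backward Riccati recursion; as a minimal NumPy sketch (not part of the original slides; the matrices are those of the double integrator example used later):

```python
import numpy as np

# Double integrator from the example below.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[10.0]])

def finite_horizon_lqr(A, B, Q, R, N):
    """Backward Riccati recursion for the N-step discrete-time LQR.

    Returns cost-to-go matrices P[0..N] and gains K[0..N-1] for u_k = -K_k x_k.
    """
    P = [None] * (N + 1)
    K = [None] * N
    P[N] = Q  # terminal cost weight (here simply Q)
    for k in range(N - 1, -1, -1):
        S = R + B.T @ P[k + 1] @ B
        K[k] = np.linalg.solve(S, B.T @ P[k + 1] @ A)
        P[k] = Q + A.T @ P[k + 1] @ (A - B @ K[k])
    return P, K

P, K = finite_horizon_lqr(A, B, Q, R, N=50)
```

For a long enough horizon, the first-step gain K[0] approaches the infinite-horizon LQR gain and stabilizes the system.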

How to Deal With Constraints?

In practice, more or less all systems are constrained:
- input saturation (actuation limits), e.g. the limited motor voltage of the cart system in the lab
- state constraints (physical limits, safety limits), e.g. the finite track length of the pendulum system

Constraints:
- u(t) \in U for all t, with input constraint set U \subseteq R^p
- x(t) \in X for all t, with state constraint set X \subseteq R^n

Often assumed in MPC: U and X are polyhedra (finite intersections of half-spaces). This makes the constraints linear:
  x(t) \in X  <=>  H_x x(t) \le E_x    and    u(t) \in U  <=>  H_u u(t) \le E_u

Issue: no systematic treatment of constraints by conventional methods (PID, LQR, ...)
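For the box constraints used in the examples below (|x_i| <= 5, |u| <= 1), the half-space form H x <= E can be written down directly; a small sketch:

```python
import numpy as np

# Box constraints |x_i| <= 5 and |u| <= 1 as H_x x <= E_x, H_u u <= E_u.
H_x = np.array([[ 1.0,  0.0],
                [-1.0,  0.0],
                [ 0.0,  1.0],
                [ 0.0, -1.0]])
E_x = np.full(4, 5.0)
H_u = np.array([[1.0], [-1.0]])
E_u = np.ones(2)

def in_X(x):
    # x is in the polyhedron X iff every half-space inequality holds.
    return np.all(H_x @ x <= E_x)

def in_U(u):
    return np.all(H_u @ u <= E_u)
```

Each row of (H, E) encodes one half-space; membership is a handful of inner products, which is why polyhedral constraint sets fit so naturally into a QP.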

The Naive Approach: Just Truncate!

Stage problem at time t, given state x(t):
  \min_{u_0, ..., u_{N-1}} \sum_{k=0}^{N-1} x_k^T Q x_k + u_k^T R u_k
  s.t.  x_{k+1} = A x_k + B u_k,   k = 0, ..., N-1
        u_k \in U,  x_{k+1} \in X,   k = 0, ..., N-1
        x_0 = x(t)

If X and U are polyhedra, this is a Quadratic Program: it can be solved fast, efficiently and reliably using modern solvers.

Basic MPC algorithm:
- at time t, measure the state x(t) and solve the stage problem
- apply the control u(t) = u_0^*
- go to time t + 1 and repeat
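The receding-horizon loop itself is only a few lines. The sketch below uses a crude stand-in for the stage QP (the unconstrained finite-horizon LQR move, saturated to U) since a real implementation would call a QP solver; the loop structure — measure, solve, apply only the first move, repeat — is the point:

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[10.0]])
N = 10

def first_step_gain(A, B, Q, R, N):
    # First-step feedback gain of the unconstrained N-step LQR
    # (backward Riccati recursion).
    P = Q
    K = None
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

def naive_mpc_step(x):
    # Stand-in for solving the stage QP: unconstrained optimum, then
    # saturation to U = {|u| <= 1}. A real controller would solve the QP.
    return np.clip(-first_step_gain(A, B, Q, R, N) @ x, -1.0, 1.0)

# Receding-horizon loop.
x = np.array([-3.0, 0.0])
xs, us = [x.copy()], []
for t in range(30):
    u = naive_mpc_step(x)
    x = A @ x + B @ u
    xs.append(x.copy())
    us.append(u.copy())
```

Note that clipping the unconstrained solution is not equivalent to solving the constrained QP; it is used here only so the loop is runnable without an external solver.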

The Naive Approach: Feasibility Issues

Double Integrator Example:
  A = [1 1; 0 1],   B = [0; 1],   Q = [1 0; 0 1],   R = 10
  X = {x \in R^2 : |x_i| \le 5, i = 1, 2},   U = {u \in R : |u| \le 1}

[Figure: state trajectories (x_1, x_2) and control inputs u(t) for LQR and "naive" MPC with N = 3, 4, 10]

Observations:
- the LQR controller violates the input constraints
- naive MPC with prediction horizon N = 3 is infeasible at t = 2
- naive MPC with prediction horizon N = 4 stays feasible throughout, but has poor performance
- the controller needs to look ahead far enough for good performance

3 The Basic MPC Formulation

The Basic MPC Formulation

Stage problem at time t, given state x(t):
  \min_{u_0, ..., u_{N-1}} \sum_{k=0}^{N-1} (x_k^T Q x_k + u_k^T R u_k) + x_N^T P x_N
  s.t.  x_{k+1} = A x_k + B u_k,   k = 0, ..., N-1
        u_k \in U,  x_{k+1} \in X,   k = 0, ..., N-1
        x_N \in X_f
        x_0 = x(t)

- the terminal weighting matrix P is used to achieve stability
- the terminal constraint set X_f is used to achieve persistent feasibility

Terminal Cost and Terminal Constraints

Obvious: if X_f = {0} and a solution to the stage problem exists for the initial state x(0), then the controller is both stable and persistently feasible. However:
- setting X_f = {0} may drastically reduce the set of states for which the stage optimization problem is feasible
- if a solution does exist, it may be very costly

Definition (Region of Attraction)
Given a controller u_k = f(x_k), its region of attraction X_0 is
  X_0 = { x_0 \in X : \lim_{k \to \infty} x_k = 0,  x_k \in X,  f(x_k) \in U  \forall k }
where x_{j+1} = A x_j + B f(x_j) for each j.

Goal: find a tradeoff between
- control performance (minimize cost)
- size of the region of attraction (can control a larger set of states)
- computational complexity (grows with the prediction horizon)

Region of Attraction

Recall the double integrator example:
  A = [1 1; 0 1],   B = [0; 1],   Q = [1 0; 0 1],   R = 10
  X = {x \in R^2 : |x_i| \le 5, i = 1, 2},   U = {u \in R : |u| \le 1}

4 Ensuring Feasibility and Stability

Some Set-Theoretic Notions

Definition (Invariant Set)
Given an autonomous system x_{k+1} = f(x_k), a set O \subseteq X is (positively) invariant if
  x_0 \in O  =>  x_k \in O for all k \ge 0

Definition (Maximal Invariant Set)
The set O_\infty is the maximal invariant set if 0 \in O_\infty, O_\infty is invariant, and O_\infty contains all invariant sets that contain the origin.

Definition (Control Invariant Set)
A set C \subseteq X is said to be control invariant if
  x_0 \in C  =>  \exists (u_0, u_1, ...) \in U  s.t.  x_k \in C for all k \ge 0
where x_{j+1} = A x_j + B u_j for all j.

Ensuring Feasibility

Task: we must make sure that, at each step, there exists a feasible control from the predicted terminal state x_N.
- this can be achieved if X_f = C for some control invariant set C
- idea: use the maximal invariant set of a particular controller

Definition (Maximal LQR Invariant Set)
The maximal LQR invariant set O_\infty^{LQR} is the maximal invariant set of the system x_{k+1} = (A - BK) x_k subject to the constraint u_k = -K x_k \in U, where K is the LQR gain.

Terminal constraint:
- if X_f = O_\infty^{LQR}, then the controller is guaranteed to be persistently feasible (the optimization can always choose the LQR control, which is feasible by definition)
- determining O_\infty^{LQR} is computationally feasible in some cases (but problematic in high dimensions with complicated constraint sets)
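A full computation of O_\infty^{LQR} requires polyhedral set recursions (e.g. with a toolbox such as MPT); as a modest numerical stand-in, one can at least check membership of individual points by simulating the LQR closed loop and testing the state and input constraints along the trajectory:

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[10.0]])

# Infinite-horizon LQR gain via Riccati iteration (fixed point of the DARE).
P = Q
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

def stays_feasible(x0, steps=50):
    """Numerical membership check: does the LQR closed loop started at x0
    satisfy |x_i| <= 5 and |K x| <= 1 along the whole trajectory?"""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        if np.any(np.abs(x) > 5.0) or np.abs(K @ x).max() > 1.0:
            return False
        x = (A - B @ K) @ x
    return True
```

This only gives evidence for finitely many steps and points; the exact set is obtained by the polyhedral fixed-point computation mentioned on the slide.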

Ensuring Stability

Basic idea: the proof is based on a Lyapunov argument
- a standard method in nonlinear control
- based on showing that a Lyapunov function, an energy-like function of the state, is decreasing along closed-loop system trajectories
- details are outside the scope of this class (see EE222, ME237)

How to ensure this: the terminal weighting matrix P is commonly chosen as the solution of the Discrete-Time Algebraic Riccati Equation (DARE):
  P = A^T P A + Q - A^T P B (B^T P B + R)^{-1} B^T P A

Then x_k^T P x_k is the infinite-horizon cost of regulating the system state to zero starting from x_k, using the (unconstrained) LQR controller.

Note: if x_N \in O_\infty^{LQR}, then starting from x_N the LQR control is feasible (and hence optimal).
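For the double integrator, the DARE can be solved by simply iterating the Riccati recursion to a fixed point (a sketch; libraries such as SciPy provide solve_discrete_are for this, but the iteration below needs only NumPy):

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[10.0]])

def dare_residual(P):
    # P minus the right-hand side of the DARE; zero at a solution.
    gain_term = A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return P - (A.T @ P @ A + Q - gain_term)

# Fixed-point iteration of the Riccati recursion, starting from P = Q.
P = Q.copy()
for _ in range(1000):
    P = A.T @ P @ A + Q - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
```

Convergence is guaranteed here because (A, B) is stabilizable and Q is positive definite; the residual can be checked to confirm P is indeed a fixed point.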

Main Theoretical Result

Theorem (Stability and Persistent Feasibility for Basic MPC)
Suppose that
1. Q = Q^T \succeq 0, R = R^T \succ 0, and P \succ 0
2. X, X_f and U are closed sets containing the origin in their interior
3. X_f is control invariant, X_f \subseteq X
4. \min_{u \in U, x^+ \in X_f} [ -x^T P x + x^T Q x + u^T R u + (x^+)^T P x^+ ] \le 0 for all x \in X_f, where x^+ = A x + B u

Then
1. the state of the closed-loop system converges to the origin, i.e. \lim_{k \to \infty} x(k) = 0
2. the origin of the closed-loop system is asymptotically stable with domain of attraction X_0
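For the unconstrained LQR choice u = -Kx with P solving the DARE, the expression in condition 4 holds with equality (the cost-to-go identity x^T P x = x^T Q x + u^T R u + (x^+)^T P x^+), which is exactly what makes V(x) = x^T P x a Lyapunov function for the closed loop. A small numerical sanity check (illustrative, not a proof):

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[10.0]])

# Solve the DARE and compute the LQR gain by Riccati iteration.
P = Q
for _ in range(1000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

def decrease_gap(x):
    """x'Qx + u'Ru + (x+)'P(x+) - x'Px with u = -Kx.
    Zero (up to numerical error) when P solves the DARE."""
    u = -K @ x
    xp = A @ x + B @ u
    return float(x @ Q @ x + u @ R @ u + xp @ P @ xp - x @ P @ x)

gaps = [decrease_gap(x) for x in np.random.default_rng(0).normal(size=(100, 2))]
```

The gap being (numerically) zero everywhere confirms the stage cost is exactly the decrease of V along LQR closed-loop trajectories.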

5 Example: Constrained Double Integrator

Example: Constrained Double Integrator

Modified Double Integrator Example:
  A = [1 1; 0 1],   B = [0; 1],   Q = [1 0; 0 1],   R = 10
  X = {x \in R^2 : |x_i| \le 5, i = 1, 2},   U = {u \in R : |u| \le 1}
  X_f = O_\infty^{LQR},   P solves the DARE

Simulation Results

Observations:
- the controller with N = 3 is infeasible at t = 0
- same performance for the controllers with N = 5 and N = 10

6 Extensions and Generalizations

Extensions and Generalizations

The basic ideas of MPC can be extended to:
- MPC for reference tracking
- Robust MPC (robustness w.r.t. disturbances / model errors)
- Output-feedback MPC (state only partially observed)
- Fast MPC for applications with fast dynamics:
  - fast solvers for online MPC
  - explicit MPC via multiparametric programming
- MPC for nonlinear systems
- ...

Example: Something of Everything
Interpolated Tube-Based Robust Output Feedback MPC for Tracking

[Figure: closed-loop tracking trajectories]

- output feedback
- robust to bounded additive disturbances
- set-point tracking
- interpolation between different terminal constraint sets allows improving feasibility while maintaining high performance

Explicit Model Predictive Control

Solving the Quadratic Program via Multiparametric Programming:
- the stage quadratic program can be solved explicitly
- the solution is a piecewise-affine control law
- the region of attraction is partitioned into polyhedra, so the online controller reduces to a lookup table
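The online part of explicit MPC is then just a point-location problem over the polyhedral partition. A toy sketch of the lookup (the regions and gains below are made-up numbers for illustration, not the output of any mpQP solver):

```python
import numpy as np

# Hypothetical piecewise-affine law on two half-plane regions, standing in
# for the polyhedral partition an explicit-MPC tool would compute offline.
regions = [
    # (H, h, F, g): apply u = F x + g wherever H x <= h
    (np.array([[ 1.0, 0.0]]), np.array([0.0]), np.array([[-0.5, -1.0]]), np.array([0.0])),
    (np.array([[-1.0, 0.0]]), np.array([0.0]), np.array([[-0.3, -0.8]]), np.array([0.0])),
]

def pwa_control(x):
    """Lookup-table evaluation: find a region containing x, apply its affine law."""
    for H, h, F, g in regions:
        if np.all(H @ x <= h):
            return F @ x + g
    raise ValueError("x outside the region of attraction")

u = pwa_control(np.array([-2.0, 1.0]))
```

All optimization happens offline; online, the controller only locates the active region and evaluates one affine function, which is what makes explicit MPC attractive for fast dynamics.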

Conclusion

Model Predictive Control is:
- a very active research area, becoming a mature technology
- applicable to a wide range of control problems
- the only control methodology taking hard state and input constraints into account explicitly

Model Predictive Control is not:
- plug and play (it requires specialized knowledge)
- inherently better than other control techniques
- widely accepted in industry (yet)

Want to Learn More?

Recommended classes:
- EE 127: Optimization Models in Engineering (El Ghaoui)
- ME 190M: Introduction to Model Predictive Control (Borrelli)

Advanced classes:
- EE 221A: Linear System Theory
- EE 227B: Convex Optimization (El Ghaoui)
- ME 290J: Model Predictive Control for Linear and Hybrid Systems (Borrelli)

Bibliography

- Francesco Borrelli. ME290J: Model Predictive Control for Linear and Hybrid Systems, http://www.mpc.berkeley.edu/mpc-course-material
- M. Herceg et al. Multi-Parametric Toolbox 3.0, http://control.ee.ethz.ch/~mpt
- Maximilian Balandat. Interpolation in Output-Feedback Tube-Based Robust MPC. 50th IEEE Conference on Decision and Control and European Control Conference, 2011
- Maximilian Balandat. Constrained Robust Optimal Trajectory Tracking: Model Predictive Control Approaches. Diploma Thesis, Technische Universität Darmstadt, 2010