4F3 - Predictive Control


4F3 Predictive Control - Lecture 2 (slide 1/23)
Unconstrained Predictive Control
Jan Maciejowski, jmm@eng.cam.ac.uk

References

Predictive Control:
- J. M. Maciejowski, Predictive Control with Constraints, Prentice Hall UK, 2002.
- E. F. Camacho, Model Predictive Control, Springer UK, 2nd ed., 2003.

Discrete Time Control:
- G. F. Franklin et al., Digital Control of Dynamic Systems, Addison-Wesley, 3rd ed., 2003.
- K. J. Åström and B. Wittenmark, Computer Controlled Systems, Prentice Hall, 3rd ed., 2003.

Standing Assumptions

From now on, we will deal with the DT system:

    x(k+1) = A x(k) + B u(k)
    y(k)   = C x(k)
    z(k)   = H x(k)

Assumptions:
- (A, B) is stabilizable and (C, A) is detectable
- C = I (state feedback)
- H = C (all outputs/states are controlled variables)
- The goal is to regulate the states around the origin
- No delays, disturbances, model errors, noise, etc.

The final two assumptions will be relaxed in the final section of the course.

Linear Quadratic Regulator (LQR) Problem

Problem: Given an initial state x(0) at time k = 0, compute and implement an input sequence {u(0), u(1), ...} that minimizes the infinite horizon cost function

    \sum_{k=0}^{\infty} ( x(k)^T Q x(k) + u(k)^T R u(k) )

- The state weight Q ⪰ 0 penalizes non-zero states
- The input weight R ≻ 0 penalizes non-zero inputs
- Usually Q and R are diagonal and positive definite

LQR Problems - Infinite Horizon

The infinite horizon LQR problem has an infinite number of decision variables {u(0), u(1), ...}.

A simple, closed-form solution exists if:
- Q is positive semidefinite (Q ⪰ 0)
- R is positive definite (R ≻ 0)
- The pair (Q^{1/2}, A) is detectable

(See Mini-tutorial 7 in JMM or the 4F2 course notes.)

We will solve a finite horizon version of the LQR problem, with the same assumptions as above, for use in RHC.

Receding Horizon Control

[Figure: predicted input and output trajectories against the set point, re-planned at times k and k+1.]

1. Obtain a measurement of the current state x
2. Compute the optimal finite horizon input sequence {u_0*(x), u_1*(x), ..., u_{N-1}*(x)}
3. Implement the first part of the optimal input sequence: κ(x) := u_0*(x)
4. Return to step 1
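The four-step loop above can be sketched in a few lines of code. This is a minimal illustration, not part of the lecture: the system matrices, the gain, and the placeholder solver are invented for demonstration. In a real RHC scheme, step 2 solves the finite horizon optimal control problem derived in the rest of this lecture.

```python
import numpy as np

# Hypothetical system, for illustration only (not the lecture's example)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])

def solve_fhocp(x, N=5):
    """Placeholder for step 2: return a finite horizon input sequence
    {u_0, ..., u_{N-1}}. A fixed feedback gain stands in here for the
    true optimizer, which this lecture derives in closed form."""
    K = np.array([[-1.0, -1.5]])   # hypothetical stabilizing gain
    seq, xi = [], x
    for _ in range(N):
        ui = K @ xi
        seq.append(ui)
        xi = A @ xi + B @ ui       # predict forward with the model
    return seq

# The receding horizon loop: measure, optimize, apply first input, repeat
x = np.array([1.0, 0.0])
for k in range(10):
    U = solve_fhocp(x)             # step 2: optimal sequence from state x
    u = U[0]                       # step 3: kappa(x) := u_0*(x)
    x = A @ x + B @ u              # plant evolves; step 4: loop back
```

Only `U[0]` ever reaches the plant; the remaining N-1 planned inputs are discarded and re-optimized at the next sample.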

Finite Horizon Optimal Control

Problem: Given an initial state x = x(k), compute a finite horizon input sequence {u_0, u_1, ..., u_{N-1}} that minimizes the finite horizon cost function

    V(x, (u_0, ..., u_{N-1})) = x_N^T P x_N + \sum_{i=0}^{N-1} ( x_i^T Q x_i + u_i^T R u_i )

where

    x_0 = x
    x_{i+1} = A x_i + B u_i,   i = 0, 1, ..., N-1

Important: V(·) is a function of the initial state x and the first N inputs u_i, and not of the time index k or the predicted states x_i.

Finite Horizon Optimal Control

Some terminology:
- The vector x_i is the prediction of x(k+i) given the current state x(k) and the inputs u(k+i) = u_i for all i = 0, 1, ..., N-1
- The integer N is the control horizon
- The matrix P ∈ R^{n×n} is the terminal weight, with P ⪰ 0
- The stability and performance of a receding horizon control law based on this problem are determined by the parameters Q, R, P and N
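The cost V(x, (u_0, ..., u_{N-1})) can be evaluated directly by rolling the model forward, which is a useful sanity check on any later closed-form machinery. A small numpy sketch with invented matrices (not from the slides):

```python
import numpy as np

def finite_horizon_cost(x, U, A, B, Q, R, P):
    """Evaluate V(x, (u_0,...,u_{N-1})): simulate x_{i+1} = A x_i + B u_i,
    summing the stage costs, then add the terminal term x_N^T P x_N."""
    xi, V = x, 0.0
    for ui in U:
        V += xi @ Q @ xi + ui @ R @ ui
        xi = A @ xi + B @ ui
    return V + xi @ P @ xi

# Illustrative data (hypothetical, for demonstration only)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R, P = np.eye(2), np.eye(1), np.eye(2)
U = [np.zeros(1)] * 4                     # N = 4, all-zero inputs
V0 = finite_horizon_cost(np.array([1.0, 0.0]), U, A, B, Q, R, P)
# Here x_0 = [1, 0] is invariant under this particular A with zero input,
# so the cost is 4 stage costs of 1 plus a terminal cost of 1, i.e. 5.0
```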

Some Notation

Define the stacked vectors U ∈ R^{Nm} and X ∈ R^{Nn} as:

    U := [u_0; u_1; u_2; ...; u_{N-1}],   X := [x_1; x_2; x_3; ...; x_N]

Note that u_i ∈ R^m and x_i ∈ R^n, and that x = x_0 = x(k) is known.

Stacked outputs Y ∈ R^{Np} and controlled variables Z ∈ R^{Nq} can be defined in a similar way.

More Notation

The cost function is defined as

    V(x, U) := x_N^T P x_N + \sum_{i=0}^{N-1} ( x_i^T Q x_i + u_i^T R u_i )

The value function is defined as

    V*(x) := min_U V(x, U)

The optimal input sequence is defined as

    U*(x) := argmin_U V(x, U) =: {u_0*(x), u_1*(x), ..., u_{N-1}*(x)}

Derivation of the RHC Law

1. Compute prediction matrices Φ and Γ such that X = Φx + ΓU
2. Rewrite the cost function V(·) in terms of x and U
3. Compute the gradient ∇_U V(x, U)
4. Set ∇_U V(x, U) = 0 and solve for U*(x)

The RHC control law is the first part of this optimal sequence:

    u_0*(x) = [I_m 0 ... 0] U*(x)

When there are no constraints, all of this can be done analytically.

Constructing the Prediction Matrices

Want to find matrices Φ and Γ such that X = Φx + ΓU:

    x_1 = A x_0 + B u_0
    x_2 = A x_1 + B u_1 = A(A x_0 + B u_0) + B u_1 = A^2 x_0 + A B u_0 + B u_1
    x_3 = A x_2 + B u_2 = A^3 x_0 + A^2 B u_0 + A B u_1 + B u_2
    ...
    x_N = A^N x_0 + A^{N-1} B u_0 + ... + A B u_{N-2} + B u_{N-1}

Constructing the Prediction Matrices

Collect terms to get:

    [x_1; x_2; ...; x_N] = [A; A^2; ...; A^N] x_0
                         + [B        0        ... 0;
                            AB       B        ... 0;
                            ...
                            A^{N-1}B A^{N-2}B ... B] [u_0; u_1; ...; u_{N-1}]

Recalling that x := x_0, the prediction matrices Φ and Γ are:

    Φ := [A; A^2; ...; A^N],   Γ := [B 0 ... 0; AB B ... 0; ...; A^{N-1}B A^{N-2}B ... B]
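The block structure above translates directly into code. A sketch (the function name and the test system are my own, not from the lecture):

```python
import numpy as np

def prediction_matrices(A, B, N):
    """Build Phi and Gamma so that X = Phi x + Gamma U, where
    X = [x_1; ...; x_N] and U = [u_0; ...; u_{N-1}]."""
    n, m = B.shape
    # Phi stacks A, A^2, ..., A^N
    Phi = np.vstack([np.linalg.matrix_power(A, i) for i in range(1, N + 1)])
    # Gamma is block lower triangular with A^{i-j} B in block (i, j), j <= i
    Gamma = np.zeros((N * n, N * m))
    for i in range(N):                     # block row i predicts x_{i+1}
        for j in range(i + 1):             # block column j multiplies u_j
            Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    return Phi, Gamma

# Hypothetical system, to check the construction against direct simulation
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Phi, Gamma = prediction_matrices(A, B, N=3)
```

A quick way to validate such a construction is to compare `Phi @ x0 + Gamma @ U` against a step-by-step simulation of x_{i+1} = A x_i + B u_i for a random input sequence.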

Constructing the Cost Function

Recall that the cost function is

    V(x, U) := x_N^T P x_N + \sum_{i=0}^{N-1} ( x_i^T Q x_i + u_i^T R u_i )
             = x_0^T Q x_0 + X^T diag(Q, ..., Q, P) X + U^T diag(R, ..., R) U

Recalling that x := x_0, this can be rewritten in matrix form as

    V(x, U) = x^T Q x + X^T Ω X + U^T Ψ U

where Ω := diag(Q, ..., Q, P) and Ψ := diag(R, ..., R). Note that:
- P ⪰ 0 and Q ⪰ 0, so Ω ⪰ 0
- R ≻ 0, so Ψ ≻ 0

Constructing the Cost Function

Recall that:

    V(x, U) = x^T Q x + X^T Ω X + U^T Ψ U,   X = Φx + ΓU

Substituting:

    V(x, U) = x^T Q x + (Φx + ΓU)^T Ω (Φx + ΓU) + U^T Ψ U
            = x^T Q x + x^T Φ^T Ω Φ x + U^T Γ^T Ω Γ U + U^T Ψ U + x^T Φ^T Ω Γ U + U^T Γ^T Ω Φ x
            = x^T (Q + Φ^T Ω Φ) x + U^T (Ψ + Γ^T Ω Γ) U + 2 U^T Γ^T Ω Φ x

Solving for U*(x)

Recall that:

    V(x, U) = (1/2) U^T G U + U^T F x + x^T (Q + Φ^T Ω Φ) x

where

    G := 2(Ψ + Γ^T Ω Γ) ≻ 0   (since Ψ ≻ 0 and Ω ⪰ 0)
    F := 2 Γ^T Ω Φ

Important: This is a convex quadratic function of U.

The unique, global minimum occurs at the point where

    ∇_U V(x, U) = G U + F x = 0

The optimal input sequence is therefore

    U*(x) = -G^{-1} F x

Receding Horizon Control Law

The optimal input sequence is:

    U*(x) = -G^{-1} F x

The RHC law is defined by the first part of U*(x):

    u_0*(x) = [I_m 0 ... 0] U*(x)

Define:

    K_rhc := -[I_m 0 ... 0] G^{-1} F,   so that u = K_rhc x

- This is a time-invariant linear control law
- It approximates the optimal infinite horizon control law
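The whole derivation condenses into a short numpy routine. This is a sketch under stated assumptions: the function name and the example system are invented, and the sign convention u = K_rhc x matches the slide:

```python
import numpy as np

def rhc_gain(A, B, Q, R, P, N):
    """Unconstrained RHC gain: K_rhc = -[I_m 0 ... 0] G^{-1} F, with
    G = 2(Psi + Gamma^T Omega Gamma) and F = 2 Gamma^T Omega Phi."""
    n, m = B.shape
    # Prediction matrices: X = Phi x + Gamma U
    Phi = np.vstack([np.linalg.matrix_power(A, i) for i in range(1, N + 1)])
    Gamma = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    # Block-diagonal weights: Omega = diag(Q,...,Q,P), Psi = diag(R,...,R)
    Omega = np.kron(np.eye(N), Q)
    Omega[-n:, -n:] = P
    Psi = np.kron(np.eye(N), R)
    G = 2.0 * (Psi + Gamma.T @ Omega @ Gamma)
    F = 2.0 * Gamma.T @ Omega @ Phi
    Ustar_gain = -np.linalg.solve(G, F)    # U*(x) = -G^{-1} F x
    return Ustar_gain[:m, :]               # first block: u_0*(x) = K_rhc x

# Hypothetical example system (not the lecture's)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K_rhc = rhc_gain(A, B, np.eye(2), np.eye(1), np.eye(2), N=10)
```

Note that G and F depend only on the model and the weights, so `K_rhc` can be computed once offline; the online controller is just the matrix-vector product u = K_rhc x.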

Alternative Formulations

Many variations on our predictive control formulation exist:

- Optimize over predicted input changes Δu_i = u_i - u_{i-1}
  - V(·) is then a function of x(k) and u(k-1)
  - The decision variables are {Δu_0, Δu_1, ..., Δu_{N-1}}
  - This allows penalties on rapid control fluctuations
- Allow a larger horizon for prediction than for control
  - Inputs are constrained at the end of the control horizon, e.g. u_i = K x_i or u_i = 0 for all i ≥ N

Stability in Predictive Control

Warning: The RHC law u(k) = K_rhc x(k) might not be stabilizing.

- Stability of the RHC law depends on the proper choice of the parameters Q, R, P and N
- It is not obvious how to do this

Example: Consider the following open-loop unstable system with 2 states and 1 input:

    x(k+1) = [1.216 0.055; 0.221 0.9947] x(k) + [0.02763; 0.002763] u(k)

Stability in Predictive Control

Fix Q = P = I and compute ρ(A + B K_rhc) for various R and N:
- Vary N = 1, 2, ..., 50
- Vary R = 0, 0.02, ..., 10

[Figure: stability map over (N, R); black = stable, white = unstable.]

Not all values of R and N guarantee a stable closed loop; we will return to this in Lecture 5.
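The experiment on this slide can be reproduced numerically: for each (N, R) pair, build K_rhc and test whether ρ(A + B K_rhc) < 1. A sketch on a coarser grid than the slide's; the matrix entries below are an illustrative reading of the example system as transcribed, and the helper functions are my own:

```python
import numpy as np

def rhc_gain(A, B, Q, R, P, N):
    """K_rhc = -[I_m 0 ... 0] G^{-1} F, as derived earlier in the lecture."""
    n, m = B.shape
    Phi = np.vstack([np.linalg.matrix_power(A, i) for i in range(1, N + 1)])
    Gamma = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Omega = np.kron(np.eye(N), Q)
    Omega[-n:, -n:] = P
    Psi = np.kron(np.eye(N), R)
    G = 2 * (Psi + Gamma.T @ Omega @ Gamma)
    F = 2 * Gamma.T @ Omega @ Phi
    return -np.linalg.solve(G, F)[:m, :]

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

# Open-loop unstable example, entries as transcribed from the slide
A = np.array([[1.216, 0.055], [0.221, 0.9947]])
B = np.array([[0.02763], [0.002763]])
Q = P = np.eye(2)

# Coarse version of the slide's sweep: record stability for each (N, r)
stable = {}
for N in (1, 5, 10, 20):
    for r in (0.01, 0.1, 1.0, 10.0):
        K = rhc_gain(A, B, Q, np.array([[r]]), P, N)
        stable[(N, r)] = spectral_radius(A + B @ K) < 1
```

Plotting `stable` over a fine (N, R) grid reproduces the black/white stability map described on the slide.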

Equivalence between RHC and LQR

The solution to the infinite horizon LQR problem is:

    u(k) = K_lqr x(k)

where

    K_lqr = -(B^T P B + R)^{-1} B^T P A

and P is the solution to the Algebraic Riccati Equation (ARE):

    P = A^T P A - A^T P B (B^T P B + R)^{-1} B^T P A + Q

(See the 4F2 notes or JMM Mini-tutorial 7.)

If the terminal weight P in the finite horizon cost function V(·) is a solution to the ARE above, then:

    K_rhc = K_lqr
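This equivalence can be checked numerically: approximate the ARE solution, use it as the terminal weight P, and compare gains. A sketch with a hypothetical system; the fixed-point iteration below is a standard way to approximate the ARE solution, not a method prescribed by the lecture:

```python
import numpy as np

def riccati_fixed_point(A, B, Q, R, iters=1000):
    """Iterate P <- A^T P A - A^T P B (B^T P B + R)^{-1} B^T P A + Q.
    Under the standing assumptions this converges to the ARE solution."""
    P = Q.copy()
    for _ in range(iters):
        S = B.T @ P @ B + R
        P = A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(S, B.T @ P @ A) + Q
    return P

def rhc_gain(A, B, Q, R, P, N):
    """Finite horizon RHC gain from earlier: K_rhc = -[I_m 0 ... 0] G^{-1} F."""
    n, m = B.shape
    Phi = np.vstack([np.linalg.matrix_power(A, i) for i in range(1, N + 1)])
    Gamma = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Omega = np.kron(np.eye(N), Q)
    Omega[-n:, -n:] = P
    Psi = np.kron(np.eye(N), R)
    G = 2 * (Psi + Gamma.T @ Omega @ Gamma)
    F = 2 * Gamma.T @ Omega @ Phi
    return -np.linalg.solve(G, F)[:m, :]

# Hypothetical system (not the lecture's example)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)

P = riccati_fixed_point(A, B, Q, R)
K_lqr = -np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)  # u(k) = K_lqr x(k)

# With the ARE solution as terminal weight, K_rhc equals K_lqr for any N
K_rhc = rhc_gain(A, B, Q, R, P, N=3)
```

Intuitively, the terminal weight P = ARE solution makes the finite horizon cost-to-go coincide with the infinite horizon one, so truncating the horizon changes nothing about the first input.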

Implementation of the RHC Law

[Figure: feedback loop with the gain K_rhc driving the DT system; u(k) into the plant, x(k) fed back.]

- The matrix K_rhc is a time-invariant, linear state feedback gain
- The RHC law is given by u(k) = K_rhc x(k)
- Important: For our unconstrained problem, the control law can be written in closed form

Output Feedback Predictive Control

[Figure: controller composed of an observer and the gain K_rhc; the observer is driven by y(k) and u(k) and supplies the estimate x̂(k|k).]

- If C ≠ I, we have output feedback
- Use an observer to provide estimates x̂(k|k)
- The RHC law is the same, with x(k) replaced by x̂(k|k)