ESC794: Special Topics: Model Predictive Control
1 ESC794: Special Topics: Model Predictive Control
Nonlinear MPC Analysis: Part 1. Reference: Nonlinear Model Predictive Control (Ch. 3), Grüne and Pannek. Hanz Richter, Professor, Mechanical Engineering Department, Cleveland State University
2 Nonlinear MPC for Constant References
Here we consider equilibrium regulation of a nonlinear system under constraints. Consider the open-loop system to be x^+ = f(x,u) and the MPC feedback law u = µ(x). The resulting closed-loop system is x^+ = g(x) = f(x, µ(x)), and we assume that x_* is an equilibrium for this system, that is, g(x_*) = x_*. Also, assume that there is no running cost associated with equilibrium-holding: l(x_*, µ(x_*)) = 0. Finally, assume l(x,u) > 0 for (x,u) ≠ (x_*, µ(x_*)).
3 Optimal Control Problem (OCP_N) 3.1 and NMPC Algorithm
minimize over u ∈ U^N(x_0):  J_N(x_0, u) = Σ_{k=0}^{N−1} l(x_u(k, x_0), u(k))
subject to  x_u(0, x_0) = x_0,  x_u(k+1, x_0) = f(x_u(k, x_0), u(k))
Let u^*(k) be the open-loop solution sequence for OCP_N. The MPC feedback law is defined by µ_N(x(n)) = u^*(0). Note that n is the instant at which OCP_N is solved, using x_0 = x(n) (the feedback). The nominal closed-loop system resulting from this algorithm is x^+ = f(x, µ_N(x)).
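The receding-horizon logic above can be sketched in code. This is a minimal illustration, not software from the course: the scalar system x^+ = x + u, the running cost l(x,u) = x² + u², the discretized control grid and the brute-force OCP solver are all assumptions chosen only so the example is self-contained.

```python
import itertools

def f(x, u):           # toy scalar dynamics x+ = f(x, u)  (assumed example)
    return x + u

def stage_cost(x, u):  # l(x, u) = x^2 + u^2, zero only at the origin
    return x**2 + u**2

U_GRID = [u / 10.0 for u in range(-10, 11)]   # discretized admissible inputs

def solve_ocp(x0, N):
    """Brute-force the open-loop OCP_N over the control grid."""
    best_cost, best_seq = float("inf"), None
    for seq in itertools.product(U_GRID, repeat=N):
        x, cost = x0, 0.0
        for u in seq:
            cost += stage_cost(x, u)
            x = f(x, u)
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq, best_cost

def mpc_step(x, N=3):
    """NMPC feedback mu_N: solve OCP_N at x, apply only the first control."""
    u_star, _ = solve_ocp(x, N)
    return u_star[0]

# nominal closed loop x+ = f(x, mu_N(x))
x = 1.0
traj = [x]
for n in range(5):
    x = f(x, mpc_step(x))
    traj.append(x)
```

Only the first element u^*(0) of each open-loop solution is ever applied; the rest of the sequence is discarded and the problem is re-solved from the measured state, which is exactly the receding-horizon idea on this slide.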
4 Constraints and Related Definitions
Recalling the notational elements introduced in Handout 1, we consider X to be the set of admissible states, decided by us as designers. U(x) is the set of admissible inputs for state x, also specified by designers; the simplest case is when there is no dependence on x. Define the set of admissible pairs by Y = {(x,u) : x ∈ X and u ∈ U(x)}. Let N ∈ ℕ and x_0 ∈ X. A control sequence u ∈ U^N and the resulting trajectory x_u(k, x_0) are admissible for x_0 up to N if (x_u(k, x_0), u(k)) ∈ Y for k = 0, 1, 2, ..., N−1 and x_u(N, x_0) ∈ X. We need this separate condition on the last state because control sequences are one step shorter than the resulting trajectories. The set of all admissible control sequences for x_0 up to N is denoted by U^N(x_0).
5 Constraints and Related Definitions...
Let x_0 ∈ X and u ∈ U^∞. The control sequence and corresponding trajectory x_u(k, x_0) are called admissible for x_0 if they are admissible for x_0 up to N for every N ∈ ℕ. The set of all admissible sequences for x_0 is denoted U^∞(x_0). A feedback law µ : ℕ_0 × X → U is called admissible if µ(n, x) ∈ U^1(x) for all x ∈ X and all n ∈ ℕ_0.
Viability: We assume that for every x ∈ X there is always some u ∈ U(x) such that f(x, u) ∈ X (there is always some admissible control to apply that will not result in a state constraint violation at the next step).
Note: normally we require the entire predicted sequences out of OCP_N to be admissible, even though only the first control sequence element will be applied. Not much else we can do!
6 Viability...
Example: a car (point mass m) is situated between two walls separated by a distance d. The maximum acceleration and deceleration of the car are captured by the constraint |u| ≤ Ū. The maximum allowable speed is V. Suppose the car obeys the double-integrator law m ẍ = u. Sketch the viable subset of X ⊂ ℝ².
Note: G&P and other recent approaches to NMPC analysis make state admissibility a part of input admissibility. When we require u ∈ U^N(x_0), not only do we enforce input value constraints, but also that the resulting state trajectories are in X up to time N. This shifts the burden of preserving state constraints to the numerical solvers, avoiding theoretical complications. We just assume we have viability; it is up to the solver to find the best viable solution.
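For the car example, the viable set can be characterized by a braking-distance argument: a state (p, v) is viable only if the car can still brake to rest before reaching the wall it is moving towards. A minimal sketch of such a membership test follows; the numerical values of d, V, Ū (written u_max) and m are assumed placeholders, not values from the text.

```python
def is_viable(p, v, d=10.0, V=5.0, u_max=2.0, m=1.0):
    """Membership test for the viable set of the car-between-walls example:
    dynamics m*xdd = u with |u| <= u_max, state constraints 0 <= p <= d
    and |v| <= V.  Viable iff the car can brake to rest (max deceleration
    u_max/m) before hitting the wall it is moving towards."""
    if not (0.0 <= p <= d and abs(v) <= V):
        return False
    stop_dist = m * v * v / (2.0 * u_max)   # distance needed to brake to rest
    if v > 0:
        return p + stop_dist <= d           # must stop before the far wall
    if v < 0:
        return p - stop_dist >= 0.0         # must stop before the near wall
    return True
```

Sketching {(p, v) : is_viable(p, v)} gives the region bounded by the two parabolic braking curves inside the box [0, d] × [−V, V], which is the set the slide asks for.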
7 Admissibility of the NMPC Feedback Law
This theorem shows that NMPC feedback will generate admissible input sequences and corresponding trajectories, provided viability is assumed. (At every point x there is always some control u that may be applied without violating X at the next step; NMPC simply chooses u = µ_N(x) among them.)
Theorem (G&P 3.5): Consider OCP_N 3.1 with constraints u ∈ U^N(x_0) and suppose viability holds. Consider the nominal closed-loop system x^+ = f(x, µ_N(x)) with µ_N(x(n)) = u^*(0), and suppose x_{µ_N}(0) = x_0 ∈ X. Then
(x_{µ_N}(n), µ_N(x_{µ_N}(n))) ∈ Y for all n ∈ ℕ_0.
This key result leads to the recursive feasibility property, because the assumption x_0 ∈ X will be automatically satisfied upon subsequent applications of the MPC feedback law, as a consequence of this very same theorem.
8 Time-Varying Optimal Control Problem (OCP^n_N)
For a time-varying reference x_ref(n), the running cost l(n, x, u) is assumed to satisfy l(n, x_ref(n), u_ref(n)) = 0, where x_ref(n) has been generated by a suitable u_ref(n): x_ref(n+1) = f(x_ref(n), u_ref(n)). Also, the running cost must be positive away from the reference: l(n, x, u) > 0 for all n, all u ∈ U and all x ∈ X with x ≠ x_ref(n). A running cost that satisfies the above is
l(n, x, u) = ‖x − x_ref(n)‖² + λ ‖u − u_ref(n)‖², with λ ≥ 0.
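The quadratic tracking cost above is straightforward to code. A minimal sketch, where the reference signals x_ref and u_ref are assumed to be supplied as callables of the time index n (the names here are illustrative):

```python
def running_cost(n, x, u, x_ref, u_ref, lam=0.1):
    """l(n, x, u) = ||x - x_ref(n)||^2 + lam * ||u - u_ref(n)||^2.
    x, u are tuples/lists; x_ref(n), u_ref(n) return same-sized tuples."""
    ex = [xi - ri for xi, ri in zip(x, x_ref(n))]
    eu = [ui - ri for ui, ri in zip(u, u_ref(n))]
    return sum(e * e for e in ex) + lam * sum(e * e for e in eu)

# assumed example reference: x_ref(n) = (n, 0), generated by u_ref = 0
x_ref = lambda n: (float(n), 0.0)
u_ref = lambda n: (0.0,)
on_ref = running_cost(3, (3.0, 0.0), (0.0,), x_ref, u_ref)   # zero on the reference
```

By construction the cost vanishes exactly on the reference pair and is positive elsewhere, which is the condition the slide imposes on l(n, x, u).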
9 Time-Varying NMPC Algorithm
Measure x(n), set x_0 = x(n) and solve:
minimize over u ∈ U^N(x_0):  J_N(n, x_0, u) = Σ_{k=0}^{N−1} l(n+k, x_u(k, x_0), u(k))
subject to  x_u(0, x_0) = x_0,  x_u(k+1, x_0) = f(x_u(k, x_0), u(k))
Let u^*(k) be the open-loop solution sequence for OCP^n_N. The MPC feedback law is defined by µ_N(n, x(n)) = u^*(0). The nominal closed-loop system resulting from this algorithm is x^+ = f(x, µ_N(n, x)).
10 Terminal Constraint Sets
Terminal constraint sets provide a way to guarantee feasibility and closed-loop stability of NMPC. For a fixed terminal set X_0, a general terminal constraint is expressed as x_u(N, x(n)) ∈ X_0 for each u ∈ U^N(x_0). In words, we are asking admissible predicted sequences to yield a predicted state at the end of the horizon that lies in a desired set X_0. The terminal set need not be fixed; it may move with time: X_0(n). In this case, the terminal constraint has the form x_u(N, x(n)) ∈ X_0(n+N). Terminal sets are used to define feasible sets (of initial conditions), denoted X_N:
X_N = {x_0 ∈ X : there exists u ∈ U^N(x_0) such that x_u(N, x_0) ∈ X_0}
The corresponding admissible control sequences available for x_0 are:
U^N_{X_0}(x_0) = {u ∈ U^N(x_0) : x_u(N, x_0) ∈ X_0}
Similar definitions apply to the time-varying case; see Def. 3.9(ii) in G&P.
11 Terminal Costs and Weighted Costs - Everything Algorithm
The predicted state at the end of the horizon may be included as a separate term in the cost function, F(x_u(N, x(n))). Again, this will be used as part of the stability analysis. Weighted costs are generated by using a sequence of non-negative weights ω_k. A time-varying algorithm with weighted cost, terminal constraint and terminal cost (the "everything" algorithm) is: at time n, set x_0 = x(n) ∈ X_N and solve
minimize over u ∈ U^N_{X_0}(n, x_0):  J_N(n, x_0, u) = Σ_{k=0}^{N−1} ω_{N−k} l(n+k, x_u(k, x_0), u(k)) + F(n+N, x_u(N, x_0))
subject to  x_u(0, x_0) = x_0,  x_u(k+1, x_0) = f(x_u(k, x_0), u(k))
Note that the terminal state constraint is part of the admissibility requirement for u, and thus not listed as a "subject to" constraint. But when coding for numerical solutions, terminal constraints are listed among the "subject to" constraints.
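Evaluating the weighted cost with terminal term for a candidate control sequence can be sketched as follows. Everything here is user-supplied: f, l, F and the weight sequence w are assumed callables/lists, with w[k] playing the role of the weight applied at stage k.

```python
def weighted_cost(n, x0, u_seq, f, l, F, w):
    """Evaluate the 'everything' cost
        J_N = sum_k w[k] * l(n+k, x_k, u_k) + F(n+N, x_N)
    along the trajectory predicted from x0 under the sequence u_seq."""
    x, J = x0, 0.0
    for k, u in enumerate(u_seq):
        J += w[k] * l(n + k, x, u)   # weighted running cost at stage k
        x = f(x, u)                  # advance the prediction model
    return J + F(n + len(u_seq), x)  # terminal cost on the predicted end state
```

A numerical solver would minimize this quantity over admissible u_seq, enforcing the terminal set membership of the final predicted state as an explicit constraint, as the slide notes.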
12 Recursive Feasibility of NMPC
Consider the "everything" algorithm. The following holds (Corollary 3.13 in G&P) for all n ∈ ℕ_0:
x ∈ X_N(n) ⟹ f(x, µ_N(n, x)) ∈ X_{N−1}(n+1)
At any time n, if we solve the NMPC problem at some point of the current N-step feasible set, then x^+ under NMPC feedback will belong to the next (N−1)-step feasible set. When the terminal set X_0 is constant and contains just a single equilibrium point, it is clear that (N−1)-step feasibility implies N-step feasibility, since we may just apply the equilibrium control input again.
13 Bellman s Optimality Principle Suppose a driver has been following a bad route due to confusion. When he realizes the mistake, he will take the best route from where he is, regardless of what he did before. If he had started from that point, he would have followed that same best route. This empirical observation is reflected in Bellman s principle of optimality and will be presented in precise mathematical form as the MPC dynamic programming equality and inequalities. 13 / 23
14 Optimal Value Function and Optimal Sequences
Define the optimal value function (or minimum cost) by
V_N(n, x_0) = inf_{u ∈ U^N_{X_0}(n, x_0)} J_N(n, x_0, u)
We use inf instead of min because there may be no (admissible) u that achieves V_N exactly. For example, the function e^{−t} has infimum zero on t ≥ 0, but there is no t that produces zero. A control sequence u^* ∈ U^N_{X_0}(n, x_0) is optimal if it actually achieves the optimal value: J_N(n, x_0, u^*) = V_N(n, x_0).
15 Dynamic Programming Principle
(Th. 3.15 in G&P): For OCP^n_{N,e} with x_0 ∈ X_N(n) and all n, N ∈ ℕ_0:
i) V_N(n, x_0) = inf_{u ∈ U^K_{X_{N−K}}(n, x_0)} { Σ_{k=0}^{K−1} ω_{N−k} l(n+k, x_u(k, x_0), u(k)) + V_{N−K}(n+K, x_u(K, x_0)) }
In words: total optimal cost (start at x_0 at time n with horizon N) = infimum of (cost from x_0 at n over the first K steps + optimal cost from x_u(K, x_0) at time n+K, with horizon N−K). Note that the terminal cost does not appear in the DP equation, but the principle is valid for an OCP containing such a term.
Important: this principle applies to the predicted solutions of the OCP, not to their repeated use as feedback controls.
16 Dynamic Programming Principle...
ii) If an optimal sequence u^* ∈ U^N_{X_0}(n, x_0) exists for x_0, then
V_N(n, x_0) = Σ_{k=0}^{K−1} ω_{N−k} l(n+k, x_{u^*}(k, x_0), u^*(k)) + V_{N−K}(n+K, x_{u^*}(K, x_0))
[Diagram: at time n, solving the OCP from x_0 gives u^* at cost V_N; the first K stages contribute the stage cost Σ_{k=0}^{K−1} ω_{N−k} l(n+k, x_{u^*}(k, x_0), u^*(k)); at time n+K, solving the OCP from x_{u^*}(K, x_0) yields the tail of u^*, at cost-to-go V_{N−K}(n+K, x_{u^*}(K, x_0)).]
17 Dynamic Programming Principle...
(Corollary 3.16 in G&P): If u^* is an optimal solution to OCP^n_{N,e} for x_0 ∈ X_N(n) at time n with N ≥ 2, then for each K = 1, 2, ..., N−1, the shifted sequence
u^*_K(k) = u^*(k+K), k = 0, 1, 2, ..., N−K−1
is an optimal solution to the OCP for x_{u^*}(K, x_0) at time n+K, with horizon N−K.
(Theorem 3.17): Consider OCP^n_{N,e} with x_0 ∈ X_N(n) and assume an optimal sequence u^* exists. Then the NMPC feedback µ_N(n, x_0) = u^*(0) satisfies
µ_N(n, x_0) = arg min_{u ∈ U^1_{X_{N−1}}(n, x_0)} { ω_N l(n, x_0, u) + V_{N−1}(n+1, f(x_0, u)) }
Note that the arg min is taken over all 1-element sequences admissible at x_0 and time n.
18 Interpretation
At x_0 and time n, try all admissible 1-element sequences u. Recall what admissibility entails (constrained controls, states, terminal state). Each u results in a 1-step stage cost, a next state f(x_0, u) and an optimal cost-to-go V_{N−1}(n+1, f(x_0, u)). Tally all admissible u's and, for each, add the stage cost and the cost-to-go. Locate the minimum sum. Theorem 3.17 says that the u giving the minimum sum is the NMPC solution u^*(0).
The next corollary says the following: suppose we apply the NMPC feedback for some time. If we form a sequence with the applied feedbacks, it will be a solution to the one-shot OCP at the initial time.
(Corollary 3.18): Consider OCP^n_{N,e} with x_0 ∈ X and consider the set of admissible feedback laws µ_{N−K} for K = 0, 1, ..., N−1.
19 Interpretation...
Important: unless N = ∞, elements 2, 3, 4, ..., N of the predicted control and state sequences at time n do not match the feedback and closed-loop states at n+1, n+2, ..., n+N−1.
Example (Prob. 3.2 in G&P): Consider the system
x_1^+ = x_1 + 2x_2
x_2^+ = x_2 + 2u
with running cost l(x, u) = u², x_0 = [0 0]^T, x_N = [4 0]^T. For N = 4, use the dynamic programming principle to obtain the first predicted trajectory and optimal cost. We then use numerical simulations to illustrate the validity of the above theorems and corollaries.
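Since the system in Prob. 3.2 is linear and the cost is Σ u(k)², the answer obtained via dynamic programming can be cross-checked by solving the equivalent least-norm problem: with x_0 = 0, x(4) = C u_vec, where C stacks the columns A^j B, and we want x(4) = [4, 0]^T with minimal Σ u(k)². A sketch using only the standard library (this is a verification route, not the DP derivation the exercise asks for):

```python
# A = [[1,2],[0,1]], B = [[0],[2]]  =>  A^j B = [[4j],[2]], so
# x(4) = C u_vec with columns A^3 B, A^2 B, A B, B:
C = [[12.0, 8.0, 4.0, 0.0],
     [2.0, 2.0, 2.0, 2.0]]
b = [4.0, 0.0]   # terminal condition x(4) = [4, 0]^T

# least-norm solution u* = C^T (C C^T)^{-1} b  (minimizes sum u_k^2)
G = [[sum(C[i][k] * C[j][k] for k in range(4)) for j in range(2)] for i in range(2)]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
Ginv_b = [( G[1][1] * b[0] - G[0][1] * b[1]) / det,
          (-G[1][0] * b[0] + G[0][0] * b[1]) / det]
u_star = [C[0][k] * Ginv_b[0] + C[1][k] * Ginv_b[1] for k in range(4)]
cost = sum(u * u for u in u_star)

# forward check: simulate x1+ = x1 + 2 x2, x2+ = x2 + 2u from the origin
x1 = x2 = 0.0
for u in u_star:
    x1, x2 = x1 + 2 * x2, x2 + 2 * u
# u_star -> [0.3, 0.1, -0.1, -0.3], cost -> 0.2, final state -> (4.0, 0.0)
```

The minimum-energy sequence u^* = (0.3, 0.1, −0.1, −0.3) with optimal cost 0.2 steers the state exactly to [4, 0]^T, which is what the DP computation should reproduce.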
20 Dynamic Programming Principle, LTI DT Systems and the Finite-Horizon Quadratic Problem
Consider the LTI DT system x^+ = Ax + Bu with initial state x(0) = x_0 and quadratic cost
J_N(x_0, u) = Σ_{k=0}^{N−1} { x_u(k, x_0)^T Q x_u(k, x_0) + u(k)^T R u(k) } + x_u(N)^T Q_f x_u(N)
We use the DP principle to find a solution for the unconstrained OCP and the corresponding optimal cost function. Standard solvability assumptions: Q = Q^T ≥ 0, Q_f = Q_f^T ≥ 0, R = R^T > 0.
21 Finite-Horizon DLQR...
Apply the DP principle (Th. 3.15) to this case with K = 1 and time n. Then the stage cost term is associated with the initial state and control: x_0^T Q x_0 + u^T R u. With u as the initial control, the next state is x^+ = A x_0 + B u, so the cost-to-go to be minimized is V_{N−1}(n+1, A x_0 + B u). The DP principle then reduces to
V_N(n, x_0) = x_0^T Q x_0 + min_{u ∈ U^1} { u^T R u + V_{N−1}(n+1, A x_0 + B u) }
It is guessed that V_{N−1}(n+1, z) is a quadratic time-varying function: V_{N−1}(n+1, z) = z^T P_{n+1} z, where P_n is a sequence of symmetric, positive definite matrices. Substituting the guess:
V_N(n, x_0) = x_0^T Q x_0 + min_{u ∈ U^1} { u^T R u + (A x_0 + B u)^T P_{n+1} (A x_0 + B u) }
22 Finite-Horizon DLQR...
Perform the indicated minimization by equating the gradient to zero:
2 u^T R + 2 (A x_0 + B u)^T P_{n+1} B = 0
which gives the well-known optimal solution
u^* = −(R + B^T P_{n+1} B)^{−1} B^T P_{n+1} A x_0 ≜ −K_n x_0
Substituting this solution into the DP principle equation gives the Riccati backward recursion:
P_n = Q + A^T P_{n+1} A − A^T P_{n+1} B (R + B^T P_{n+1} B)^{−1} B^T P_{n+1} A
To solve the above, note that P_N = Q_f. This is used as the initial value to find P_{N−1}, P_{N−2}, ..., P_0 in that order, along with the optimal control sequence.
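The backward recursion above translates directly into code. A minimal NumPy sketch (the function name dlqr_finite is chosen here for illustration, not from the text), together with a sanity check on the scalar system x^+ = x + u with unit weights, whose steady-state gain is the known value (√5 − 1)/2:

```python
import numpy as np

def dlqr_finite(A, B, Q, R, Qf, N):
    """Backward Riccati recursion for the finite-horizon DLQR:
        P_N = Qf
        K_n = (R + B'P_{n+1}B)^{-1} B'P_{n+1}A
        P_n = Q + A'P_{n+1}A - A'P_{n+1}B K_n
    Returns the time-varying gains K_0, ..., K_{N-1} (u_n = -K_n x_n)."""
    P = Qf
    gains = [None] * N
    for n in reversed(range(N)):
        S = R + B.T @ P @ B
        K = np.linalg.solve(S, B.T @ P @ A)       # avoids an explicit inverse
        P = Q + A.T @ P @ A - A.T @ P @ B @ K     # Riccati backward step
        gains[n] = K
    return gains

# sanity check: scalar x+ = x + u, q = r = qf = 1
one = np.array([[1.0]])
gains = dlqr_finite(one, one, one, one, one, 50)
```

For this scalar case the recursion converges quickly, so the first gain K_0 is essentially the infinite-horizon gain (√5 − 1)/2 ≈ 0.618.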
23 Example: Finite-Horizon DLQR
Consider a double-integrator plant discretized with ZOH. We simulate the finite-horizon optimal regulator with identity weights and examine the effect of N.
[Figure: state trajectory (x_1, x_2) under the finite-time DLQR]
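A sketch of the simulation described on this slide. The sampling time dt = 0.1, horizon N = 50 and initial state [1, 0]^T are assumptions (the slide does not fix them); the ZOH discretization of the double integrator and the identity weights come from the slide.

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])     # ZOH-discretized double integrator
B = np.array([[dt * dt / 2.0], [dt]])
Q = np.eye(2); R = np.eye(1); Qf = np.eye(2)   # identity weights
N = 50

# backward Riccati pass: P_N = Qf, store the time-varying gains K_n
P, K = Qf, [None] * N
for n in reversed(range(N)):
    S = R + B.T @ P @ B
    K[n] = np.linalg.solve(S, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K[n]

# forward simulation of the closed loop x+ = (A - B K_n) x
x = np.array([[1.0], [0.0]])
for n in range(N):
    u = -K[n] @ x
    x = A @ x + B @ u
```

Rerunning with different N shows the effect the slide alludes to: short horizons leave a larger residual state at the end of the window, while longer horizons approach the infinite-horizon regulator's behavior.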
Stability and feasibility of state-constrained linear MPC without stabilizing terminal constraints Andrea Boccia 1, Lars Grüne 2, and Karl Worthmann 3 Abstract This paper is concerned with stability and
More informationTrajectory-based optimization
Trajectory-based optimization Emo Todorov Applied Mathematics and Computer Science & Engineering University of Washington Winter 2012 Emo Todorov (UW) AMATH/CSE 579, Winter 2012 Lecture 6 1 / 13 Using
More informationAnalysis and design of unconstrained nonlinear MPC schemes for finite and infinite dimensional systems
Analysis and design of unconstrained nonlinear MPC schemes for finite and infinite dimensional systems Lars Grüne Mathematisches Institut Universität Bayreuth 9544 Bayreuth, Germany lars.gruene@uni-bayreuth.de
More informationTube Model Predictive Control Using Homothety & Invariance
Tube Model Predictive Control Using Homothety & Invariance Saša V. Raković rakovic@control.ee.ethz.ch http://control.ee.ethz.ch/~srakovic Collaboration in parts with Mr. Mirko Fiacchini Automatic Control
More informationDissipativity. Outline. Motivation. Dissipative Systems. M. Sami Fadali EBME Dept., UNR
Dissipativity M. Sami Fadali EBME Dept., UNR 1 Outline Differential storage functions. QSR Dissipativity. Algebraic conditions for dissipativity. Stability of dissipative systems. Feedback Interconnections
More informationLecture 5 Linear Quadratic Stochastic Control
EE363 Winter 2008-09 Lecture 5 Linear Quadratic Stochastic Control linear-quadratic stochastic control problem solution via dynamic programming 5 1 Linear stochastic system linear dynamical system, over
More informationOn the Inherent Robustness of Suboptimal Model Predictive Control
On the Inherent Robustness of Suboptimal Model Predictive Control James B. Rawlings, Gabriele Pannocchia, Stephen J. Wright, and Cuyler N. Bates Department of Chemical & Biological Engineering Computer
More informationLinear-Quadratic-Gaussian (LQG) Controllers and Kalman Filters
Linear-Quadratic-Gaussian (LQG) Controllers and Kalman Filters Emo Todorov Applied Mathematics and Computer Science & Engineering University of Washington Winter 204 Emo Todorov (UW) AMATH/CSE 579, Winter
More informationFurther results on Robust MPC using Linear Matrix Inequalities
Further results on Robust MPC using Linear Matrix Inequalities M. Lazar, W.P.M.H. Heemels, D. Muñoz de la Peña, T. Alamo Eindhoven Univ. of Technology, P.O. Box 513, 5600 MB, Eindhoven, The Netherlands,
More informationOptimization. Yuh-Jye Lee. March 21, Data Science and Machine Intelligence Lab National Chiao Tung University 1 / 29
Optimization Yuh-Jye Lee Data Science and Machine Intelligence Lab National Chiao Tung University March 21, 2017 1 / 29 You Have Learned (Unconstrained) Optimization in Your High School Let f (x) = ax
More informationLearning Model Predictive Control for Iterative Tasks: A Computationally Efficient Approach for Linear System
Learning Model Predictive Control for Iterative Tasks: A Computationally Efficient Approach for Linear System Ugo Rosolia Francesco Borrelli University of California at Berkeley, Berkeley, CA 94701, USA
More informationTopic # Feedback Control
Topic #7 16.31 Feedback Control State-Space Systems What are state-space models? Why should we use them? How are they related to the transfer functions used in classical control design and how do we develop
More informationOptimal control of nonlinear systems with input constraints using linear time varying approximations
ISSN 1392-5113 Nonlinear Analysis: Modelling and Control, 216, Vol. 21, No. 3, 4 412 http://dx.doi.org/1.15388/na.216.3.7 Optimal control of nonlinear systems with input constraints using linear time varying
More informationExam. 135 minutes, 15 minutes reading time
Exam August 6, 208 Control Systems II (5-0590-00) Dr. Jacopo Tani Exam Exam Duration: 35 minutes, 5 minutes reading time Number of Problems: 35 Number of Points: 47 Permitted aids: 0 pages (5 sheets) A4.
More informationOptimization using Calculus. Optimization of Functions of Multiple Variables subject to Equality Constraints
Optimization using Calculus Optimization of Functions of Multiple Variables subject to Equality Constraints 1 Objectives Optimization of functions of multiple variables subjected to equality constraints
More informationMCE/EEC 647/747: Robot Dynamics and Control. Lecture 8: Basic Lyapunov Stability Theory
MCE/EEC 647/747: Robot Dynamics and Control Lecture 8: Basic Lyapunov Stability Theory Reading: SHV Appendix Mechanical Engineering Hanz Richter, PhD MCE503 p.1/17 Stability in the sense of Lyapunov A
More informationCE 191: Civil & Environmental Engineering Systems Analysis. LEC 17 : Final Review
CE 191: Civil & Environmental Engineering Systems Analysis LEC 17 : Final Review Professor Scott Moura Civil & Environmental Engineering University of California, Berkeley Fall 2014 Prof. Moura UC Berkeley
More informationChap. 3. Controlled Systems, Controllability
Chap. 3. Controlled Systems, Controllability 1. Controllability of Linear Systems 1.1. Kalman s Criterion Consider the linear system ẋ = Ax + Bu where x R n : state vector and u R m : input vector. A :
More informationProfessor Fearing EE C128 / ME C134 Problem Set 2 Solution Fall 2010 Jansen Sheng and Wenjie Chen, UC Berkeley
Professor Fearing EE C128 / ME C134 Problem Set 2 Solution Fall 21 Jansen Sheng and Wenjie Chen, UC Berkeley 1. (15 pts) Partial fraction expansion (review) Find the inverse Laplace transform of the following
More informationConstrained Optimization
1 / 22 Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University March 30, 2015 2 / 22 1. Equality constraints only 1.1 Reduced gradient 1.2 Lagrange
More informationNumerical Optimal Control Overview. Moritz Diehl
Numerical Optimal Control Overview Moritz Diehl Simplified Optimal Control Problem in ODE path constraints h(x, u) 0 initial value x0 states x(t) terminal constraint r(x(t )) 0 controls u(t) 0 t T minimize
More informationMPC Infeasibility Handling
MPC Handling Thomas Wiese, TU Munich, KU Leuven supervised by H.J. Ferreau, Prof. M. Diehl (both KUL) and Dr. H. Gräb (TUM) October 9, 2008 1 / 42 MPC General MPC Strategies 2 / 42 Linear Discrete-Time
More informationModel Predictive Control Short Course Regulation
Model Predictive Control Short Course Regulation James B. Rawlings Michael J. Risbeck Nishith R. Patel Department of Chemical and Biological Engineering Copyright c 2017 by James B. Rawlings Milwaukee,
More informationLMI Methods in Optimal and Robust Control
LMI Methods in Optimal and Robust Control Matthew M. Peet Arizona State University Lecture 02: Optimization (Convex and Otherwise) What is Optimization? An Optimization Problem has 3 parts. x F f(x) :
More informationNOTES ON FIRST-ORDER METHODS FOR MINIMIZING SMOOTH FUNCTIONS. 1. Introduction. We consider first-order methods for smooth, unconstrained
NOTES ON FIRST-ORDER METHODS FOR MINIMIZING SMOOTH FUNCTIONS 1. Introduction. We consider first-order methods for smooth, unconstrained optimization: (1.1) minimize f(x), x R n where f : R n R. We assume
More informationOptimal Control Theory
Optimal Control Theory The theory Optimal control theory is a mature mathematical discipline which provides algorithms to solve various control problems The elaborate mathematical machinery behind optimal
More informationPenalty and Barrier Methods General classical constrained minimization problem minimize f(x) subject to g(x) 0 h(x) =0 Penalty methods are motivated by the desire to use unconstrained optimization techniques
More informationCourse on Model Predictive Control Part III Stability and robustness
Course on Model Predictive Control Part III Stability and robustness Gabriele Pannocchia Department of Chemical Engineering, University of Pisa, Italy Email: g.pannocchia@diccism.unipi.it Facoltà di Ingegneria,
More informationLECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE
LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE CONVEX ANALYSIS AND DUALITY Basic concepts of convex analysis Basic concepts of convex optimization Geometric duality framework - MC/MC Constrained optimization
More information