Explicit constrained nonlinear MPC

Tor A. Johansen, Norwegian University of Science and Technology, Trondheim, Norway

Main topics:
• Approximate explicit nonlinear MPC based on orthogonal search trees
• Approximate explicit nonlinear MPC based on local mp-QP solutions
• Discussion and summary

Part I: Approximate explicit constrained nonlinear MPC based on orthogonal search trees
• Nonlinear constrained MPC formulation
• Theory: stability, optimality, convexity
• Approximate mp-NLP via search tree partitioning
• Example

MPC formulation

Nonlinear system: $x(t+1) = f(x(t), u(t))$

Optimization problem (similar to Chen and Allgöwer (1998)):

$$V^*(x(t)) = \min_U J(U, x(t)) = \sum_{k=0}^{N-1} \left( \|x_{t+k|t}\|_Q^2 + \|u_{t+k}\|_R^2 \right) + \|x_{t+N|t}\|_P^2$$

with $U = \{u_t, u_{t+1}, \ldots, u_{t+N-1}\}$, subject to $x_{t|t} = x(t)$ and

$$\begin{aligned}
y_{\min} &\le y_{t+k|t} \le y_{\max}, && k = 1, \ldots, N \\
u_{\min} &\le u_{t+k} \le u_{\max}, && k = 0, 1, \ldots, N-1 \\
x_{t+N|t} &\in \Omega \\
x_{t+k+1|t} &= f(x_{t+k|t}, u_{t+k}), && k = 0, 1, \ldots, N-1 \\
y_{t+k|t} &= C x_{t+k|t}, && k = 1, 2, \ldots, N
\end{aligned}$$
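
As a concrete illustration, here is a minimal single-shooting sketch of solving this finite-horizon problem for one value of $x(t)$ with scipy. It is only a sketch under stated assumptions: the dynamics $f$, the weights, the horizon and the input bounds are placeholders, and the output and terminal-set constraints are omitted for brevity.

```python
# Minimal single-shooting sketch (not the authors' implementation): solve
# V*(x) = min_U sum_k ||x_k||_Q^2 + ||u_k||_R^2 + ||x_N||_P^2 for one state x.
# f, N, Q, R, P and the input bounds are placeholder assumptions.
import numpy as np
from scipy.optimize import minimize

N = 15
Q, R, P = np.eye(2), np.eye(1), 10 * np.eye(2)
u_min, u_max = -2.0, 2.0

def f(x, u):
    # placeholder nonlinear dynamics x(t+1) = f(x(t), u(t))
    return np.array([0.9 * x[0] + 0.1 * x[1] + 0.05 * u[0],
                     0.1 * x[0] + 0.9 * x[1] + 0.1 * u[0] * x[0]])

def cost(U, x0):
    x, J = np.array(x0, dtype=float), 0.0
    for u in U.reshape(N, -1):
        J += x @ Q @ x + u @ R @ u      # stage cost ||x||_Q^2 + ||u||_R^2
        x = f(x, u)
    return J + x @ P @ x                # terminal cost ||x_N||_P^2

def solve_mpc(x0):
    res = minimize(cost, np.zeros(N), args=(x0,), bounds=[(u_min, u_max)] * N)
    return res.x[0], res.fun            # first input of U*(x0) and its cost

print(solve_mpc([0.5, -0.3]))
```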

MPC formulation, assumptions

A1. $P, Q, R \succ 0$.
A2. $y_{\min} < 0 < y_{\max}$ and $u_{\min} < 0 < u_{\max}$.
A3. The function $f$ is twice continuously differentiable, with $f(0,0) = 0$.

The compact and convex terminal set $\Omega$ is defined by $\Omega = \{x \in \mathbb{R}^n \mid x^T P x \le \alpha\}$, where $P \in \mathbb{R}^{n \times n}$ and $\alpha > 0$ will be specified shortly.

Terminal set

Similar to Chen and Allgöwer (1998):

A4. $(A, B)$ is stabilizable, where
$$A = \frac{\partial f}{\partial x}(0, 0), \qquad B = \frac{\partial f}{\partial u}(0, 0)$$

Let $K$ denote the associated LQ optimal gain matrix, such that $A_0 = A - BK$ is strictly Hurwitz.

Lemma 1. If $\kappa > 0$ is such that $A_0 + \kappa I$ is strictly Hurwitz, the Lyapunov equation
$$(A_0 + \kappa I)^T P (A_0 + \kappa I) - P = -Q - K^T R K$$
has a unique solution $P \succ 0$.
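
A small sketch of how $K$ and $P$ in Lemma 1 could be computed with scipy is given below. The matrices $A$, $B$, $Q$, $R$ and the constant $\kappa$ are placeholder assumptions, and the LQ gain is taken from the discrete-time Riccati equation (the slide only says "the associated LQ optimal gain matrix").

```python
# Minimal sketch (placeholder data, not from the slides): computing the LQ gain K
# and the terminal weight P of Lemma 1 with scipy.
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

A = np.array([[1.0, 0.1], [0.0, 1.0]])     # A = df/dx(0,0), placeholder
B = np.array([[0.0], [1.0]])               # B = df/du(0,0), placeholder
Q, R = np.eye(2), np.eye(1)
kappa = 0.05                               # assumed small margin

# LQ optimal gain (assumption: discrete-time Riccati equation), A0 = A - B K
S = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
A0 = A - B @ K

# Lyapunov equation (A0 + kappa I)^T P (A0 + kappa I) - P = -(Q + K^T R K)
Ak = A0 + kappa * np.eye(2)
P = solve_discrete_lyapunov(Ak.T, Q + K.T @ R @ K)
print(P)
```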

Terminal set, cont'd

Lemma 2. Let $P$ satisfy the conditions in Lemma 1. Then there exists a constant $\alpha > 0$ such that $\Omega = \{x \in \mathbb{R}^n \mid x^T P x \le \alpha\}$ satisfies:
1. $\Omega \subset C = \{x \in \mathbb{R}^n \mid u_{\min} \le -Kx \le u_{\max},\ y_{\min} \le Cx \le y_{\max}\}$.
2. The autonomous nonlinear system $x(t+1) = f(x(t), -Kx(t))$ is asymptotically stable for all $x(0) \in \Omega$, i.e. $\Omega$ is positively invariant.
3. The infinite-horizon cost for the system in 2, given by
$$J_\infty(x(t)) = \sum_{k=0}^{\infty} \left( \|x_{t+k|t}\|_Q^2 + \|Kx_{t+k|t}\|_R^2 \right),$$
satisfies $J_\infty(x) \le x^T P x$ for all $x \in \Omega$.

Multi-parametric nonlinear program (mp-NLP)

This and similar optimization problems can be formulated in the concise form
$$V^*(x) = \min_U J(U, x) \quad \text{subject to} \quad G(U, x) \le 0$$
This defines an mp-NLP, since it is an NLP in $U$ parameterized by $x = x(t)$. Define the set of $N$-step feasible initial states as
$$X_F = \{x \in \mathbb{R}^n \mid G(U, x) \le 0 \text{ for some } U \in \mathbb{R}^{Nm}\}$$
We seek an explicit state feedback representation of the solution function $U^*(x)$ to this problem: implementation without real-time optimization!

How to represent $U^*(x)$ and $X_F$?

• mp-LP/mp-QP: exact piecewise linear (PWL) solution, on a polyhedral partition of $X_F$.
• mp-NLP: no structural properties that can be exploited to determine a finite representation of the exact solution. Keep the PWL representation as an approximation, as it is convenient to work with!

Orthogonal partitioning via a search tree

A PWL approximation is flexible and leads to highly efficient real-time search. How to construct the partition, and the local linear approximations?

PWL approximate mp-NLP approach

How to construct the partition, and the local linear approximations?
• Construct a feasible local linear approximation to the mp-NLP solution from NLP solutions at a finite number of samples in every region of the partition.
• Construct the partition such that the approximate solution has a cost close to the optimal cost for all $x \in X_F$.
This is easier under convexity assumptions.

mp-NLP and convexity

A5. $J$ and $G$ are jointly convex for all $(U, x) \in \mathbb{U} \times X_F$, where $\mathbb{U} = [u_{\min}, u_{\max}]^N$ is the set of admissible inputs.

The optimal cost function $V^*$ can now be shown to have some regularity properties (Mangasarian and Rosen, 1964):

Theorem 1. $X_F$ is a closed convex set, and $V^* : X_F \to \mathbb{R}$ is convex and continuous.

Convexity of $X_F$ and $V^*$ is a direct consequence of A5, while continuity of $V^*$ can be established under weaker conditions (Fiacco, 1983).

Optimal cost function bounds

Consider the vertices $V = \{v_1, v_2, \ldots, v_M\}$ of any bounded polyhedron $X_0 \subseteq X_F$. Define the affine function $\bar V(x) = \bar V_0 x + \bar l_0$ as the solution to the following LP:
$$\min_{\bar V_0,\, \bar l_0} \sum_{i=1}^{M} \left( \bar V_0 v_i + \bar l_0 \right) \quad \text{subject to} \quad \bar V_0 v_i + \bar l_0 \ge V^*(v_i), \ \text{for all } i \in \{1, 2, \ldots, M\}$$
Likewise, define the convex PWL function
$$\underline V(x) = \max_{i \in \{1, 2, \ldots, M\}} \left( V^*(v_i) + \nabla^T V^*(v_i)(x - v_i) \right)$$
Theorem 2. Consider any bounded polyhedron $X_0 \subseteq X_F$. Then $\underline V(x) \le V^*(x) \le \bar V(x)$ for all $x \in X_0$.

These bounds can be computed by solving $M$ NLPs.
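
The sketch below shows one way these bounds could be evaluated numerically: the affine upper bound is fitted by an LP over the vertex values $V^*(v_i)$, and the PWL lower bound is the maximum of the tangent planes. The vertices, vertex costs and gradients are placeholder data (in practice they come from the $M$ vertex NLPs and their sensitivities).

```python
# Minimal sketch (placeholder data): the affine upper bound Vbar(x) = a.x + b with
# a.v_i + b >= V*(v_i), fitted by an LP, and the PWL lower bound from tangent planes.
import numpy as np
from scipy.optimize import linprog

verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # vertices v_i
Vstar = np.array([0.0, 1.0, 1.0, 2.5])        # V*(v_i), from the vertex NLPs
grads = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])  # grad V*(v_i)

M, n = verts.shape
# variables z = [a, b]; minimize sum_i (a.v_i + b) s.t. a.v_i + b >= V*(v_i)
c = np.concatenate([verts.sum(axis=0), [M]])
res = linprog(c, A_ub=-np.hstack([verts, np.ones((M, 1))]), b_ub=-Vstar,
              bounds=[(None, None)] * (n + 1))
a, b = res.x[:n], res.x[n]

def V_upper(x):                               # affine upper bound
    return a @ np.asarray(x) + b

def V_lower(x):                               # max of tangent planes (lower bound if V* is convex)
    x = np.asarray(x)
    return np.max(Vstar + ((x - verts) * grads).sum(axis=1))

print(V_lower([0.5, 0.5]), V_upper([0.5, 0.5]))
```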

Feasible local linear approximation

Similar to Bemporad and Filippi (2003):

Lemma 3. Consider any bounded polyhedron $X_0 \subseteq X_F$ with vertices $\{v_1, v_2, \ldots, v_M\}$. If $K_0$ and $g_0$ solve the convex NLP ($\beta$ arbitrary)
$$\min_{K_0, g_0} \sum_{i=1}^{M} \left( J(K_0 v_i + g_0, v_i) - V^*(v_i) + \beta \|K_0 v_i + g_0 - U^*(v_i)\|_2^2 \right)$$
$$\text{subject to} \quad G(K_0 v_i + g_0, v_i) \le 0, \quad i \in \{1, 2, \ldots, M\},$$
then $\hat U_0(x) = K_0 x + g_0$ is feasible for the mp-NLP for all $x \in X_0$.

This can be computed from the solution of the same NLPs as above.
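
A rough sketch of solving this vertex NLP for $(K_0, g_0)$ with scipy is shown below; the cost $J$, constraint $G$, vertices and the sampled optimal values $U^*(v_i)$, $V^*(v_i)$ are all placeholder assumptions.

```python
# Minimal sketch (placeholder problem data): the feasible affine law of Lemma 3,
# U_hat(x) = K0 x + g0, obtained from the NLPs at the region's vertices.
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

verts = np.array([[0.2, 0.2], [0.6, 0.2], [0.2, 0.6], [0.6, 0.6]])  # v_i
Ustar = np.array([-0.1, -0.4, -0.3, -0.7])   # U*(v_i) (scalar input, placeholder)
Vstar = np.array([0.05, 0.20, 0.15, 0.50])   # V*(v_i) (placeholder)
beta = 1e-3

def J(u, x):                                  # placeholder cost
    return x @ x + 0.5 * u**2

def G(u, x):                                  # placeholder constraints, G(u, x) <= 0
    return np.array([u - 1.0, -1.0 - u])

def objective(z):                             # z = [K0, g0]
    K0, g0 = z[:2], z[2]
    return sum(J(K0 @ v + g0, v) - V + beta * (K0 @ v + g0 - U) ** 2
               for v, U, V in zip(verts, Ustar, Vstar))

def all_constraints(z):
    K0, g0 = z[:2], z[2]
    return np.concatenate([G(K0 @ v + g0, v) for v in verts])

con = NonlinearConstraint(all_constraints, -np.inf, 0.0)
res = minimize(objective, np.zeros(3), constraints=[con])
K0, g0 = res.x[:2], res.x[2]
print(K0, g0)
```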

min " ^V (x) + V (x) = x2x Sub-optimality bound Since ^U (x) defined in Lemma 3 is feasible in X, it follows that is an upper bound on V Λ (x) in X such that for all x 2 X ^V (x) = J( ^U (x);x) where» ^V (x) V Λ (x)» " 14

Approximate mp-NLP algorithm

1. Initialize the partition to the whole hypercube, i.e. $P = \{X\}$. Mark the hypercube $X$ as unexplored.
2. Select any unexplored hypercube $X_0 \in P$. If no such hypercube exists, the algorithm terminates successfully.
3. Solve the NLP for $x$ fixed to each of the vertices of the hypercube $X_0$ (some of these NLPs may have been solved in earlier steps). If all solutions are feasible, go to step 4. Otherwise, compute the size of $X_0$. If it is smaller than some tolerance, mark $X_0$ explored and infeasible. Otherwise, go to step 8.
4. If $0 \in X_0$, choose $\hat U_0(x) = -Kx$ and go to step 5. Otherwise, go to step 6.
5. If $X_0 \subseteq \Omega$, mark $X_0$ as explored and go to step 2. Otherwise, go to step 8.
6. Compute a feasible affine state feedback $\hat U_0$, as an approximation to be used in $X_0$. If no feasible solution was found, go to step 8.
7. Compute the error bound $\varepsilon_0$. If $\varepsilon_0 \le \bar\varepsilon$, mark $X_0$ as explored and feasible, and go to step 2. Otherwise, go to step 8.
8. Split the hypercube $X_0$ into two hypercubes $X_1$ and $X_2$ using some heuristic rule. Mark both unexplored, remove $X_0$ from $P$, add $X_1$ and $X_2$ to $P$, and go to step 2.
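
The skeleton below illustrates the overall flow of this algorithm in Python. It is only an illustration: the region object and the helpers standing in for steps 3, 6 and 7 are assumptions, and the shortcut of steps 4-5 (using the terminal feedback near the origin) is folded into fit_affine_law for brevity.

```python
# Minimal sketch (an illustration, not the authors' code) of the hypercube-splitting
# loop.  X0.split(), X0.size(), solve_vertex_nlps, fit_affine_law and error_bound
# are assumed helpers standing in for steps 3-8.
from collections import deque

def approximate_mp_nlp(X, eps_bar, min_size,
                       solve_vertex_nlps, fit_affine_law, error_bound):
    unexplored = deque([X])                       # step 1: partition P = {X}
    accepted = []                                 # explored, feasible regions
    while unexplored:                             # step 2
        X0 = unexplored.popleft()
        data = solve_vertex_nlps(X0)              # step 3: NLPs at the vertices
        if data is None:                          # some vertex NLP infeasible
            if X0.size() >= min_size:
                unexplored.extend(X0.split())     # step 8: split and revisit
            continue                              # else mark explored, infeasible
        law = fit_affine_law(X0, data)            # steps 4-6 (incl. -Kx near origin)
        if law is not None and error_bound(X0, law, data) <= eps_bar:
            accepted.append((X0, law))            # step 7: tolerance met
        else:
            unexplored.extend(X0.split())         # step 8
    return accepted                               # PWL law on a union of hypercubes
```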

Example, Chen and Allgöwer (1998)

$$\dot x_1 = x_2 + u(1 + x_1)/2, \qquad \dot x_2 = x_1 + u(1 - 4x_2)/2$$

The origin is an unstable equilibrium point, with a stabilizable linearization. We discretize this system using a sampling interval $T_s = 0.1$. The control objective is given by the weighting matrices
$$Q = T_s I_{2 \times 2}, \qquad R = 2 T_s$$
The terminal penalty is given by the solution to the Lyapunov equation,
$$P = \begin{pmatrix} 16.5926 & 11.5926 \\ 11.5926 & 16.5926 \end{pmatrix}$$
and a positively invariant terminal region is
$$\Omega = \{x \in \mathbb{R}^2 \mid x^T P x \le 0.7\}$$
The prediction horizon is $N = 1.5/T_s = 15$.
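
For reference, a sketch of this example's data in Python is given below. The RK4 discretization is an assumption (the slide only states $T_s = 0.1$); the weights and terminal data are as stated on the slide.

```python
# Minimal sketch of the Chen and Allgower example data.  The RK4 discretization is
# an assumption; weights, terminal penalty and terminal region as on the slide.
import numpy as np

Ts, N = 0.1, 15

def f_cont(x, u):
    return np.array([x[1] + u * (1 + x[0]) / 2,
                     x[0] + u * (1 - 4 * x[1]) / 2])

def f_disc(x, u):                       # x(t+1) = f(x(t), u(t)) via one RK4 step
    k1 = f_cont(x, u)
    k2 = f_cont(x + Ts / 2 * k1, u)
    k3 = f_cont(x + Ts / 2 * k2, u)
    k4 = f_cont(x + Ts * k3, u)
    return x + Ts / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

Q = Ts * np.eye(2)                      # state weight
R = 2 * Ts                              # scalar input weight
P = np.array([[16.5926, 11.5926],
              [11.5926, 16.5926]])      # terminal penalty
alpha = 0.7                             # terminal region {x : x' P x <= alpha}

def in_terminal_set(x):
    return x @ P @ x <= alpha

print(f_disc(np.array([0.5, -0.3]), 0.2), in_terminal_set(np.array([0.1, 0.1])))
```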

Example, partition with 15 regions

[Figure: orthogonal partition of the state space over $x_1, x_2 \in [-1, 1]$.]

Real-time computations: 14 arithmetic operations per sample.

Example, solution and cost

[Figure: optimal solution $u^*(x_1, x_2)$ to the left and approximate solution $\hat u(x_1, x_2)$ to the right.]

[Figure: optimal cost function $V^*(x_1, x_2)$ to the left and sub-optimal cost $\hat V(x_1, x_2)$ to the right.]

Example, simulations

[Figure: closed-loop simulations of $x_1(t)$, $x_2(t)$ and $u(t)$.]

Properties of the algorithm

The PWL approximation generated is denoted $\hat U : X \to \mathbb{R}^{Nm}$, where $X$ is the union of hypercubes where a feasible solution has been found. It is an inner approximation to $X_F$, and the approximation accuracy is determined by the tolerance in step 3.

Theorem 3. Assume the partitioning rule in step 8 guarantees that the error decreases by some minimum amount or factor at each split. The algorithm terminates with an approximate solution function $\hat U$ that is feasible and satisfies
$$0 \le J(\hat U(x), x) - V^*(x) \le \bar\varepsilon$$
for all $x \in X$.

Stability

The exact MPC will make the origin asymptotically stable (Chen and Allgöwer, 1998). We show below that asymptotic stability is inherited by the approximate MPC under an assumption on the tolerance $\bar\varepsilon$:

A6. Assume the partition $P$ generated by Algorithm 1 has the property that for any hypercube $X_0 \in P$ that does not contain the origin,
$$\varepsilon_0 \le \gamma \min_{x \in X_0} \|x\|_P^2,$$
where $\gamma \in (0, 1)$ is given.

Theorem 4. The origin is an asymptotically stable equilibrium point, for all $x(0) \in X$.

Non-convex problems

If convexity does not hold, global optimization is generally needed if theoretical guarantees are required:
1. The NLPs in steps 3 and 6 must be solved using global optimization.
2. The computation of the error bound $\varepsilon_0$ in step 7 must rely on global optimization, or on convex underestimation.
3. The heuristics in step 8 may be modified to be efficient also in the non-convex case.

Key references

T. A. Johansen, "Approximate explicit receding horizon control of constrained nonlinear systems," submitted for publication, 2002.

H. Chen and F. Allgöwer, "A quasi-infinite horizon nonlinear model predictive control scheme with guaranteed stability," Automatica, Vol. 34, pp. 1205-1217, 1998.

A. V. Fiacco, Introduction to Sensitivity and Stability Analysis in Nonlinear Programming, Orlando, FL: Academic Press, 1983.

T. A. Johansen and A. Grancharova, "Approximate explicit constrained linear model predictive control via orthogonal search tree," IEEE Trans. Automatic Control, accepted, 2003.

A. Grancharova and T. A. Johansen, "Approximate explicit model predictive control incorporating heuristics," IEEE Conf. Computer Aided Control Design, Glasgow, 2002.

A. Bemporad and C. Filippi, "Suboptimal explicit RHC via approximate quadratic programming," J. Optimization Theory and Applications, Vol. 117, No. 1, 2003.

Part II: Approximate explicit constrained nonlinear MPC based on local mp-QP solutions
• Nonlinear constrained MPC formulation
• Multi-parametric nonlinear programming (mp-NLP)
• An approximate mp-NLP algorithm
• Example: compressor surge control

Nonlinear Constrained Model Predictive Control

Nonlinear dynamic optimization problem:
$$J(u[0,T], x[0,T]) \triangleq \int_0^T l(x(t), u(t), t)\,dt + S(x(T), T)$$
subject to the inequality constraints, for $t \in [0, T]$,
$$u_{\min} \le u(t) \le u_{\max}, \qquad g(x(t), u(t)) \le 0,$$
and the ordinary differential equation (ODE) given by
$$\frac{d}{dt} x(t) = f(x(t), u(t))$$
with given initial condition $x(0) \in X \subset \mathbb{R}^n$.

Multi-parametric Nonlinear Programming

The input $u(t)$ is assumed to be piecewise constant and parameterized by a vector $U \in \mathbb{R}^p$ such that $u(t) = \mu(t, U) \in \mathbb{R}^r$ is piecewise continuous. The solution to the ODE is $x(t) = \phi(t, U, x(0))$, and the problem is reformulated as
$$V(U, x(0)) \triangleq \int_0^T l(\phi(t, U, x(0)), \mu(t, U), t)\,dt + S(\phi(T, U, x(0)), T)$$
subject to
$$G(U, x(0)) \triangleq \begin{pmatrix} \tilde G(U, x(0)) \\ U - U_{\max} \\ U_{\min} - U \end{pmatrix} \le 0$$
This is an NLP in $U$ parameterized by $x(0)$, i.e. an mp-NLP. Solving the mp-NLP amounts to determining an explicit representation of the solution function $U^*(x(0))$.

Explicit solution approach

mp-LP and mp-QP solvers are well established: $U^*(x)$ is piecewise linear (PWL). In mp-NLPs (unlike mp-QPs) there are no structural properties to exploit, so we seek only an approximation to the solution $U^*(x)$.

Main idea: locally approximate the mp-NLP with a convex mp-QP, and iteratively sub-partition the space until sufficient agreement between the mp-QP and mp-NLP solutions is achieved. This leads to a PWL approximation, which allows MPC implementation with a PWL function evaluation rather than solving an NLP in real time.

Assumptions

Consider any $x_0 \in X$, and a solution candidate $U_0$ with associated Lagrange multipliers $\lambda_0$ and optimal active set $A_0$.

Assumption A1. $V$ and $G$ are twice continuously differentiable in a neighborhood of $(U_0, x_0)$.
Assumption A2. The sufficient first- and second-order conditions (Karush-Kuhn-Tucker) for a local minimum at $U_0$ hold.
Assumption A3. The linear independence constraint qualification (LICQ) holds, i.e. the active constraint gradients $\nabla_U G_{A_0}(U_0, x_0)$ are linearly independent.
Assumption A4. Strict complementary slackness holds, i.e. $(\lambda_0)_{A_0} > 0$.

Basic result (Fiacco, 1983)

The optimal solution $U^*(x)$ depends in a continuous manner on $x$: it may be meaningful to seek a PWL approximation to $U^*(x)$!

Theorem 1. If A1-A3 hold, then:
1. $U_0$ is a local isolated minimum.
2. For $x$ in a neighborhood of $x_0$, there exists a unique continuous function $U^*(x)$ satisfying $U^*(x_0) = U_0$ and the sufficient conditions for a local minimum.
3. Assume in addition that A4 holds, and let $x$ be in a neighborhood of $x_0$. Then $U^*(x)$ is differentiable and the associated Lagrange multipliers $\lambda^*(x)$ exist, and are unique and continuously differentiable. Finally, the set of active constraints is unchanged, and the active constraint gradients are linearly independent at $U^*(x)$.

Local mp-QP approximation

Minimize
$$\tilde V(U, x) \triangleq \frac{1}{2}(U - U_0)^T H_0 (U - U_0) + (D_0 + F_0 (x - x_0))(U - U_0) + Y_0(x, x_0)$$
subject to
$$G_0 (U - U_0) \le E_0 (x - x_0) + T_0$$
The cost and constraints are defined by the matrices
$$H_0 \triangleq \nabla^2_{UU} V(U_0, x_0), \qquad F_0 \triangleq \nabla^2_{xU} V(U_0, x_0), \qquad D_0 \triangleq \nabla_U V(U_0, x_0),$$
$$G_0 \triangleq \begin{pmatrix} \nabla_U \tilde G(U_0, x_0) \\ I \\ -I \end{pmatrix}, \qquad E_0 \triangleq \begin{pmatrix} -\nabla_x \tilde G(U_0, x_0) \\ 0 \\ 0 \end{pmatrix}, \qquad T_0 \triangleq \begin{pmatrix} -\tilde G(U_0, x_0) \\ U_{\max} - U_0 \\ U_0 - U_{\min} \end{pmatrix},$$
$$Y_0(x, x_0) \triangleq V(U_0, x_0) + \nabla_x V(U_0, x_0)(x - x_0) + \frac{1}{2}(x - x_0)^T \nabla^2_{xx} V(U_0, x_0)(x - x_0)$$
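
If exact second-order sensitivities are not available, the mp-QP data can be approximated numerically. The sketch below builds $H_0$, $D_0$ and $F_0$ by finite differences on a placeholder cost $V(U, x)$; the function, step sizes and the point $(U_0, x_0)$ are assumptions.

```python
# Minimal sketch (placeholder cost and data): finite-difference approximations of
# H0 = grad^2_UU V, D0 = grad_U V and F0 = grad^2_xU V at (U0, x0).
import numpy as np

def V(U, x):                                  # placeholder twice-differentiable cost
    return 0.5 * U @ U + (x @ x) * np.sum(U) + x @ x

def grad_U(U0, x0, h=1e-6):
    p = len(U0)
    return np.array([(V(U0 + h * np.eye(p)[i], x0)
                      - V(U0 - h * np.eye(p)[i], x0)) / (2 * h) for i in range(p)])

def hess_UU(U0, x0, h=1e-4):
    p = len(U0)
    H = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            ei, ej = h * np.eye(p)[i], h * np.eye(p)[j]
            H[i, j] = (V(U0 + ei + ej, x0) - V(U0 + ei, x0)
                       - V(U0 + ej, x0) + V(U0, x0)) / h**2
    return H

U0, x0 = np.array([0.1, -0.2]), np.array([0.3, 0.4])
H0, D0 = hess_UU(U0, x0), grad_U(U0, x0)
# F0: differentiate the U-gradient with respect to each component of x
h = 1e-4
F0 = np.column_stack([(grad_U(U0, x0 + h * np.eye(2)[k])
                       - grad_U(U0, x0 - h * np.eye(2)[k])) / (2 * h) for k in range(2)])
print(H0, D0, F0, sep="\n")
```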

KKT conditions

Let the PWL solution to the mp-QP be denoted $U_{QP}(x)$ with associated Lagrange multipliers $\lambda_{QP}(x)$:
$$H_0 (U_{QP}(x) - U_0) + F_0 (x - x_0) + D_0 + G_0^T \lambda_{QP}(x) = 0$$
$$\mathrm{diag}(\lambda_{QP}(x)) \left( G_0 (U_{QP}(x) - U_0) - E_0 (x - x_0) - T_0 \right) = 0$$
$$\lambda_{QP}(x) \ge 0$$
$$G_0 (U_{QP}(x) - U_0) - E_0 (x - x_0) - T_0 \le 0$$

Assumption A5. For an optimal active set $A_0$, the matrix $G_{0, A_0}$ has full row rank (LICQ) and $Z_{0, A_0}^T H_0 Z_{0, A_0} \succ 0$, where the columns of $Z_{0, A_0}$ form a basis for $\mathrm{null}(G_{0, A_0})$.

QP solution for fixed active set

Theorem 2. Let $X$ be a polyhedral set with $x_0 \in X$. If assumption A5 holds, then there exists a unique solution to the mp-QP, and the critical region where this solution is optimal is given by the polyhedral set
$$X_{0, A_0} \triangleq \left\{ x \in X \;\middle|\; \lambda_{QP, A_0}(x) \ge 0,\ \ G_0 (U_{QP, A_0}(x) - U_0) \le E_0 (x - x_0) + T_0 \right\}$$
Hence, $U_{QP}(x) = U_{QP, A_0}(x)$ and $\lambda_{QP}(x) = \lambda_{QP, A_0}(x)$ if $x \in X_{0, A_0}$, and the solution $U_{QP}$ is a continuous, PWL function of $x$ defined on a polyhedral partition of $X$.

Comparing mp-QP and mp-NLP solutions

Local quadratic approximation makes sense!

Theorem 3. Let $x_0 \in X$ and suppose there exists a $U_0$ satisfying assumptions A1-A4. Then, for $x$ in a neighbourhood of $x_0$,
$$U_{QP}(x) - U^*(x) = O(\|x - x_0\|_2^2), \qquad \lambda_{QP}(x) - \lambda^*(x) = O(\|x - x_0\|_2^2)$$

max!, T G(U QP (x); x) x2x Approximation criteria How small neighborhoods X ρ X are necessary? The solution error bound is defined as (w is a weight), ρ jw max T (μ(;u QP (x)) μ(;u Λ (x)))j x2x The cost error bound is defined as, " jv max (U QP (x); x) V Λ (x)j x2x The maximum constraint (! violation is is a weight) ffi 34

Optimal cost function bounds

Consider the vertices $V = \{v_1, v_2, \ldots, v_M\}$ of any bounded polyhedron $X_0$. Define the affine function $\bar V(x) \triangleq \bar V_0 x + \bar l_0$ as the solution to the LP
$$\min_{\bar V_0,\, \bar l_0} \sum_{i=1}^{M} \left( \bar V_0 v_i + \bar l_0 \right) \quad \text{subject to} \quad \bar V_0 v_i + \bar l_0 \ge V^*(v_i), \ \text{for all } i \in \{1, 2, \ldots, M\}$$
Likewise, define the convex piecewise affine function
$$\underline V(x) \triangleq \max_{i \in \{1, 2, \ldots, M\}} \left( V^*(v_i) + \nabla^T V^*(v_i)(x - v_i) \right)$$
Theorem 4. If $G$ and $V$ are jointly convex (in $U$ and $x$) on the bounded polyhedron $X_0$, then $\underline V(x) \le V^*(x) \le \bar V(x)$ for all $x \in X_0$.

max " 1 V (U QP (x); x) V (x) = x2x max " 2 V (x) V (UQP (x); x) = x2x Optimal cost function bounds Theorem 4 gives the following bounds on the cost function error where " 1» V Λ (x) V (U QP (x); x)» " 2 36

Approximate mp-NLP algorithm

Step 1. Let $X_0 := X$.
Step 2. Select $x_0$ as the Chebyshev center of $X_0$, by solving an LP.
Step 3. Compute $U_0 = U^*(x_0)$ by solving an NLP.
Step 4. Compute the local mp-QP problem at $(U_0, x_0)$.
Step 5. Estimate the approximation errors $\varepsilon_0$, $\rho_0$ and $\delta_0$ on $X_0$.
Step 6. If $\varepsilon_0 > \bar\varepsilon$, $\rho_0 > \bar\rho$, or $\delta_0 > \bar\delta$, then sub-partition $X_0$ into polyhedral regions.
Step 7. Select a new $X_0$ from the partition. If no further sub-partitioning is needed, go to step 8. Otherwise, repeat Steps 2-7 until the tolerances $\bar\varepsilon$, $\bar\rho$ and $\bar\delta$ are respected in all polyhedral regions in the partition of $X$.
Step 8. For all sub-partitions $X_0$, solve the mp-QP.
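
Step 2 is a standard LP; a sketch for a polyhedron $\{x : Ax \le b\}$ is given below (the example region is a placeholder unit box).

```python
# Minimal sketch of Step 2: the Chebyshev center of {x : A x <= b} via the standard
# LP  max r  s.t.  a_i.x + r*||a_i|| <= b_i.  The example region is a placeholder.
import numpy as np
from scipy.optimize import linprog

def chebyshev_center(A, b):
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    c = np.zeros(A.shape[1] + 1); c[-1] = -1.0        # maximize r = minimize -r
    res = linprog(c, A_ub=np.hstack([A, norms]), b_ub=b,
                  bounds=[(None, None)] * A.shape[1] + [(0, None)])
    return res.x[:-1], res.x[-1]                      # center x0 and radius r

A = np.vstack([np.eye(2), -np.eye(2)])                # unit box [-1, 1]^2
b = np.ones(4)
x0, r = chebyshev_center(A, b)
print(x0, r)                                          # expect ~[0, 0] and ~1
```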

Example: Compressor surge control

Consider the following 2nd-order compressor model with $x_1$ being normalized mass flow, $x_2$ normalized pressure and $u$ normalized mass flow through a close-coupled valve in series with the compressor:
$$\dot x_1 = B \left( \Psi_e(x_1) - x_2 - u \right), \qquad \dot x_2 = \frac{1}{B} \left( x_1 - \Phi(x_2) \right)$$
The following compressor and valve characteristics are used:
$$\Psi_e(x_1) = \psi_{c0} + H \left( 1 + 1.5 \left( \frac{x_1}{W} - 1 \right) - 0.5 \left( \frac{x_1}{W} - 1 \right)^3 \right), \qquad \Phi(x_2) = \gamma\, \mathrm{sign}(x_2) \sqrt{|x_2|}$$
with $\gamma = 0.5$, $B = 1$, $H = 0.18$, $\psi_{c0} = 0.3$ and $W = 0.25$ (Gravdahl and Egeland, 1997).
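
A direct Python transcription of this model is sketched below; the integration step in the final line is an assumption, not part of the slide.

```python
# Minimal sketch: the compressor surge model with the stated parameters
# (Gravdahl and Egeland, 1997).  The Euler step size in the last line is assumed.
import numpy as np

gamma, B, H, psi_c0, W = 0.5, 1.0, 0.18, 0.3, 0.25

def psi_e(x1):                          # compressor characteristic
    z = x1 / W - 1.0
    return psi_c0 + H * (1.0 + 1.5 * z - 0.5 * z**3)

def phi(x2):                            # valve characteristic
    return gamma * np.sign(x2) * np.sqrt(abs(x2))

def f(x, u):                            # dx/dt = f(x, u)
    x1, x2 = x
    return np.array([B * (psi_e(x1) - x2 - u),
                     (x1 - phi(x2)) / B])

x_star = np.array([0.40, 0.60])         # desired (unstable) operating point
print(x_star + 0.02 * f(x_star, 0.0))   # one Euler step with u = 0
```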

Example, cont'd

The control objective is to avoid surge, i.e. to stabilize the system. This may be formulated as
$$l(x, u) = \alpha (x - x^*)^T (x - x^*) + \kappa u^2, \qquad S(x) = \beta (x - x^*)^T (x - x^*) + R v^2$$
with $\alpha, \beta, \kappa, R \ge 0$ ($v$ is a slack variable for the pressure constraint below), and the setpoint $x_1^* = 0.4$, $x_2^* = 0.6$ corresponds to an unstable equilibrium point. We have chosen $\alpha = 1$, $\beta = 0$ and $\kappa = 0.08$. The horizon is chosen as $T = 12$, which is split into $N = p = 15$ equal-sized intervals. Valve capacity requires the constraint $0 \le u(t) \le 0.3$ to hold, and the pressure constraint $x_2 \ge 0.4 - v$ avoids operation too far to the left of the operating point.

Partition of the state space

[Figure: polyhedral partitions of the state space over $(x_1, x_2)$.]

The PWL mapping with 379 regions can be represented as a binary search tree with 329 nodes, of depth 9. Real-time evaluation of the controller therefore requires 49 arithmetic operations in the worst case, and 1367 numbers need to be stored in real-time computer memory.
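
The binary search tree evaluation mentioned here amounts to a few hyperplane tests followed by one affine function evaluation; a toy sketch (not the slides' 329-node tree) is shown below.

```python
# Minimal sketch (a toy two-region tree, not the slides' data): evaluating a PWL
# control law stored as a binary search tree of hyperplane tests.
import numpy as np

class Node:
    def __init__(self, a=None, b=None, left=None, right=None, K=None, g=None):
        self.a, self.b = a, b            # internal node: test a.x <= b
        self.left, self.right = left, right
        self.K, self.g = K, g            # leaf node: affine law u = K x + g

def evaluate(node, x):
    while node.K is None:                # one inner product and comparison per level
        node = node.left if node.a @ x <= node.b else node.right
    return node.K @ x + node.g           # one affine evaluation at the leaf

# two regions separated by the hyperplane x1 <= 0, each with its own affine law
tree = Node(a=np.array([1.0, 0.0]), b=0.0,
            left=Node(K=np.array([-0.5, -0.1]), g=0.02),
            right=Node(K=np.array([-0.8, -0.2]), g=0.0))
print(evaluate(tree, np.array([0.3, -0.1])))
```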

Optimal solution and cost

[Figure: exact solution $u^*(x_1, x_2)$ and approximate solution $\hat u(x_1, x_2)$; exact cost $V^*(x_1, x_2)$ and approximate cost $\hat V(x_1, x_2)$.]

Simulation with controller activated at t = 2

[Figure: closed-loop responses $x_1(t)$, $x_2(t)$ and $u(t)$.]

Key references

T. A. Johansen, "On multi-parametric nonlinear programming and explicit nonlinear model predictive control," IEEE Conf. Decision and Control, Las Vegas, NV, 2002.

A. V. Fiacco, Introduction to Sensitivity and Stability Analysis in Nonlinear Programming, Orlando, FL: Academic Press, 1983.

P. Tøndel, T. A. Johansen and A. Bemporad, "An algorithm for multi-parametric quadratic programming and explicit MPC solutions," Automatica, accepted, March 2003.

A. Bemporad, M. Morari, V. Dua and E. N. Pistikopoulos, "The explicit linear quadratic regulator for constrained systems," Automatica, Vol. 38, pp. 3-20, 2002.

P. Tøndel, T. A. Johansen and A. Bemporad, "Evaluation of piecewise affine control via binary search tree," Automatica, accepted, May 2003.

A. Bemporad and C. Filippi, "Suboptimal explicit RHC via approximate quadratic programming," J. Optimization Theory and Applications, Vol. 117, No. 1, 2003.

Part III: Discussion and summary

Explicit linear MPC (including robust/hybrid):
• Solved using mp-LP/mp-QP/mp-MILP
• Exact piecewise linear representation
• Solvers efficient for constrained MPC problems with a few states and inputs and reasonable horizons
• Real-time implementation as a binary search tree: μs computation times using fixed-point arithmetic

Discussion and summary, cont'd

Explicit nonlinear MPC - emerging mp-NLP theory/technology based on mp-QP/mp-LP and NLP solvers.

Bad news: non-convexity is challenging (not unexpected...).

Good news: (i) the approximation theory (including stability) seems to be fairly straightforward to establish; (ii) the real-time computational complexity of the PWL approximation is comparable to linear MPC!

Advantages and disadvantages

+ Efficient real-time computations for small problems
+ Simple real-time SW implementation: verifiability, reliability, safety-critical applications, small systems
+ Fixed-point arithmetic sufficient
- Requires more real-time computer memory
- Does not scale well to large problems
- Reconfigurability is not straightforward

The explicit solution is probably most interesting for nonlinear MPC problems: they are often smaller scale, and SW reliability and computational complexity issues are more pronounced.

Applications

• Embedded systems - low-level control, small systems, may be safety-critical (automotive, medical, marine, etc.)
• Control allocation problems - static/quasi-dynamic, over-actuated mechanical systems (aerospace, marine, MEMS, ...)
• Low-level process control - MPC-like functionality at the unit-process level replacing PID loops???